Dataset columns:

- id: int64 (393k–2.82B)
- repo: string (68 distinct values)
- title: string (length 1–936)
- body: string (length 0–256k)
- labels: string (length 2–508)
- priority: string (3 distinct values)
- severity: string (3 distinct values)
2,615,410,909
flutter
`NavigationBar` should allow fixed size centered destinations in landscape mode
### Use case

In landscape mode on taller devices, having 3 to 4 destinations presents a bad UI by expanding all destinations to take the full width. Without a fixed-width option, we are forced to use `BottomNavigationBar`, which might be deprecated in the future as per this [issue](https://github.com/flutter/flutter/issues/124885).

### Proposal

`NavigationBar` should allow fixed-size centered destinations in landscape mode, which also complies with the [M3 specs](https://m3.material.io/components/navigation-bar/guidelines#76819d14-8cba-4fd8-96dd-1ef5dca268d4):

> In landscape-oriented mobile screens, navigation bar destinations can retain the same spacing used in portrait mode, or be equally distributed across the container.

In the older `BottomNavigationBar` we used to have `BottomNavigationBarLandscapeLayout.centered` to serve the same purpose, which it seems was not ported to `NavigationBar` during the M3 migration.
c: new feature,framework,f: material design,c: proposal,P2,team-design,triaged-design
low
Minor
2,615,431,567
vscode
Remote Tunnel VSCode Server install fails if WSL default shell is not Bash
Does this issue occur when all extensions are disabled?: N/A

- VS Code Version:
- OS Version:

Steps to Reproduce:

1. Set the WSL2 default shell (Ubuntu 24.04.1 LTS) to something other than Bash. [I primarily use non-POSIX shells like Fish, Nushell, and Elvish.]
2. Enable Remote Tunnel with a Microsoft account on Windows 11 with WSL2.
3. Sign in to the Microsoft account on vscode.dev, connect to the Remote Tunnel, and see that you can open folders and edit files from the Windows filesystem as expected.
4. Select "Connect to WSL" from the Command Palette, and watch VS Code attempt to install/start the remote code server instance.
5. Watch the crash loop begin: ![Image](https://github.com/user-attachments/assets/6e799b77-e8ac-4a77-9250-df8be4ac91dc) The error in the attached screenshot is effectively fatal, reloading the window over and over.
6. Close the tab.
7. Change the default shell for WSL2 to Bash (or possibly some other POSIX-compliant shell).
8. Reopen vscode.dev, connect to the Remote Tunnel, select "Connect to WSL", watch the remote server installation succeed, then verify that you can open directories and edit files as expected.

I'm not sure what's going on behind the scenes with that code server install, but since launching `code` from WSL2 in any of the non-POSIX shells I use works just fine (i.e., delete `~/.vscode-server`, launch `code`, watch the magic happen), it suggests an assumption that the default shell will be Bash (or at least something POSIX-compliant) for the installation and connection via Remote Tunnel.
bug,info-needed,remote-tunnel
low
Critical
2,615,445,218
ui
[feat]: sidebar color option
### Feature description

The 'themes' page does not include a sidebar color option. Therefore, the sidebar does not follow the custom color theme. I think it needs options like these:

```
--sidebar-background: 222.2 84% 4.9%;
--sidebar-accent: 217.2 32.6% 17.5%;
```

https://ui.shadcn.com/themes

### Affected component/components

SideBar

### Before submitting

- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs
area: request
low
Minor
2,615,450,505
kubernetes
Add QueueingHint for VolumeAttachment deletion events in CSI volume limits plugin
### What would you like to be added?

In #128337, we added the `VolumeAttachment` deletion event to the CSI volume limits plugin's `EventsToRegister`. We should add a corresponding `QueueingHintFn` to make requeueing more efficient by allowing the plugin to filter out useless events.

/sig scheduling

### Why is this needed?

This is part of the effort to add `QueueingHints` to all scheduler plugins (#118893) to improve scheduling performance by preventing unnecessary retries.
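To illustrate what such a hint does, here is a minimal sketch (in Python for brevity; the real plugin is written in Go, and the names and event shape below are illustrative, not the scheduler framework's actual API): on a `VolumeAttachment` deletion, the plugin only requeues a pod if the freed attachment belongs to a CSI driver the pod actually uses.

```python
# Illustrative sketch only: a queueing hint inspects the event and decides
# whether a previously rejected pod is worth retrying, instead of blindly
# requeueing on every VolumeAttachment deletion in the cluster.
QUEUE, QUEUE_SKIP = "Queue", "QueueSkip"

def is_schedulable_after_va_deletion(pod_csi_drivers, deleted_attachment):
    # A deletion frees a volume slot only for the driver it belonged to;
    # deletions for unrelated CSI drivers cannot unblock this pod.
    if deleted_attachment["attacher"] in pod_csi_drivers:
        return QUEUE
    return QUEUE_SKIP

hint = is_schedulable_after_va_deletion(
    {"ebs.csi.aws.com"}, {"attacher": "ebs.csi.aws.com"})
assert hint == QUEUE
```

The efficiency win is exactly the skipped case: events returning `QueueSkip` never wake the pod's scheduling retry.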
sig/scheduling,sig/storage,kind/feature,needs-triage
low
Major
2,615,474,557
tauri
[bug] failed overriding protocol method -[WKUIDelegate webView:requestMediaCapturePermissionForOrigin:initiatedByFrame:type:decisionHandler:]: method not found
### Describe the bug npm run tauri dev > tauri-app@0.1.0 tauri > tauri dev Running BeforeDevCommand (`npm run dev`) > tauri-app@0.1.0 dev > vite Warn Waiting for your frontend dev server to start on http://localhost:1420/... Re-optimizing dependencies because lockfile has changed VITE v5.4.10 ready in 3410 ms ➜ Local: http://localhost:1420/ Info Watching /Users/peipeng.yuan.o/workspace/tauri-app/src-tauri for changes... Compiling objc-sys v0.3.5 Compiling objc2 v0.5.2 Compiling block2 v0.5.1 Compiling objc2-foundation v0.2.2 Compiling objc2-app-kit v0.2.2 Compiling objc2-web-kit v0.2.2 Compiling window-vibrancy v0.5.2 Compiling muda v0.15.1 Compiling wry v0.46.3 Compiling tauri-runtime-wry v2.1.2 Compiling tauri v2.0.6 Compiling tauri-plugin-shell v2.0.2 Compiling tauri-app v0.1.0 (/Users/peipeng.yuan.o/workspace/tauri-app/src-tauri) Finished `dev` profile [unoptimized + debuginfo] target(s) in 54.44s thread 'main' panicked at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/__macro_helpers/declare_class.rs:339:21: failed overriding protocol method -[WKUIDelegate webView:requestMediaCapturePermissionForOrigin:initiatedByFrame:type:decisionHandler:]: method not found note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread 'main' panicked at core/src/panicking.rs:221:5: panic in a function that cannot unwind stack backtrace: 0: 0x1029e90b6 - std::backtrace_rs::backtrace::libunwind::trace::hae28e9d8ee6f0f1b at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5 1: 0x1029e90b6 - std::backtrace_rs::backtrace::trace_unsynchronized::h97ac52ce5b001ab9 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5 2: 0x1029e90b6 - std::sys::backtrace::_print_fmt::h121f81f3e644bc1c at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/sys/backtrace.rs:66:9 3: 0x1029e90b6 - 
<std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hcaf66bc4c0c453df at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/sys/backtrace.rs:39:26 4: 0x102a0aedb - core::fmt::rt::Argument::fmt::hfeba5accca924a15 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/fmt/rt.rs:177:76 5: 0x102a0aedb - core::fmt::write::hc9c5f1836b413410 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/fmt/mod.rs:1178:21 6: 0x1029e5b92 - std::io::Write::write_fmt::h49df280499063c09 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/io/mod.rs:1823:15 7: 0x1029ea3c8 - std::sys::backtrace::BacktraceLock::print::he68bab4b1e212a89 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/sys/backtrace.rs:42:9 8: 0x1029ea3c8 - std::panicking::default_hook::{{closure}}::h52c0b2f44f6107c5 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:266:22 9: 0x1029ea00e - std::panicking::default_hook::h5a6cf31501c161b2 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:293:9 10: 0x1029eb0e3 - std::panicking::rust_panic_with_hook::hda4640ee332466e9 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:797:13 11: 0x1029ea982 - std::panicking::begin_panic_handler::{{closure}}::haa3060694b34ea3d at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:664:13 12: 0x1029e9599 - std::sys::backtrace::__rust_end_short_backtrace::h8eb44913cfe71457 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/sys/backtrace.rs:170:18 13: 0x1029ea5fc - rust_begin_unwind at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5 14: 0x102a3d25c - core::panicking::panic_nounwind_fmt::runtime::h192b7a91100d1ba6 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:112:18 15: 0x102a3d25c - core::panicking::panic_nounwind_fmt::hfed0a2f12e4318b1 at 
/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:122:5 16: 0x102a3d30a - core::panicking::panic_nounwind::h659854746b9fc37d at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:221:5 17: 0x102a3d525 - core::panicking::panic_cannot_unwind::hd982c2e1ccbf0ef4 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:310:5 18: 0x10242e9ca - tao::platform_impl::platform::app_delegate::did_finish_launching::h1e4a275343ef8f00 at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tao-0.30.3/src/platform_impl/macos/app_delegate.rs:103:1 19: 0x7fff20495463 - <unknown> 20: 0x7fff20530ed9 - <unknown> 21: 0x7fff20530e54 - <unknown> 22: 0x7fff204666ce - <unknown> 23: 0x7fff211d9c18 - <unknown> 24: 0x7fff22cb0d80 - <unknown> 25: 0x7fff22cb0ad2 - <unknown> 26: 0x7fff22cadc71 - <unknown> 27: 0x7fff22cad8c7 - <unknown> 28: 0x7fff21205366 - <unknown> 29: 0x7fff212051d6 - <unknown> 30: 0x7fff26282853 - <unknown> 31: 0x7fff26281f6e - <unknown> 32: 0x7fff2627acd3 - <unknown> 33: 0x7fff286fa012 - <unknown> 34: 0x7fff22ca7f70 - <unknown> 35: 0x7fff22ca62a5 - <unknown> 36: 0x7fff22c985c9 - <unknown> 37: 0x10246aff1 - <() as objc::message::MessageArguments>::invoke::h99676cb39a047150 at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc-0.2.7/src/message/mod.rs:128:17 38: 0x102469bbd - objc::message::platform::send_unverified::h00753cbb8fff113e at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc-0.2.7/src/message/apple/mod.rs:27:9 39: 0x1020f0295 - objc::message::send_message::h699c43327ef8383e at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc-0.2.7/src/message/mod.rs:178:5 40: 0x1020f0295 - tao::platform_impl::platform::event_loop::EventLoop<T>::run_return::h9fc62e1d792f5527 at 
/Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tao-0.30.3/src/platform_impl/macos/event_loop.rs:225:16 41: 0x1020f1161 - tao::platform_impl::platform::event_loop::EventLoop<T>::run::hd25c501f4a144ef6 at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tao-0.30.3/src/platform_impl/macos/event_loop.rs:192:21 42: 0x102161a88 - tao::event_loop::EventLoop<T>::run::hb2375ddcff1fbe5c at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tao-0.30.3/src/event_loop.rs:215:5 43: 0x101bb2c1b - <tauri_runtime_wry::Wry<T> as tauri_runtime::Runtime<T>>::run::hc6d4738815a0d29e at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-runtime-wry-2.1.2/src/lib.rs:2726:5 44: 0x101bb931a - tauri::app::App<R>::run::h7d298a0e7089e2ef at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.0.6/src/app.rs:1129:5 45: 0x101bb9df5 - tauri::app::Builder<R>::run::h3f745a7e6b0402fa at /Users/peipeng.yuan.o/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.0.6/src/app.rs:2052:5 46: 0x10218d3f9 - tauri_app_lib::run::h53568fe4fe82bff6 at /Users/peipeng.yuan.o/workspace/tauri-app/src-tauri/src/lib.rs:9:5 47: 0x10192cc89 - tauri_app::main::hf333b231e8f5fa62 at /Users/peipeng.yuan.o/workspace/tauri-app/src-tauri/src/main.rs:5:5 48: 0x10192cd6e - core::ops::function::FnOnce::call_once::h36fa293f72247906 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:250:5 49: 0x10192cc71 - std::sys::backtrace::__rust_begin_short_backtrace::h7639879eaae778c1 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/sys/backtrace.rs:154:18 50: 0x10192cd24 - std::rt::lang_start::{{closure}}::h4cda7cd945a37f68 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs:164:18 51: 0x1029e141f - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::h57d1f2389e2919e9 at 
/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:284:13 52: 0x1029e141f - std::panicking::try::do_call::h47b836309ad2925a at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:554:40 53: 0x1029e141f - std::panicking::try::hcc79b4069713f450 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:518:19 54: 0x1029e141f - std::panic::catch_unwind::h34090fdb34bc5e00 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panic.rs:345:14 55: 0x1029e141f - std::rt::lang_start_internal::{{closure}}::hb9c7217a630c43c9 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs:143:48 56: 0x1029e141f - std::panicking::try::do_call::h43e7c27d402b50a4 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:554:40 57: 0x1029e141f - std::panicking::try::h4d79f623873fbc60 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:518:19 58: 0x1029e141f - std::panic::catch_unwind::hb32f8cfbf49d3dee at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panic.rs:345:14 59: 0x1029e141f - std::rt::lang_start_internal::hb1473645dbe40065 at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs:143:20 60: 0x10192ccf7 - std::rt::lang_start::h74b59529f543ed2f at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs:163:17 61: 0x10192cca8 - _main thread caused non-unwinding panic. aborting. 
### Reproduction

_No response_

### Expected behavior

_No response_

### Full `tauri info` output

```text
tauri info

[✔] Environment
    - OS: Mac OS 11.7.10 x86_64 (X64)
    ✔ Xcode Command Line Tools: installed
    ✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
    ✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
    ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
    ✔ Rust toolchain: stable-x86_64-apple-darwin (default)
    - node: 20.16.0
    - pnpm: 9.10.0
    - yarn: 1.22.19
    - npm: 10.8.1

[-] Packages
    - tauri 🦀: 2.0.6
    - tauri-build 🦀: 2.0.2
    - wry 🦀: 0.46.3
    - tao 🦀: 0.30.3
    - @tauri-apps/api : 2.0.3
    - @tauri-apps/cli : 2.0.5

[-] Plugins
    - tauri-plugin-shell 🦀: 2.0.2
    - @tauri-apps/plugin-shell : 2.0.1

[-] App
    - build-type: bundle
    - CSP: unset
    - frontendDist: ../dist
    - devUrl: http://localhost:1420/
    - framework: Vue.js
    - bundler: Vite
```

### Stack trace

_No response_

### Additional context

_No response_
type: bug,status: upstream,status: needs triage
low
Critical
2,615,479,572
next.js
Next.js 15 import alias not working with turbopack
### Link to the code that reproduces this issue

https://github.com/aelassas/next-turbopack

### To Reproduce

Create a Next.js 15 project with an internal package `package1`:

```
| - my-app/
| - packages/package1/
```

Add the alias in tsconfig.json:

```js
{
  "compilerOptions": {
    ...
    "baseUrl": "./",
    "paths": {
      "@/*": ["./src/*"],
      ":package1": ["../packages/package1"],
    },
  }
}
```

Import `:package1` in a page or component:

```js
import * as package1 from ':package1'
...
```

Run the dev server with Turbopack.

### Current vs. Expected behavior

When I run the dev server without Turbopack, it works fine. But when I try with Turbopack I always get this error:

```
Module not found: Can't resolve ':package1'

Import map: aliased to relative "../packages/package1" inside of [project]/
```

### Provide environment information

```bash
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32593
Available CPU cores: 12
Binaries:
  Node: 20.17.0
  npm: N/A
  Yarn: N/A
  pnpm: N/A
Relevant Packages:
  next: 15.0.2-canary.7 // Latest available version is detected (15.0.2-canary.7).
  eslint-config-next: N/A
  react: 19.0.0-rc-1631855f-20241023
  react-dom: 19.0.0-rc-1631855f-20241023
  typescript: 5.3.3
Next.js Config:
  output: N/A
```

### Which area(s) are affected? (Select all that apply)

Turbopack

### Which stage(s) are affected? (Select all that apply)

next dev (local)

### Additional context

_No response_
bug,Turbopack,linear: turbopack
low
Critical
2,615,501,650
pytorch
What version or package name will be used in aarch64 release for 2.6 on pypi?
### 🐛 Describe the bug

A `+cpu` suffix is added to the aarch64 (CPU) nightly package: https://github.com/pytorch/pytorch/pull/138588

Before:

```
# https://download.pytorch.org/whl/nightly/cpu/torch-2.6.0.dev20241022-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
>>> torch.__version__
'2.6.0.dev20241022'
```

After:

```
# https://download.pytorch.org/whl/nightly/cpu/torch-2.6.0.dev20241023%2Bcpu-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
>>> torch.__version__
'2.6.0.dev20241023+cpu'
```

Since the name of the aarch64 (CPU) package on PyPI did not contain a `+cpu` suffix in previous versions, what version or package name will be used for the 2.6 aarch64 release on PyPI?

### Versions

PyTorch version: 2.6.0.dev20241023+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (AltArch) (aarch64)
GCC version: (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.17
Python version: 3.11.9 (main, Jul 15 2024, 06:10:42) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (64-bit runtime)
Python platform: Linux-4.18.0-80.7.2.el7.aarch64-aarch64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Model: 0
CPU max MHz: 2600.0000
CPU min MHz: 2600.0000
BogoMIPS: 200.00
L1d cache: 64K
L1i cache: 64K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.6.0.dev20241023+cpu
[conda] Could not collect

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim
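For context on why this matters for PyPI specifically: `+cpu` is a PEP 440 "local version" label, and PyPI rejects uploads whose version carries a local label, so the suffix would have to be dropped (or encoded differently) for the PyPI release. A small stdlib-only sketch of how the public version and local label separate (plain Python, not PyTorch's release tooling):

```python
# PEP 440: "2.6.0.dev20241023+cpu" is public version "2.6.0.dev20241023"
# plus local version label "cpu". PyPI does not accept releases with a
# local label, which is the crux of the question above.
def split_local_version(version: str):
    public, sep, local = version.partition("+")
    return public, (local if sep else None)

assert split_local_version("2.6.0.dev20241023+cpu") == ("2.6.0.dev20241023", "cpu")
assert split_local_version("2.6.0.dev20241022") == ("2.6.0.dev20241022", None)
```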
high priority,oncall: releng,triaged,module: regression
low
Critical
2,615,522,017
node
When using microtaskMode: 'afterEvaluate', the program gets stuck in the vm and cannot return to the main program.
### Version

v22.9.0

### Platform

```text
ubuntu 22.04
```

### Subsystem

_No response_

### What steps will reproduce the bug?

When using `microtaskMode: 'afterEvaluate'`, the program gets stuck in the VM and cannot return to the main program.

![image](https://github.com/user-attachments/assets/5c3a4eec-b4b6-4ba4-b217-4fd2adf018f1)

`mod` is a `vm.Script`. I didn't set an execution timeout; if I use `await` here, it will be stuck here all the time.

### How often does it reproduce? Is there a required condition?

Every time.

### What is the expected behavior? Why is that the expected behavior?

Evaluation should complete and control should return to the main program.

### What do you see instead?

When using `microtaskMode: 'afterEvaluate'`, the program gets stuck in the VM and cannot return to the main program.
vm
low
Critical
2,615,527,206
pytorch
When using torch.jit.trace for inference, elements of a tensor's shape attribute and the return value of numel() are tensors
### 🐛 Describe the bug

```python
import torch
import torch.nn as nn

class DemoModel(nn.Module):
    def __init__(self):
        super(DemoModel, self).__init__()
        self.conv = nn.Conv2d(3, 3, 3, 1, 1)

    def forward(self, x):
        print(x.shape[0])
        print(x.numel())
        x = self.conv(x)
        return x

model = DemoModel()
input = torch.randn(8, 3, 32, 32)
trace = torch.jit.trace(model, input)
```

Output:

```
>>> tensor(8)
>>> tensor(24576)
>>> tensor(8)
>>> tensor(24576)
>>> 8
>>> 24576
```

### Versions

Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7840H with Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
BogoMIPS: 7585.29
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] Could not collect

cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
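The printed `tensor(8)` vs `8` difference is characteristic of tracing: while the trace runs, values queried from a tensor are handed back as 0-dim tensors so they can be recorded into the graph, whereas eager mode returns plain Python ints. A plain-Python sketch of that recording-proxy idea (illustrative only, not torch internals):

```python
# Illustrative proxy sketch (not torch internals): a tracer wraps queried
# values so they participate in the recorded graph, which is why
# x.shape[0] prints as tensor(8) during tracing but as 8 in eager mode.
class TracedInt:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return f"tensor({self.value})"

def shape_dim(shape, i, tracing):
    # eager mode returns a plain int; tracing returns a recording proxy
    return TracedInt(shape[i]) if tracing else shape[i]

assert repr(shape_dim((8, 3, 32, 32), 0, tracing=True)) == "tensor(8)"
assert shape_dim((8, 3, 32, 32), 0, tracing=False) == 8
```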
oncall: jit
low
Critical
2,615,552,180
neovim
Insertion after expansion of snippet starts at the wrong place
### Problem

Original issue: https://github.com/Saghen/blink.cmp/issues/94

Many LSPs auto-import the module before starting the completion, but doing so results in the insertion starting at the wrong place. Tested with clangd and gopls.

https://github.com/user-attachments/assets/08c26e7c-a8de-4736-860b-feb7ea43cc42

### Steps to reproduce

1. `nvim --clean random.txt`
2. Paste this into the file:

   ```lua
   vim.snippet.expand('test(${1:ing})')
   vim.lsp.util.apply_text_edits({
     {
       range = {
         start = { line = 0, character = 0 },
         ['end'] = { line = 0, character = 0 },
       },
       newText = 'hello world\n',
     },
   }, vim.api.nvim_get_current_buf(), 'utf-16')
   ```

3. Press `o` on the last line
4. Then do `luafile %`

### Expected behavior

The snippet expansion should start from the correct place.

### Nvim version (nvim -v)

NVIM v0.11.0-dev-1043+ge4a74e986

### Vim (not Nvim) behaves the same?

Don't think Vim has snippet expansion.

### Operating system/version

Arch Linux

### Terminal name/version

Ghostty

### $TERM environment variable

xterm-ghostty

### Installation

bob
bug,snippet
low
Minor
2,615,575,690
next.js
Module '"next"' has no exported member 'X' (e.g., 'Image', 'NextConfig')
### Link to the code that reproduces this issue

https://github.com/nextui-org/next-app-template

### To Reproduce

1. Upgrade to `v15` from `v14`
2. Enable Turbopack
3. Start the dev server (using TypeScript)

### Current vs. Expected behavior

## Expected behavior

- Import types and components (like `Image`, `Metadata`) from the `next` package

## Currently

- You can't

## Though you can (**fix**):

1. Disable Turbopack
2. Start the dev server

### Provide environment information

```bash
Operating System:
  Platform: linux
  Arch: x64
  Version: #1 ZEN SMP PREEMPT_DYNAMIC Tue, 22 Oct 2024 18:31:33 +0000
Binaries:
  Node: 20.18.0
  npm: N/A
  Yarn: N/A
  pnpm: 9.12.2
Relevant Packages:
  next: 15.0.1
  eslint-config-next: 15.0.1
  react: 19.0.0-rc-cae764ce-20241025
  react-dom: 19.0.0-rc-cae764ce-20241025
  typescript: 5.6.3
```

### Which area(s) are affected? (Select all that apply)

Module Resolution

### Which stage(s) are affected? (Select all that apply)

next dev (local)

### Additional context

_No response_
bug,Module Resolution
low
Minor
2,615,594,929
vscode
Uninstalling Copilot leaves the sidebar entirely empty without notice
Steps to Reproduce:

1. Install Copilot
2. Secondary sidebar shows
3. Uninstall => 🐛 secondary sidebar is empty without notice to drop a view

![Image](https://github.com/user-attachments/assets/afc8aed8-558c-48d8-9a0b-72c6709dd660)
bug,layout,workbench-auxsidebar
low
Minor
2,615,595,803
godot
A non-existent member of another class can be accessed without error during GDScript static analysis
### Tested versions

- Reproducible in 4.3 and 4.4dev
- Perhaps reproducible in previous versions

### System information

Windows 11

### Issue description

```GDScript
class D:
    var q = 1

class E:
    var d = D.new()

class F:
    var e = E.new()
    func _init() -> void:
        e.d._q = 5
```

In this example, `q` is the member of `D`. However, no matter what you input after `e.d.`, the error will never be triggered.

### Steps to reproduce

Copy the example code and paste it into a random script.

### Minimal reproduction project (MRP)

[inner-class-bug.zip](https://github.com/user-attachments/files/17529261/inner-class-bug.zip)
discussion,topic:gdscript,documentation
low
Critical
2,615,597,762
pytorch
compile_fx modifies the input graph in place
### 🐛 Describe the bug

I'm compiling a graph multiple times using Inductor. I find that it modifies the graph in place, and one of the graph's outputs changes from a tensor to a list of tensors.

Example code:

```python
import torch
from torch import nn

@torch.library.custom_op("silly::attention", mutates_args=["out"])
def silly_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, out: torch.Tensor) -> None:
    print("silly")
    out.copy_(q)
    out[0] += 1

@silly_attention.register_fake
def _(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, out: torch.Tensor) -> None:
    return

class SillyModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + 1
        x = x + 2
        out = torch.empty_like(x)
        torch.ops.silly.attention(x, x, x, out)
        x = out
        x = x - 2
        x = x - 1
        out = torch.empty_like(x)
        torch.ops.silly.attention(x, x, x, out)
        x = out
        x = x + 1
        return x

model = SillyModel()

def mybackend(graph, example_inputs):
    graph.print_readable()
    subgraph_id = 0
    node_to_subgraph_id = {}
    attention_graphs = []
    for node in graph.graph.nodes:
        if node.op in ("output", "placeholder"):
            continue
        if node.op == 'call_function' and str(node.target) in ["silly.attention"]:
            subgraph_id += 1
            node_to_subgraph_id[node] = subgraph_id
            attention_graphs.append(subgraph_id)
            subgraph_id += 1
        else:
            node_to_subgraph_id[node] = subgraph_id

    # `keep_original_order` is important!
    # otherwise pytorch might reorder the nodes and
    # the semantics of the graph will change when we
    # have mutations in the graph
    split_gm = torch.fx.passes.split_module.split_module(
        graph, None, lambda node: node_to_subgraph_id[node],
        keep_original_order=True)
    split_gm.print_readable()

    for (name, module) in list(split_gm.named_modules()):
        if name == "":
            # stitching module
            continue
        if "." in name:
            # recursive child module
            continue
        graph_id = int(name.replace("submod_", ""))
        if graph_id not in attention_graphs:
            # cannot setattr to a module, so we need to set it to the dict
            split_gm.__dict__[name] = piecewise_backend(module)

    print("first compilation")
    output = split_gm(*example_inputs)
    print(output.__class__)
    return split_gm.forward

def piecewise_backend(model):
    compiled_once = True
    compiled_twice = True
    compiled_graph_1 = None
    compiled_graph_2 = None

    def f(*args):
        model.print_readable()
        nonlocal compiled_once, compiled_twice, compiled_graph_1, compiled_graph_2
        if compiled_once:
            compiled_once = False
            from torch._inductor.compile_fx import compile_fx
            compiled_graph = compile_fx(model, args)
            return compiled_graph(*args)
        if compiled_twice:
            compiled_once = False
            from torch._inductor.compile_fx import compile_fx
            compiled_graph_1 = compile_fx(model, args)
            return compiled_graph_1(*args)

    return f

model = torch.compile(model, backend=mybackend)
input_buffer = torch.randn(100)
output = model(input_buffer)
print(output.__class__)
output = model(input_buffer)
print(output.__class__)
```

Some key output logging. This is the last piece of the graph:

```python
class submod_4(torch.nn.Module):
    def forward(self, out_1: "f32[100]"):
        # File: /data/youkaichao/tmp/vllm/test_split.py:32 in forward, code: x = x + 1
        add: "f32[100]" = out_1 + 1;  out_1 = None
        return add
```

After calling `compile_fx`, it becomes:

```python
class GraphModule(torch.nn.Module):
    def forward(self, out_1: "f32[100]"):
        # File: /data/youkaichao/tmp/vllm/test_split.py:32 in forward, code: x = x + 1
        add: "f32[100]" = out_1 + 1;  out_1 = None
        return [add]
```

Is this a desired behavior? If I want to avoid it, how do I clone a graph, without cloning the module and parameters, so that I can send a copy to `compile_fx`?

### Versions

PyTorch 2.4, 2.5, and 2.6.0.dev20241022+cu124

cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
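On the cloning question, the underlying idea (independent of `torch.fx`'s specific APIs) is a structure-only copy: deep-copy the cheap node list that an in-place compiler pass may rewrite, while continuing to share the heavy parameter storage. A plain-Python sketch with stand-in objects, not the fx API:

```python
import copy

# Stand-in objects (not torch.fx): the point is deep-copying the node
# structure while sharing parameter storage, so an in-place pass on the
# copy cannot change the original graph's outputs.
class Graph:
    def __init__(self, nodes, params):
        self.nodes = nodes      # cheap per-compilation structure
        self.params = params    # heavy tensors, safe to share

def clone_structure_only(g):
    return Graph(copy.deepcopy(g.nodes), g.params)

g = Graph(nodes=["add", "output"], params={"w": [1.0, 2.0]})
g2 = clone_structure_only(g)
g2.nodes[-1] = "output_as_list"       # mimics the in-place rewrite
assert g.nodes == ["add", "output"]   # original unchanged
assert g2.params is g.params          # parameters still shared
```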
triaged,oncall: pt2,module: inductor
low
Critical
2,615,613,084
deno
Will Deno support node_api_basic_finalize?
It is still experimental in node, but I think it is a fantastic feature to simplify ffi modules. https://github.com/nodejs/node/blob/0668e64cea127d8d4fa35d1b49bf11093ecc728f/doc/api/n-api.md?plain=1#L614 It allows finalisers to run outside the js context and so be executed immediately, as soon as the object is gc'ed, with no need to wait for js to yield.
suggestion,node compat,node native extension
low
Minor
2,615,614,653
tensorflow
`data_flow_ops.Barrier` aborts with "Check failed: i >= 0 (0 vs. -100)"
### Issue type Bug ### Have you reproduced the bug with TensorFlow Nightly? Yes ### Source binary ### TensorFlow version tf-nightly 2.19.0-dev20241025 ### Custom code Yes ### OS platform and distribution Linux Ubuntu 20.04.3 LTS ### Mobile device _No response_ ### Python version _No response_ ### Bazel version _No response_ ### GCC/compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current behavior? I have confirmed that the code below crashes on `tf-nightly 2.19.0-dev20241025` (nightly build). Please find the [gist](https://colab.research.google.com/drive/1OOOIvZ7brRDjRqshv36CA_55BlPbgI4e?usp=sharing) to reproduce the issue. ### Standalone code to reproduce the issue ```python from tensorflow.python.framework import dtypes from tensorflow.python.ops import data_flow_ops from tensorflow.python.eager import context import tensorflow as tf with context.graph_mode(): sess = tf.compat.v1.Session() with sess.as_default(): b = data_flow_ops.Barrier((dtypes.float32, dtypes.float32), shapes=((), ()), name='B') keys = [b'a', b'b', b'c', b'd'] values_0 = [10.0, 20.0, 30.0, 40.0] values_1 = [100.0, 200.0, 300.0, 400.0] insert_1_1_op = b.insert_many(-100, keys[0:2], values_1[0:2]) insert_1_1_op.run() ``` ### Relevant log output ```shell F tensorflow/core/kernels/barrier_ops.cc:286] Check failed: i >= 0 (0 vs. -100) Aborted (core dumped) ```
stat:awaiting tensorflower,type:bug,comp:ops,TF 2.18
medium
Critical
2,615,618,759
tensorflow
`gen_list_ops.tensor_list_concat_v2` aborts with "Check failed: size >= 0 (0 vs. -1)"
### Issue type Bug ### Have you reproduced the bug with TensorFlow Nightly? Yes ### Source binary ### TensorFlow version tf-nightly 2.19.0-dev20241025 ### Custom code Yes ### OS platform and distribution Linux Ubuntu 20.04.3 LTS ### Mobile device Linux Ubuntu 20.04.3 LTS ### Python version 3.10.14 ### Bazel version _No response_ ### GCC/compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current behavior? I have confirmed that the code below crashes on `tf-nightly 2.19.0-dev20241025` (nightly build). Please find the [gist](https://colab.research.google.com/drive/1Y-ZQk0KgIGfzCMfLVdemJ2UqXLeQWKUd?usp=sharing) to reproduce the issue. ### Standalone code to reproduce the issue ```python from tensorflow.python.framework import dtypes from tensorflow.python.ops import gen_list_ops from tensorflow.python.ops import list_ops l = list_ops.tensor_list_reserve(element_dtype=dtypes.float32, element_shape=None, num_elements=3) t = gen_list_ops.tensor_list_concat_v2(l, element_dtype=dtypes.float32, element_shape=list_ops._build_element_shape((None, 3)), leading_dims=[-1, 3, 5]) ``` ### Relevant log output ```shell F tensorflow/core/framework/tensor_shape.cc:587] Check failed: size >= 0 (0 vs. -1) Aborted (core dumped) ```
stat:awaiting tensorflower,type:bug,comp:ops,TF 2.18
medium
Critical
2,615,620,522
tensorflow
`lookup_ops.StaticVocabularyTable` and `lookup_ops.StaticVocabularyTableV1` can cause a crash
### Issue type Bug ### Have you reproduced the bug with TensorFlow Nightly? Yes ### Source binary ### TensorFlow version tf-nightly 2.19.0-dev20241025 ### Custom code Yes ### OS platform and distribution Linux Ubuntu 20.04.3 LTS ### Mobile device _No response_ ### Python version _No response_ ### Bazel version _No response_ ### GCC/compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current behavior? I encountered a segmentation fault in TensorFlow when I used the API `lookup_ops.StaticVocabularyTable` or `lookup_ops.StaticVocabularyTableV1`. I have confirmed that the code below crashes on `tf-nightly 2.19.0-dev20241025` (nightly build). Please find the [gist](https://colab.research.google.com/drive/1NGnFQqFl6Jy_Xt7wBL3ynmSVs9AiFw_l?usp=sharing) to reproduce the issue. ### Standalone code to reproduce the issue ```python import os from tensorflow.python.ops import lookup_ops def _createVocabFile(basename, values=('brain', 'salad', 'surgery')): vocabulary_file = os.path.join("/tmp", basename) with open(vocabulary_file, 'w') as f: f.write('\n'.join(values) + '\n') return vocabulary_file vocab_file = _createVocabFile('feat_to_id_1.txt') vocab_size = 9223372036854775807 oov_buckets = 1 table = lookup_ops.StaticVocabularyTable(lookup_ops.TextFileIdTableInitializer(vocab_file, vocab_size=vocab_size), oov_buckets, experimental_is_anonymous=True) #lookup_ops.StaticVocabularyTableV1(lookup_ops.TextFileIdTableInitializer(vocab_file, vocab_size=vocab_size), oov_buckets, experimental_is_anonymous=True) ``` ### Relevant log output ```shell Segmentation fault (core dumped) ```
stat:awaiting tensorflower,type:bug,comp:ops,TF 2.18
medium
Critical
2,615,627,665
godot
Restrictive permissions on asset folder shows up as "invalid file" errors and creates import loops
### Tested versions - Reproducible in: v4.3.stable.mono.official [77dcf97d8] ### System information macOS - Sonoma 14.6.1 (23G93) ### Issue description I moved an [asset pack](https://quaternius.com/packs/stylizednaturemegakit.html) into my project folder. For some reason, the pack comes with permission 555 on the directories. When I returned to Godot, the import entered an infinite loop. It kept triggering the import process and failing with hundreds of errors like: ``` Cannot open file from path 'res://assets/models/StylizedNatureMegaKit/Textures/Bark_DeadTree_Normal.png.import'. editor/editor_node.cpp:1278 - Condition "!res.is_valid()" is true. Returning: ERR_CANT_OPEN Cannot open file from path 'res://assets/models/StylizedNatureMegaKit/Textures/Leaves_GiantPine_C.png.import'. Cannot open file from path 'res://assets/models/StylizedNatureMegaKit/Textures/Leaves_TwistedTree_C.png.import'. Cannot open file from path 'res://assets/models/StylizedNatureMegaKit/Textures/Leaves_TwistedTree.png.import'. Cannot open file from path 'res://assets/models/StylizedNatureMegaKit/Textures/Leaves_NormalTree_C.png.import'. No loader found for resource: res://assets/models/StylizedNatureMegaKit/glTF/Leaves_TwistedTree_C.png (expected type: ) modules/gltf/gltf_document.cpp:3534 - Condition "err != OK" is true. modules/gltf/gltf_document.cpp:3777 - Index image = 1 is out of bounds (p_state->images.size() = 0). modules/gltf/gltf_document.cpp:3777 - Index image = 0 is out of bounds (p_state->images.size() = 0). modules/gltf/gltf_document.cpp:3777 - Index image = 2 is out of bounds (p_state->images.size() = 0). Cannot open file from path 'res://assets/models/StylizedNatureMegaKit/glTF/TwistedTree_4.gltf.import'. ``` The assets show up with a red cross in the filesystem explorer, you get `editor/editor_node.cpp:1278 - Condition "!res.is_valid()" is true. Returning: ERR_CANT_OPEN` when you try to double-click the file. 
The feedback suggests an issue with the assets, as if the files are corrupted or invalid. The first search results discuss [invalid formats](https://www.reddit.com/r/godot/comments/vgilxn/error_failed_to_open_png/). The solution is "just" to chmod 755 the directories. Expected: - The import process doesn't enter an infinite loop, - The import process shows clear feedback like `"Permission error when creating the res://some-file.import"`. ### Steps to reproduce `cd assets; chmod 555 ./; touch ./some-file.jpg`, then go back to Godot. Using the free tier of the asset pack above gives the infinite loop for some reason. ### Minimal reproduction project (MRP) [bug-report-permissions.zip](https://github.com/user-attachments/files/17529458/bug-report-permissions.zip)
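Until the editor reports permission errors explicitly, a small script can locate the offending directories. This is a hedged helper sketch of my own (plain Python, independent of Godot):

```python
import os
import stat


def find_unwritable_dirs(root):
    """Yield directories under root that the current user cannot write to.

    Godot needs write access next to each asset to create the companion
    ``.import`` files, so every directory reported here is a candidate
    cause of the "Cannot open file" errors above.
    """
    for dirpath, _dirnames, _filenames in os.walk(root):
        if not os.access(dirpath, os.W_OK):
            yield dirpath


def make_writable(path):
    # Equivalent to `chmod u+w` on the given path.
    os.chmod(path, os.stat(path).st_mode | stat.S_IWUSR)
```

Running `find_unwritable_dirs("assets")` on the asset pack above should point straight at the `555` directories, which can then be fixed with `make_writable`.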
bug,topic:editor,topic:import
low
Critical
2,615,637,327
flutter
Autocomplete options overlay should automatically move up when the soft keyboard is visible
### Steps to reproduce When soft keyboard is visible, unless options widget is constrained or set to a specific height, part of the options overlay will be hidden behind the keyboard. This is a dealbreaker especially in mobile, since we would prefer not to set a specific height value. ### Expected results https://github.com/user-attachments/assets/9d3bdf77-3474-48fc-9631-cea09470ed73 ### Actual results https://github.com/user-attachments/assets/ce09822d-c6a9-4681-a84a-2103cda1a31e ### Code sample ```dart import "package:flutter/material.dart"; /// Flutter code sample for [RawAutocomplete]. class AutocompleteExampleApp extends StatelessWidget { const AutocompleteExampleApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( resizeToAvoidBottomInset: false, appBar: AppBar( title: const Text("RawAutocomplete Basic"), ), body: const Center( child: AutocompleteBasicExample(), ), ), ); } } class AutocompleteBasicExample extends StatelessWidget { const AutocompleteBasicExample({super.key}); static const List<String> _options = <String>[ "aardvark", "bobcat", "chameleon", "chameleon", "chameleon", "chameleon", "chameleon", "chameleon", "chameleon", ]; @override Widget build(BuildContext context) { return RawAutocomplete<String>( optionsBuilder: (TextEditingValue textEditingValue) { return _options.where((String option) { return option.contains(textEditingValue.text.toLowerCase()); }); }, fieldViewBuilder: ( BuildContext context, TextEditingController textEditingController, FocusNode focusNode, VoidCallback onFieldSubmitted, ) { return TextFormField( controller: textEditingController, focusNode: focusNode, onFieldSubmitted: (String value) { onFieldSubmitted(); }, ); }, optionsViewBuilder: ( BuildContext context, AutocompleteOnSelected<String> onSelected, Iterable<String> options, ) { return Material( elevation: 4, child: ListView.builder( padding: const EdgeInsets.all(8), itemCount: options.length, itemBuilder: (BuildContext 
context, int index) { final option = options.elementAt(index); return GestureDetector( onTap: () { onSelected(option); }, child: ListTile( title: Text(option), ), ); }, ), ); }, ); } } ``` ### Screenshots or Video _No response_ ### Logs _No response_ ### Flutter Doctor output ``` [✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale en-ID) • Flutter version 3.24.3 on channel stable at /Users/dikatok/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 2663184aa7 (6 weeks ago), 2024-09-11 16:27:48 -0500 • Engine revision 36335019a8 • Dart version 3.5.3 • DevTools version 2.37.3 [✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /Users/dikatok/Library/Android/Sdk • Platform android-34, build-tools 34.0.0 • ANDROID_HOME = /Users/dikatok/Library/Android/Sdk • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314) • All Android licenses accepted. [!] Xcode - develop for iOS and macOS (Xcode 16.0) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 16A242d ✗ Unable to get list of installed Simulator runtimes. 
• CocoaPods version 1.14.3 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2024.1) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314) [✓] VS Code (version 1.94.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.99.20240930 [✓] Connected device (4 available) • 2201117TG (mobile) • cb0b36bc • android-arm64 • Android 13 (API 33) • macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64 • Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.70 [✓] Network resources • All expected network resources are available. ! Doctor found issues in 1 category. ```
a: text input,framework,f: material design,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27
low
Major
2,615,638,297
pytorch
Compile errors when using Qt and libtorch together (my solution to eliminate the bug)
When I first built the code, it ran well: ```cpp #include <LogLevel.hpp> #include <LogManager.hpp> #include <QtCharts/QChartView> #include <QtCharts/QLineSeries> #include <QtWidgets/QApplication> #include <iostream> #include <torch/torch.h> #include <torch/script.h> // Module combining convolution + ReLU + batch normalization struct ConvReluBn : torch::nn::Module { ConvReluBn(int in_channels, int out_channels, int kernel_size, int stride = 1) { conv = register_module("conv", torch::nn::Conv2d( torch::nn::Conv2dOptions(in_channels, out_channels, kernel_size).stride(stride).padding(kernel_size / 2))); relu = register_module("relu", torch::nn::ReLU()); bn = register_module("bn", torch::nn::BatchNorm2d(out_channels)); } torch::Tensor forward(torch::Tensor x) { return bn->forward(relu->forward(conv->forward(x))); } private: torch::nn::Conv2d conv{nullptr}; torch::nn::ReLU relu{nullptr}; torch::nn::BatchNorm2d bn{nullptr}; }; // Definition of the plainCNN class class plainCNN : public torch::nn::Module { public: plainCNN(int in_channels, int out_channels); torch::Tensor forward(torch::Tensor x); private: int mid_channels[3] = {32, 64, 128}; std::shared_ptr<ConvReluBn> conv1; std::shared_ptr<ConvReluBn> down1; std::shared_ptr<ConvReluBn> conv2; std::shared_ptr<ConvReluBn> down2; std::shared_ptr<ConvReluBn> conv3; std::shared_ptr<ConvReluBn> down3; torch::nn::Conv2d out_conv{nullptr}; }; // Constructor implementation plainCNN::plainCNN(int in_channels, int out_channels) { conv1 = std::make_shared<ConvReluBn>(in_channels, mid_channels[0], 3); down1 = std::make_shared<ConvReluBn>(mid_channels[0], mid_channels[0], 3, 2); conv2 = std::make_shared<ConvReluBn>(mid_channels[0], mid_channels[1], 3); down2 = std::make_shared<ConvReluBn>(mid_channels[1], mid_channels[1], 3, 2); conv3 = std::make_shared<ConvReluBn>(mid_channels[1], mid_channels[2], 3); down3 = std::make_shared<ConvReluBn>(mid_channels[2], mid_channels[2], 3, 2); out_conv = register_module("out_conv", torch::nn::Conv2d( torch::nn::Conv2dOptions(mid_channels[2], out_channels,
3).padding(1))); register_module("conv1", conv1); register_module("down1", down1); register_module("conv2", conv2); register_module("down2", down2); register_module("conv3", conv3); register_module("down3", down3); } // Implementation of the forward method torch::Tensor plainCNN::forward(torch::Tensor x) { x = conv1->forward(x); x = down1->forward(x); x = conv2->forward(x); x = down2->forward(x); x = conv3->forward(x); x = down3->forward(x); x = out_conv->forward(x); return x; } int main(int argc, char *argv[]) { // Set up the file log appender cmx::logManager.setFileAppender("log.txt"); // Set up the console log appender cmx::logManager.setConsoleAppender(); // Check whether a GPU is available torch::Device device(torch::kCUDA); if (!torch::cuda::is_available()) { cmx::logManager.log("CUDA is not available! Falling back to CPU.", cmx::Log::LogLevel::Warning); device = torch::Device(torch::kCPU); } else { cmx::logManager.log("CUDA is available! Using GPU for training.", cmx::Log::LogLevel::Info); } // Create the model instance and move it to the GPU plainCNN model(3, 10); model.to(device); model.train(); // Define the optimizer torch::optim::SGD optimizer(model.parameters(), /*lr=*/0.01); // Data storage for the chart QLineSeries *series = new QLineSeries(); // Training loop size_t train_epochs = 1000; std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now(); for (size_t epoch = 0; epoch < train_epochs; epoch++) { // Create example inputs and move them to the GPU torch::Tensor input = torch::randn({1, 3, 224, 224}).to(device); torch::Tensor target = torch::randn({1, 10, 28, 28}).to(device); // Clear gradients optimizer.zero_grad(); // Forward pass auto output = model.forward(input); // Compute the loss auto loss = torch::nn::functional::mse_loss(output, target); // Backward pass loss.backward(); // Update parameters optimizer.step(); cmx::logManager.log("Epoch [" + std::to_string(epoch + 1) + "/" + std::to_string(train_epochs) + "], Loss: " + std::to_string(loss.item<float>()), cmx::Log::LogLevel::Info); // Append the loss to the chart series->append(epoch + 1, loss.item<float>()); } std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now(); std::chrono::duration<double> elapsed_seconds = end
- start; cmx::logManager.log("Training completed in " + std::to_string(elapsed_seconds.count()) + " seconds.", cmx::Log::LogLevel::Info); cmx::logManager.log("Per epoch average time = " + std::to_string(elapsed_seconds.count() / train_epochs) + " seconds.", cmx::Log::LogLevel::Info); // Qt Charts initialization QApplication a(argc, argv); QChart *chart = new QChart(); chart->addSeries(series); chart->createDefaultAxes(); chart->setTitle("Training Loss Over Epochs"); QChartView *chartView = new QChartView(chart); chartView->setRenderHint(QPainter::Antialiasing); chartView->resize(800, 600); chartView->show(); return a.exec(); } ``` But when I included more Qt header files in my project, an error occurred: ``` D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1567,35): error C2059: 语法错误:“<parameter-list>” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1567,44): error C2334: “{”的前面有意外标记;跳过明显的函数体 [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): error C2760: 语法错误: 此处出现意外的“(”;应为“ID 表达式” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): error C2760: 语法错误: 此处出现意外的“.”;应为“statement” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): message : 已跳过错误恢复:“.” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): error C2760: 语法错误: 此处出现意外的“)”;应为“;” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj]
D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): error C3878: 语法错误:“expression_statement”后出现意外标记“)” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): error C3878: 语法错误:“statement”后出现意外标记“)” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): error C3878: 语法错误:“selection_statement”后出现意外标记“)” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1783,3): message : 已跳过错误恢复:“) ) )” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): error C2760: 语法错误: 此处出现意外的“(”;应为“ID 表达式” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): error C2760: 语法错误: 此处出现意外的“.”;应为“statement” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): message : 已跳过错误恢复:“.” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): error C2760: 语法错误: 此处出现意外的“)”;应为“;” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): error C3878: 语法错误:“expression_statement”后出现意外标记“)” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] 
D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): error C3878: 语法错误:“statement”后出现意外标记“)” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): error C3878: 语法错误:“selection_statement”后出现意外标记“)” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] D:\MyLibs\libtorch-win-shared-with-deps-2.5.0+cu124\libtorch\include\ATen\core\ivalue_inl.h(1801,3): message : 已跳过错误恢复:“) ) )” [D:\study\libTorchProject\cmake-build-release-visual-studio\main.vcxproj] ``` After a long time of debugging, I discovered that the errors are caused by Qt's `slots` macro conflicting with libtorch declarations of the same name. Following a similar answer on Stack Overflow, a nice way to deal with this problem is: ~~~cpp #include <LogLevel.hpp> #include <LogManager.hpp> #include <QtCharts/QChartView> #include <QtCharts/QLineSeries> #include <QtWidgets/QApplication> #include <iostream> #undef slots #include <torch/torch.h> #include <torch/script.h> #define slots Q_SLOTS ~~~ In that way, you can build your project successfully. cc @jbschlosser
module: dependency bug,module: cpp,triaged
low
Critical
2,615,648,863
yt-dlp
Issue with LUA and YT-DLP: yt-dlp sends "finished process" to LUA to early?
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm asking a question and **not** reporting a bug or requesting a feature - [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) ### Please make sure the question is worded well enough to be understood I need the calling Lua script to do something else only after yt-dlp, invoked from that script, has finished all of its work; in the following example it just prints the word "End" for simplicity. Example of Lua code invoking yt-dlp: ``` var = "yt-dlp.exe --recode mp4 <URL>" os.execute(var) print("End") ``` What happens is that "End" is printed when yt-dlp finishes downloading the file, not when the `--recode mp4` step has actually finished. It seems the end of the process is signalled to the Lua code too early. Is it possible to make yt-dlp signal the end of the process only when all the arguments on the command line have definitely finished executing? ### Provide verbose output that clearly demonstrates the problem - [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output _No response_
external issue
low
Critical
2,615,657,601
next.js
useFormStatus() doesn't work with shadcn/ui Select component
### Link to the code that reproduces this issue https://github.com/samstr/shadcn-formstatus-demo ### To Reproduce 1. Clone https://github.com/samstr/shadcn-formstatus-demo 2. npm install 3. npm run dev 4. Go to http://localhost:3000/ 5. Click the submit button on the form with an Input (pending state works) 6. Click the submit button on the form with a Select (pending state does not work) ### Current vs. Expected behavior See the difference between a form using an `Input` vs `Select`. I'm not sure if this is a NextJS issue, a shadcn/ui issue, or an underlying react issue. I suspect it's actually a problem with the current React 19 RC. https://github.com/user-attachments/assets/340d07cd-5266-42c7-a169-90f761940027 ### Provide environment information ```bash System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020 Available memory (MB): 98304 Available CPU cores: 12 Binaries: Node: 18.18.0 npm: 9.8.1 Yarn: N/A pnpm: 7.22.0 Relevant Packages: next: 15.0.1 // Latest available version is detected (15.0.1). eslint-config-next: 15.0.1 react: 19.0.0-rc-69d4b800-20241021 react-dom: 19.0.0-rc-69d4b800-20241021 typescript: 5.6.3 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Not sure ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context The issue started after upgrading to NextJS 15.0.0
bug
low
Minor
2,615,658,550
react
[React 19] useFormStatus() pending doesn't work with Select component
## Summary I recently upgraded to NextJS 15 and I have noticed that react-dom's useFormStatus is no longer working when the Select component is included in the form. I don't know if this is a shadcn/ui issue, a NextJS issue, or an underlying react-dom issue. My suspicion is it's a problem with how react-dom handles forms that contain elements with their own JavaScript behavior: the shadcn/ui `Input` works fine, but the `Select` and `InputOTP` components don't (the Select handles popovers and the InputOTP handles keypresses to move to the next input, etc.). Please see the repository where I have a trimmed down form that reproduces the problem. https://github.com/samstr/shadcn-formstatus-demo https://github.com/user-attachments/assets/fd493ed3-2b02-4fc5-b5a9-acb2e9c618a0 I've also reported this to the following repos in order to track down the source of the problem. * https://github.com/shadcn-ui/ui/issues/5574 * https://github.com/vercel/next.js/issues/71895
React 19
low
Major
2,615,684,644
tailwindcss
[v4] Tailwind CSS classes not updating in output when adding previously unused classes - requires server restart
**What version of Tailwind CSS are you using?** ``` "@tailwindcss/vite": "^4.0.0-alpha.30", "tailwindcss": "^4.0.0-alpha.30" ``` **What build tool (or framework if it abstracts the build tool) are you using?** ``` "@solidjs/start": "^1.0.9", "vinxi": "^0.4.3" ``` **What version of Node.js are you using?** ``` "node": ">=18" ``` **What browser are you using?** Firefox **What operating system are you using?** macOS **Reproduction URL** Since SolidStart isn't currently working on Stackblitz, please test locally: 1. Clone the repository: `git clone https://github.com/binajmen/bun-solidstart-tw4` 2. Install dependencies and start the dev server: `bun install && bun dev` 3. You should see a colored background on the index page 4. Try changing the background color class in `app.tsx` (e.g., to bg-blue-400) and save the file 5. The background will turn white instead of updating to the new color 6. Only after manually restarting bun dev will the new background color appear **Describe your issue** When adding Tailwind classes that weren't previously used anywhere in the codebase, these new classes aren't being included in the generated CSS file. The changes only take effect after manually restarting the development server.
v4,vite
low
Major
2,615,699,751
TypeScript
Inline conditional JSX props spread warns as if unconditional
### 🔎 Search Terms "is specified more than once, so this usage will be overwritten", "jsx spread" ### 🕗 Version & Regression Information I observed this in practice on 5.6.3, and it seems to occur in all the versions I tested with the playground. ### ⏯ Playground Link https://www.typescriptlang.org/play/?ts=5.6.3#code/JYWwDg9gTgLgBAJQKYEMDG8BmUIjgcilQ3wG4AoczAVwDsNgJa4BhXSWpWmACjBzABnAFxwUtAJ4BKOAG9ycOERjUozADwATYADcAfOoD02-RQC+lGvRiNmg3EgBidNHwEixkmfMXLVzAAZzSiQAD0hYOCsGJjgYJEFeQQALCGoAG00AeR0kKChgTSRRACMICHTUWm8FODQmRLh7ECQANRR0uABeOB5axXU2cCYuGH7FMS7ZTA7BJAsJiZKpmfS5szhDQzgAIgAVZOBBJv5UTTF0gHcUCWOIXKhLgvjjmEPj-ggwPJgJADodjtxoo0CtZvNgXI-tCeCk0pkcnkCkVIYoAPxyVGLOAlUQwKDUJAAGixigW2MUolkZik5MUhj0tSkFFq9VojQgbzy7U6PR8ExQolWcxJi1xUXBRM221oEDgSOg4zQQsl42hf1hqQy2QeyKQkIx-IpinF+MJouNdImVJpFrMLJBDXg9yRhSQ3SaWoRurd40NWNNBOJkKtNoddSdcUOUE0PI9fUWg3YI24kJQYLWEIpy2m4I2Wzgsvl+UVFNBuczVsUsnVLvybqtDKZ4bZjUwaVgyTjPWaThcPCNYhVmYtJuHIulhblCqgSvHwcW6s18J1rpRFP9xqWeKDo+xVbgNqkduZ5DMQA ### 💻 Code ```ts import React from 'react'; function Component(props: any) { return <div></div>; } function someFunc(props: any) { return 0; } export function test(shouldOverride: boolean) { const someVal = ( <Component a={false} b={false} // "This spread always overwrites this property."" c={false} {...(shouldOverride ? { b: true, } : {})} /> ); const otherVal = { a: false, b: false, // no error c: false, ...(shouldOverride ? { b: true, } : {}), }; const override = shouldOverride ? { b: true, } : {}; const thirdVal = ( <Component a={false} b={false} // no error c={false} {...override} /> ); const fourthVal = someFunc({ a: false, b: false, // no error c: false, ...(shouldOverride ? { b: true, } : {}), }); } ``` ### 🙁 Actual behavior Only in the case when the spread is inline and in the props of a JSX component, spreading an object that conditionally may or may not have a property warns as if the property was unconditionally set, with `'b' is specified more than once, so this usage will be overwritten. 
(2783)` / `This spread always overwrites this property.` ### 🙂 Expected behavior As in the other test cases (spread into an object that's not into component props/spread of a value that's not defined inline into component props), the inferred type of the overriding `b` should be optional and there shouldn't be a warning. ### Additional information about the issue No idea what to title this to make it more clear, sorry. Also, I see a fair number of other issues involving inline spread type inference being weird, but none seemed to specifically point to JSX and it's not clear to me how to narrow this one down further.
Bug
low
Critical
2,615,732,539
tauri
[bug] Tauri v2 is missing Zoom predefined menu item (MacOS)
### Describe the bug V1 has this https://docs.rs/tauri/1.8.1/tauri/enum.MenuItem.html#variant.Zoom Nowhere to be seen in v2 https://docs.rs/tauri/2.0.6/tauri/menu/struct.PredefinedMenuItem.html ### Reproduction N/A ### Expected behavior API should have access to this native macos menu item. ### Full `tauri info` output ```text Tauri 2.0.6 on macos ``` ### Stack trace _No response_ ### Additional context _No response_
type: bug,platform: macOS,status: needs triage
low
Critical
2,615,748,741
react
[Compiler]: All comments removed from source code
### What kind of issue is this? - [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization) - [X] babel-plugin-react-compiler (build issue installing or using the Babel plugin) - [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin) - [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script) ### Link to repro https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgBMEAzAQygBsCSoA7OXASwjvwFkBPAQU0wAoAlPmAAdNvhgJcsNgB4AfOPwr8cgMIQAtloR1cYblIBKCLRABuCQvgD0SiSo3bd+w1IAKUsAhhWb9spq9gDc4gC+4uK0DMys+Jo6egZGCKbm-kIiQXYAVPi4MGR0YJRkuDhgyPgApFX4UD4wdGS6AHT4ACoAFkxg+HAuyfgA7kyUlPgARgiSZpbW+Lm2QYMlBLgYBAC8+AD6e-wA5AASCBMQtWBHgmF0Qba2Xb39g0n6o+OTZJSQ07NSDLWNrKIJSGTNNSEJgWBTATboXDhOS2aGwu6Re50GKMFhsRKuFKeby+TLCMQSUQgRqzOiXXTmKl3FQPfKFYqlcqVap1ao05qtBAdHp9AZDD5jCb4OkEGZzIE2ZarVhgDZbfC7A7HM4XK43O45R7PUVvQmfKU-P5ygDWCEwuBBlIk4NkUJhcIRSJRaIUGJA4SAA ### Repro steps I setup my Babel & webpack config to preserve comments, specifically comments for translators. In systems like WordPress, i18n strings & comments are directly in the source code and must be preserved. When React Compiler optimizes a component, all comments will be stripped. When React Compiler is disabled, e.g. with `use no memo` or when it encounters a disabled lint rule, then the component won't be touched at all and comments are preserved as expected. Unfortunately the Playground link doesn't show this, as it seems to strip all comments no matter what. ### How often does this bug happen? Every time ### What version of React are you using? 17 ### What version of React Compiler are you using? 0.0.0-experimental-34d04b6-20241024
Type: Question,Component: Optimizing Compiler
low
Critical
2,615,749,612
terminal
Tab bar RMB glitches
### Windows Terminal version 1.21.2911.0 ### Windows build number 10.0.26100.0 ### Other Software _No response_ ### Steps to reproduce Open Windows Terminal. Right-click multiple times on a tab, shifting the cursor along the tab header area. Note that sometimes you get the standard Windows titlebar context menu (as on Alt+Space) instead of the current tab's context menu, with a completely different set of actions and design. Also note that in Dark Mode this system titlebar context menu is always white, while some Windows apps do switch system menus to dark mode too (like VSCode, Notepad++, HWiNFO64). ### Expected Behavior When right-clicking on the tab header area, the correct context menu should be shown. ### Actual Behavior When right-clicking on the tab header area, about 1 in 5 clicks shows the standard Windows titlebar context menu (as on Alt+Space) instead of the context menu of the clicked tab. ![Image](https://github.com/user-attachments/assets/0bbbb140-1866-4b43-8df6-fdfb0f495b61) ![Image](https://github.com/user-attachments/assets/a7d94dfe-a84d-4611-8a05-9080c34c0f95)
Issue-Bug,Area-UserInterface,Product-Terminal
low
Minor
2,615,765,462
angular
Proposal: Native Component for Dynamic Metadata in Angular – Essential for SSR in SEO context
Introduce a component in Angular that functions similarly to the [`<Head>`](https://nextjs.org/docs/pages/api-reference/components/head) component in Next.js, allowing developers to manage metadata (title, meta tags, etc.) for individual pages or components directly within the Angular template. ### Why? As Angular continues to evolve towards server-side rendering, the current `Title` and `Meta` services feel outdated and insufficient for modern applications. These services offer limited flexibility, especially for SSR-focused applications where managing tags like `<link>` or dynamic metadata updates is crucial. Currently, <ins>**there is no clean way to create `<link>` tags in SSR context**</ins> that are hydrated, defined at the component level, and integrated with Angular’s lifecycle. A new, declarative component e.g. `<ng-head>` would enhance the developer experience by providing a more intuitive and comprehensive way to handle metadata and head elements in an Angular project. While it is possible to create a custom library to address this problem, it feels like this functionality should be available out of the box in Angular. Features like adding `<link>` tags to the header or automatically handling title and meta tags (adding and removing them when a component is created or destroyed) should not require writing a custom service to manage. Having these capabilities natively would greatly enhance Angular’s usability in SSR contexts and make metadata management simpler and more consistent. ### Proposed solution ```html <ng-head> <title>Example</title> <meta name="description" [attr.content]="example$ | async" /> <!-- with observables --> <link rel="canonical" [attr.href]="example()" /> <!-- or with signals --> </ng-head> ``` ### Alternatives considered While a new service for head management supporting [all elements](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/head#see_also) could be an option, I personally don’t find this approach ideal. 
### Current approaches Currently, I'm aware of two indirect approaches to achieve this, both of which feel very developer-unfriendly: 1. **Creating a Service**: A custom service can be created that listens to `RouterEvents`, loading metadata from the `data` property on the route and handling automatic removal. This approach requires custom event handling and can become complex to maintain. 2. **Managing Metadata in Each Component**: Another method is to add and remove meta tags directly within each component, using `ngOnInit` to add tags and `ngOnDestroy` to remove them. This requires repetitive code in each component and increases the risk of metadata inconsistencies.
area: server
low
Minor
2,615,825,538
rust
Tracking Issue for rustc's translatable diagnostics infrastructure
This is a tracking issue for rustc's translatable diagnostics infrastructure, which is related to #100717 but reflects the current status. ## Current status Unfortunately, we have found that the current implementation of diagnostic translation infrastructure causes non-insignificant friction for compiler contributors when trying to work on diagnostics, including but not limited to: - Having to edit multiple files (fluent file, `errors.rs` and the emission site, etc.) - The diagnostics derive DSL is quite complex and exhibits some quirks - Fluent DSL also has its own quirks - Sometimes not sufficiently flexible to accommodate diagnostic needs, e.g. see `rustc_const_eval` or other not-migrated examples. > [!IMPORTANT] > Based on these friction points, we want to downgrade the internal lints `untranslatable_diagnostic`/`diagnostic_outside_of_impl` requiring usage of current translatable diagnostic infra from `deny` to `allow`. If someone wants to continue the translatable diagnostics effort, then they will need to come up with a better redesign that causes less friction for compiler contributors. ### Related discussions - https://github.com/rust-lang/rust/issues/112618 - https://github.com/rust-lang/rust/issues/121077 ## Implementation steps ### Relaxing the current restrictions - [x] Downgrade `untranslatable_diagnostic` and `diagnostic_outside_of_impl` from `deny` to `allow`. - https://github.com/rust-lang/rust/pull/132182 - [x] Update rustc-dev-guide to reflect current status of translatable diagnostics infrastructure. - https://github.com/rust-lang/rustc-dev-guide/pull/2105 ### Come up with a redesign Note: this is not currently being *actively* worked on AFAICT, please speak with wg-diagnostics and T-compiler if you wish to pursue this. See specifically <https://rust-lang.zulipchat.com/#narrow/channel/336883-i18n/topic/.23100717.20diag.20translation/near/472701978>. 
- A redesign of the translatable diagnostics infra will need to address the needs of both *compiler contributors* and *translation teams*. In particular, it cannot cause significant burden or friction for compiler contributors. Further steps are presently unclear. ## Unresolved questions - What do we do with the current diagnostic translation infrastructure? - It's a lot of work and churn to rip it out, as well. - What about the Pontoon infrastructure? - What about translation teams? ## Implementation / experimentation history This listing is more focused on diagnostic infra *itself*, not migration efforts. Please see the closed PRs for the concrete issues they ran into. - https://github.com/rust-lang/rust/pull/121334 - https://github.com/rust-lang/rust/pull/117867 - https://github.com/rust-lang/rust/pull/125208 ## More discussions - https://rust-lang.zulipchat.com/#narrow/channel/336883-i18n/topic/.23100717.20diag.20translation/near/472701978 - https://github.com/rust-lang/rust/pull/117867#issuecomment-2262552506 - https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Localization.20infra.20interferes.20with.20grepping.20for.20error --- cc @rust-lang/wg-diagnostics
E-hard,A-diagnostics,T-compiler,C-tracking-issue,WG-diagnostics,A-translation,S-tracking-needs-deep-research,D-diagnostic-infra,E-needs-design
low
Critical
2,615,841,786
pytorch
Error loading ".venv\Lib\site-packages\torch\lib\c10_xpu.dll" or one of its dependencies
### 🐛 Describe the bug When attempting to import `torch` with version `2.5.1+xpu`, I encounter the following error: ``` Error loading ".venv\Lib\site-packages\torch\lib\c10_xpu.dll" or one of its dependencies ``` I've followed the instructions from: - https://pytorch.org/docs/stable/notes/get_start_xpu.html - https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-5.html Installed and set up the oneAPI environment: ``` :: initializing oneAPI environment... Initializing Visual Studio command-line environment... Visual Studio version 17.11.5 environment configured. "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\" Visual Studio command-line environment initialized for: 'x64' :: compiler -- processing etc\compiler\vars.bat :: debugger -- processing etc\debugger\vars.bat :: dpl -- processing etc\dpl\vars.bat :: mkl -- processing etc\mkl\vars.bat :: tbb -- processing etc\tbb\vars.bat :: oneAPI environment initialized :: ``` I have additional questions for the Intel team. What is the recommended way to use an Intel GPU on Windows 11? - IPEX `2.3.1+xpu` on WSL2 - `torch.compile` with OpenVINO backend - `torch 2.5` with xpu support Thank you for your assistance. 
### Versions ``` PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 家用版 (10.0.22631 64 位元) GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.12.6 (tags/v3.12.6:a4a2d2b, Sep 6 2024, 20:11:23) [MSC v.1940 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-11-10.0.22631-SP0 Is CUDA available: N/A CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Name: Intel(R) Core(TM) Ultra 9 185H Manufacturer: GenuineIntel Family: 1 Architecture: 9 ProcessorType: 3 DeviceID: CPU0 CurrentClockSpeed: 2300 MaxClockSpeed: 2300 L2CacheSize: 18432 L2CacheSpeed: None Revision: None Versions of relevant libraries: [pip3] numpy==1.26.3 [pip3] onnx==1.17.0 [pip3] torch==2.5.1+xpu [conda] Could not collect ``` cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
module: windows,triaged,module: xpu
medium
Critical
2,615,847,928
langchain
I Can't use HuggingFaceEmbeddings and HuggingFaceBgeEmbeddings
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` from langchain_huggingface import HuggingFaceEmbeddings embeddings=HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") ``` ### Error Message and Stack Trace (if applicable) ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1764, in _LazyModule._get_module(self, module_name) 1763 try: -> 1764 return importlib.import_module("." + module_name, self.__name__) 1765 except Exception as e: File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package) 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:688, in _load_unlocked(spec) File <frozen importlib._bootstrap_external>:883, in exec_module(self, module) File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds) File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:112 111 if is_accelerate_available(): --> 112 from accelerate.hooks import AlignDevicesHook, add_hook_to_module 115 @dataclass 116 class GenerateDecoderOnlyOutput(ModelOutput): File /opt/conda/lib/python3.10/site-packages/accelerate/__init__.py:16 14 __version__ = 
"0.34.2" ---> 16 from .accelerator import Accelerator 17 from .big_modeling import ( 18 cpu_offload, 19 cpu_offload_with_hook, (...) 24 load_checkpoint_and_dispatch, 25 ) File /opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py:36 34 from huggingface_hub import split_torch_state_dict_into_shards ---> 36 from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state 37 from .data_loader import DataLoaderDispatcher, prepare_data_loader, skip_first_batches File /opt/conda/lib/python3.10/site-packages/accelerate/checkpointing.py:24 22 from torch.cuda.amp import GradScaler ---> 24 from .utils import ( 25 MODEL_NAME, 26 OPTIMIZER_NAME, 27 RNG_STATE_NAME, 28 SAFE_MODEL_NAME, 29 SAFE_WEIGHTS_NAME, 30 SAMPLER_NAME, 31 SCALER_NAME, 32 SCHEDULER_NAME, 33 WEIGHTS_NAME, 34 get_pretty_name, 35 is_mlu_available, 36 is_torch_xla_available, 37 is_xpu_available, 38 save, 39 ) 42 if is_torch_xla_available(): File /opt/conda/lib/python3.10/site-packages/accelerate/utils/__init__.py:193 184 from .deepspeed import ( 185 DeepSpeedEngineWrapper, 186 DeepSpeedOptimizerWrapper, (...) 190 HfDeepSpeedConfig, 191 ) --> 193 from .bnb import has_4bit_bnb_layers, load_and_quantize_model 194 from .fsdp_utils import ( 195 disable_fsdp_ram_efficient_loading, 196 enable_fsdp_ram_efficient_loading, (...) 
201 save_fsdp_optimizer, 202 ) File /opt/conda/lib/python3.10/site-packages/accelerate/utils/bnb.py:29 24 from accelerate.utils.imports import ( 25 is_4bit_bnb_available, 26 is_8bit_bnb_available, 27 ) ---> 29 from ..big_modeling import dispatch_model, init_empty_weights 30 from .dataclasses import BnbQuantizationConfig File /opt/conda/lib/python3.10/site-packages/accelerate/big_modeling.py:24 22 import torch.nn as nn ---> 24 from .hooks import ( 25 AlignDevicesHook, 26 CpuOffload, 27 UserCpuOffloadHook, 28 add_hook_to_module, 29 attach_align_device_hook, 30 attach_align_device_hook_on_blocks, 31 ) 32 from .utils import ( 33 OffloadedWeightsLoader, 34 check_cuda_p2p_ib_support, (...) 48 retie_parameters, 49 ) File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:31 30 from .utils.modeling import get_non_persistent_buffers ---> 31 from .utils.other import recursive_getattr 34 _accelerate_added_attributes = ["to", "cuda", "npu", "xpu", "mlu", "musa"] File /opt/conda/lib/python3.10/site-packages/accelerate/utils/other.py:29 27 from safetensors.torch import save_file as safe_save_file ---> 29 from ..commands.config.default import write_basic_config # noqa: F401 30 from ..logging import get_logger File /opt/conda/lib/python3.10/site-packages/accelerate/commands/config/__init__.py:19 17 import argparse ---> 19 from .config import config_command_parser 20 from .config_args import default_config_file, load_config_from_file # noqa: F401 File /opt/conda/lib/python3.10/site-packages/accelerate/commands/config/config.py:25 24 from .config_utils import _ask_field, _ask_options, _convert_compute_environment # noqa: F401 ---> 25 from .sagemaker import get_sagemaker_input 28 description = "Launches a series of prompts to create and save a `default_config.yaml` configuration file for your training system. 
Should always be ran first on your machine" File /opt/conda/lib/python3.10/site-packages/accelerate/commands/config/sagemaker.py:35 34 if is_boto3_available(): ---> 35 import boto3 # noqa: F401 38 def _create_iam_role_for_sagemaker(role_name): File /opt/conda/lib/python3.10/site-packages/boto3/__init__.py:17 16 from boto3.compat import _warn_deprecated_python ---> 17 from boto3.session import Session 19 __author__ = 'Amazon Web Services' File /opt/conda/lib/python3.10/site-packages/boto3/session.py:17 15 import os ---> 17 import botocore.session 18 from botocore.client import Config File /opt/conda/lib/python3.10/site-packages/botocore/session.py:26 24 import warnings ---> 26 import botocore.client 27 import botocore.configloader File /opt/conda/lib/python3.10/site-packages/botocore/client.py:15 13 import logging ---> 15 from botocore import waiter, xform_name 16 from botocore.args import ClientArgsCreator File /opt/conda/lib/python3.10/site-packages/botocore/waiter.py:18 16 import jmespath ---> 18 from botocore.docs.docstring import WaiterDocstring 19 from botocore.utils import get_service_module_name File /opt/conda/lib/python3.10/site-packages/botocore/docs/__init__.py:15 13 import os ---> 15 from botocore.docs.service import ServiceDocumenter 17 DEPRECATED_SERVICE_NAMES = {'sms-voice'} File /opt/conda/lib/python3.10/site-packages/botocore/docs/service.py:14 13 from botocore.docs.bcdoc.restdoc import DocumentStructure ---> 14 from botocore.docs.client import ( 15 ClientContextParamsDocumenter, 16 ClientDocumenter, 17 ClientExceptionsDocumenter, 18 ) 19 from botocore.docs.paginator import PaginatorDocumenter File /opt/conda/lib/python3.10/site-packages/botocore/docs/client.py:18 17 from botocore.docs.bcdoc.restdoc import DocumentStructure ---> 18 from botocore.docs.example import ResponseExampleDocumenter 19 from botocore.docs.method import ( 20 document_custom_method, 21 document_model_driven_method, 22 get_instance_public_methods, 23 ) File 
/opt/conda/lib/python3.10/site-packages/botocore/docs/example.py:13 1 # Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"). You (...) 11 # ANY KIND, either express or implied. See the License for the specific 12 # language governing permissions and limitations under the License. ---> 13 from botocore.docs.shape import ShapeDocumenter 14 from botocore.docs.utils import py_default File /opt/conda/lib/python3.10/site-packages/botocore/docs/shape.py:19 1 # Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"). You (...) 17 # inherited from a Documenter class with the appropriate methods 18 # and attributes. ---> 19 from botocore.utils import is_json_value_header 22 class ShapeDocumenter: File /opt/conda/lib/python3.10/site-packages/botocore/utils.py:39 38 import botocore.awsrequest ---> 39 import botocore.httpsession 41 # IP Regexes retained for backwards compatibility File /opt/conda/lib/python3.10/site-packages/botocore/httpsession.py:45 44 # Always import the original SSLContext, even if it has been patched ---> 45 from urllib3.contrib.pyopenssl import ( 46 orig_util_SSLContext as SSLContext, 47 ) 48 except ImportError: File /opt/conda/lib/python3.10/site-packages/urllib3/contrib/pyopenssl.py:43 41 from __future__ import annotations ---> 43 import OpenSSL.SSL # type: ignore[import-untyped] 44 from cryptography import x509 File /opt/conda/lib/python3.10/site-packages/OpenSSL/__init__.py:8 4 """ 5 pyOpenSSL - A simple wrapper around the OpenSSL library 6 """ ----> 8 from OpenSSL import SSL, crypto 9 from OpenSSL.version import ( 10 __author__, 11 __copyright__, (...) 
17 __version__, 18 ) File /opt/conda/lib/python3.10/site-packages/OpenSSL/SSL.py:35 32 from OpenSSL._util import ( 33 text_to_bytes_and_warn as _text_to_bytes_and_warn, 34 ) ---> 35 from OpenSSL.crypto import ( 36 FILETYPE_PEM, 37 X509, 38 PKey, 39 X509Name, 40 X509Store, 41 _EllipticCurve, 42 _PassphraseHelper, 43 ) 45 __all__ = [ 46 "OPENSSL_VERSION_NUMBER", 47 "SSLEAY_VERSION", (...) 143 "X509VerificationCodes", 144 ] File /opt/conda/lib/python3.10/site-packages/OpenSSL/crypto.py:22 8 from typing import ( 9 Any, 10 Callable, (...) 19 Union, 20 ) ---> 22 from cryptography import utils, x509 23 from cryptography.hazmat.primitives.asymmetric import ( 24 dsa, 25 ec, (...) 28 rsa, 29 ) File /opt/conda/lib/python3.10/site-packages/cryptography/x509/__init__.py:7 5 from __future__ import annotations ----> 7 from cryptography.x509 import certificate_transparency, verification 8 from cryptography.x509.base import ( 9 Attribute, 10 AttributeNotFound, (...) 29 random_serial_number, 30 ) File /opt/conda/lib/python3.10/site-packages/cryptography/x509/verification.py:24 23 Subject = typing.Union[DNSName, IPAddress] ---> 24 VerifiedClient = rust_x509.VerifiedClient 25 ClientVerifier = rust_x509.ClientVerifier AttributeError: module 'x509' has no attribute 'VerifiedClient' The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1764, in _LazyModule._get_module(self, module_name) 1763 try: -> 1764 return importlib.import_module("." 
+ module_name, self.__name__) 1765 except Exception as e: File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package) 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:688, in _load_unlocked(spec) File <frozen importlib._bootstrap_external>:883, in exec_module(self, module) File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds) File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/modeling_auto.py:21 20 from ...utils import logging ---> 21 from .auto_factory import ( 22 _BaseAutoBackboneClass, 23 _BaseAutoModelClass, 24 _LazyAutoMapping, 25 auto_class_update, 26 ) 27 from .configuration_auto import CONFIG_MAPPING_NAMES File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:40 39 if is_torch_available(): ---> 40 from ...generation import GenerationMixin 43 logger = logging.get_logger(__name__) File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive) File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1754, in _LazyModule.__getattr__(self, name) 1753 elif name in self._class_to_module.keys(): -> 1754 module = self._get_module(self._class_to_module[name]) 1755 value = getattr(module, name) File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1766, in _LazyModule._get_module(self, module_name) 1765 except Exception as e: -> 1766 raise RuntimeError( 1767 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1768 f" traceback):\n{e}" 1769 ) from e RuntimeError: Failed to import transformers.generation.utils because of the 
following error (look up to see its traceback): module 'x509' has no attribute 'VerifiedClient' The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Cell In[22], line 3 1 from langchain_huggingface import HuggingFaceEmbeddings ----> 3 embeddings=HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") File /opt/conda/lib/python3.10/site-packages/langchain_huggingface/embeddings/huggingface.py:53, in HuggingFaceEmbeddings.__init__(self, **kwargs) 51 super().__init__(**kwargs) 52 try: ---> 53 import sentence_transformers # type: ignore[import] 54 except ImportError as exc: 55 raise ImportError( 56 "Could not import sentence_transformers python package. " 57 "Please install it with `pip install sentence-transformers`." 58 ) from exc File /opt/conda/lib/python3.10/site-packages/sentence_transformers/__init__.py:10 7 import os 9 from sentence_transformers.backend import export_dynamic_quantized_onnx_model, export_optimized_onnx_model ---> 10 from sentence_transformers.cross_encoder.CrossEncoder import CrossEncoder 11 from sentence_transformers.datasets import ParallelSentencesDataset, SentencesDataset 12 from sentence_transformers.LoggingHandler import LoggingHandler File /opt/conda/lib/python3.10/site-packages/sentence_transformers/cross_encoder/__init__.py:3 1 from __future__ import annotations ----> 3 from .CrossEncoder import CrossEncoder 5 __all__ = ["CrossEncoder"] File /opt/conda/lib/python3.10/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:14 12 from torch.utils.data import DataLoader 13 from tqdm.autonotebook import tqdm, trange ---> 14 from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, is_torch_npu_available 15 from transformers.tokenization_utils_base import BatchEncoding 16 from transformers.utils import PushToHubMixin File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive) File 
/opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1755, in _LazyModule.__getattr__(self, name) 1753 elif name in self._class_to_module.keys(): 1754 module = self._get_module(self._class_to_module[name]) -> 1755 value = getattr(module, name) 1756 else: 1757 raise AttributeError(f"module {self.__name__} has no attribute {name}") File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1754, in _LazyModule.__getattr__(self, name) 1752 value = Placeholder 1753 elif name in self._class_to_module.keys(): -> 1754 module = self._get_module(self._class_to_module[name]) 1755 value = getattr(module, name) 1756 else: File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1766, in _LazyModule._get_module(self, module_name) 1764 return importlib.import_module("." + module_name, self.__name__) 1765 except Exception as e: -> 1766 raise RuntimeError( 1767 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1768 f" traceback):\n{e}" 1769 ) from e RuntimeError: Failed to import transformers.models.auto.modeling_auto because of the following error (look up to see its traceback): Failed to import transformers.generation.utils because of the following error (look up to see its traceback): module 'x509' has no attribute 'VerifiedClient' ``` ### Description I was trying to use HuggingFaceEmbeddings and HuggingFaceBgeEmbeddings but I am getting the error mentioned above ### System Info ``` System Information ------------------ > OS: Linux > OS Version: #1 SMP Thu Jun 27 20:43:36 UTC 2024 > Python Version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] Package Information ------------------- > langchain_core: 0.3.13 > langchain: 0.3.4 > langchain_community: 0.3.3 > langsmith: 0.1.137 > langchain_chroma: 0.1.4 > langchain_groq: 0.2.0 > langchain_huggingface: 0.1.0 > langchain_openai: 0.2.3 > langchain_text_splitters: 0.3.0 Optional packages 
not installed ------------------------------- > langgraph > langserve Other Dependencies ------------------ > aiohttp: 3.9.5 > async-timeout: 4.0.3 > chromadb: 0.5.15 > dataclasses-json: 0.6.7 > fastapi: 0.111.0 > groq: 0.11.0 > httpx: 0.27.0 > huggingface-hub: 0.25.1 > jsonpatch: 1.33 > numpy: 1.26.4 > openai: 1.52.2 > orjson: 3.10.4 > packaging: 24.1 > pydantic: 2.9.2 > pydantic-settings: 2.6.0 > PyYAML: 6.0.2 > requests: 2.32.3 > requests-toolbelt: 1.0.0 > sentence-transformers: 3.2.1 > SQLAlchemy: 2.0.30 > tenacity: 8.3.0 > tiktoken: 0.8.0 > tokenizers: 0.20.0 > transformers: 4.45.1 > typing-extensions: 4.12.2 ```
stale
low
Critical
2,615,852,275
TypeScript
Regular Expression - improve on-hover notation for literal expressions
### 🔍 Search Terms regex, regular expression, syntax highlighting ### ✅ Viability Checklist - [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code - [x] This wouldn't change the runtime behavior of existing JavaScript code - [x] This could be implemented without emitting different JS based on the types of the expressions - [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.) - [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types - [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals ### ⭐ Suggestion I would like to be able to hover my mouse over a variable holding a regular expression and see the `literal string` or `/expression/` that was used in its construction (if known). ### 📃 Motivating Example This feature adds the ability to view the regular expression that went into a variable, meaning that you can abstract regular expressions away into separate files without losing the meaning behind what the regular expression represents. This is a quality of life improvement that does not solve the million problems that most regular expression examples look to solve, meaning that it won't parse or try to understand the context of the `RegExp`, but it will no longer disadvantage users who hoist their expressions rather than inline them. ### 💻 Use Cases 1. What do you want to use this for? - For projects where regular expressions would be better served by being hoisted to a different file (like a shared module) 2. What shortcomings exist with current approaches? 
- Current approaches require augmenting every regular expression with detailed comments, which is a lot of unnecessary boilerplate (which is more aligned with JSDoc rather than TypeScript) and can get messy when doing things like renaming capture groups and needing to update the comments accordingly. 3. What workarounds are you using in the meantime? - I'll show you what I've done, which works well for string-literal based regular expressions, but requires converting ALL `/forward slash/` regular expressions to be manually refactored to their string literal equivalents, which is very time consuming and shouldn't be necessary for the end user. Here's what I've developed to solve my own problem, but that still requires not using `/notation/`: ```ts /** * A regular expression that remembers the string literals used in its creation. */ export interface TypedRegExp<S extends string, F extends string> extends RegExp { source: S; flags: F; } export const createTypedRegExpr = <T extends string, M extends string = ''>( stringExpr: T, flags?: M ) => new RegExp(stringExpr, flags) as TypedRegExp<T, M>; ```
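To show the workaround's effect, here is a self-contained usage example (the helper is repeated so the snippet stands alone; `hexColor` is an illustrative pattern):

```typescript
// Same workaround as above, repeated so this example runs on its own.
interface TypedRegExp<S extends string, F extends string> extends RegExp {
  source: S;
  flags: F;
}

const createTypedRegExpr = <T extends string, M extends string = ''>(
  stringExpr: T,
  flags?: M
) => new RegExp(stringExpr, flags) as TypedRegExp<T, M>;

// Hovering `hexColor` now reveals the full literal in the type:
//   TypedRegExp<"^#[0-9a-f]{6}$", "i">
// ...even if this declaration lives in a different file from its use sites.
const hexColor = createTypedRegExpr('^#[0-9a-f]{6}$', 'i');
```

This works only for string-constructed expressions; a `/literal/` regex still shows plain `RegExp` on hover, which is the gap the suggestion asks to close.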
Suggestion,Awaiting More Feedback
low
Minor
2,615,869,637
react
[React 19] Unnecessary and unremovable `data-precedence` and `data-href` attributes on hoisted `<style>` and `<script>`.
## Summary https://codesandbox.io/p/devbox/28rwmr ```tsx import { Writable } from "node:stream"; import React, { StrictMode } from "react"; import { renderToPipeableStream } from "react-dom/server"; async function renderJsx(children: React.ReactNode): Promise<string> { return new Promise((resolve, reject) => { let { pipe } = renderToPipeableStream(<StrictMode>{children}</StrictMode>, { onAllReady: (err: unknown) => { if (err) { console.error("onAllReady", err); reject(err); return; } pipe( new (class extends Writable { #chunks: any[] = []; _writev(chunks: any, callback): void { this.#chunks.push(...chunks.map((c: any) => c.chunk)); callback(); } _final(callback: (error?: Error | null) => void): void { callback(); resolve(new Blob(this.#chunks).text()); } })() ); }, onShellError: reject, }); }); } renderJsx( <StrictMode> <html> <body> <style href="foo" precedence="medium">{` .my-p { color: purple; } `}</style> </body> </html> </StrictMode> ).then(console.log, console.error); ``` Output: ```html <!DOCTYPE html><html><head><style data-precedence="medium" data-href="foo"> .my-p { color: purple; } </style></head><body></body></html> ``` As expected this moved the style to the head but it has added unnecessary `data-precedence` and `data-href` attributes. I don't know if these are required for rehydration but if so they should be optional for static components or static output. Version: `19.0.0-rc-b8ae38f8-20241018`
Resolution: Stale,React 19
medium
Critical
2,615,872,820
vscode
debug: Indicate column of stack frame in editor
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> If you are stopped at a breakpoint, VS-Code indicates the column at which you are stopped within the text editor using a small, yellow pointer inlined into the text, as shown in this screenshot: ![Image](https://github.com/user-attachments/assets/3af7fc60-e116-487e-a4f8-e57cb582b955) When selecting one of the stack frames further up in the call stack, the corresponding line is highlighted using a green pointer in the sidebar. However, it does not indicate the column. See this screenshot: ![Image](https://github.com/user-attachments/assets/3f1e271f-f3e8-476b-947c-03ec3216afd7) It would be great if VS-Code would **show the column pointer within the text area not only for the breakpoint itself, but also for stack frames further up the stack.**
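For what it's worth, the Debug Adapter Protocol already delivers a column for every frame in the `stackTrace` response, so the data needed to render the pointer for non-top frames is available. A minimal sketch of the relevant shape (field names follow the DAP specification; the frame names and values are illustrative, not from the screenshots):

```typescript
// Subset of the DAP StackFrame type (per the Debug Adapter Protocol spec).
// Every frame in a `stackTrace` response carries `column`, not just the
// frame the debugger is stopped in.
interface StackFrame {
  id: number;
  name: string;
  line: number;      // line of the frame's location (1-based by default)
  column: number;    // column — this is what the editor could render
  endLine?: number;
  endColumn?: number;
}

// Illustrative frames for a two-deep call stack:
const frames: StackFrame[] = [
  { id: 1, name: 'getValueB', line: 10, column: 3 },
  { id: 2, name: 'main', line: 24, column: 17 },
];
```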
feature-request,debug
low
Critical
2,615,872,879
godot
Error exporting apk in Godot Mobile
### Tested versions - Reproducible in Godot 4.4.dev3. ### System information Godot v4.4.dev3 - Android 11 - Single-window, 1 monitor - Vulkan (Mobile) - integrated Adreno (TM) 610 - (8 threads) ### Issue description - Exporting an APK fails with an error when the export path is outside the folder that contains the Godot projects. I tested two folders, Downloads and Documents, and both give the same error. ### Steps to reproduce - Open a project in Godot mobile 4.4.dev3, press export, navigate to the Downloads folder, and try to export; it gives an error. - Now try exporting again, this time into the folder where the project is located; notice that there is no error. - I don't know if it's because of Android 11. If so, it would be better to show a message warning about Android's limitations. ### Minimal reproduction project (MRP) - N/A
bug,platform:android,topic:editor,topic:export
low
Critical
2,615,873,819
deno
[lint] ERR_MODULE_NOT_FOUND on a package without `main` or `module` (types only)
Version: Deno 2.0.3 In a case where a package contains types only — e.g. `@apollo/utils.fetcher` is such a package — importing it **without** a type import directive results in confusion. My gut feeling is that `deno lint` could suggest a type import. I expect my existing projects to have a lot of these. Possibly something similar to [consistent-type-imports](https://typescript-eslint.io/rules/consistent-type-imports/) could exist. index.ts: ```ts import { FetcherHeaders } from '@apollo/utils.fetcher'; let _foo: FetcherHeaders; ``` ``` ❯ deno run index.ts error: [ERR_MODULE_NOT_FOUND] Cannot find module 'file:///Users/slagiewka/Projects/deno-test/node_modules/.deno/@apollo+utils.fetcher@3.1.0/node_modules/@apollo/utils.fetcher/index.js' imported from 'file:///Users/slagiewka/Projects/deno-test/index.ts' at file:///Users/slagiewka/Projects/deno-test/index.ts:2:32 ❯ deno lint Checked 1 file ``` Works: ```ts import type { FetcherHeaders } from '@apollo/utils.fetcher'; let _foo: FetcherHeaders; ``` ``` ❯ deno run index.ts ❯ deno lint Checked 1 file ``` Running `node --experimental-strip-types index.ts` results in an error too, but with a slightly different (also confusing) message.
``` ❯ node --experimental-strip-types index.ts (node:45303) ExperimentalWarning: Type Stripping is an experimental feature and might change at any time (Use `node --trace-warnings ...` to show where the warning was created) node:internal/modules/esm/resolve:204 const resolvedOption = FSLegacyMainResolve(pkgPath, packageConfig.main, baseStringified); ^ Error: Cannot find package '/Users/slagiewka/Projects/deno-test/node_modules/@apollo/utils.fetcher' imported from /Users/slagiewka/Projects/deno-test/index.ts at legacyMainResolve (node:internal/modules/esm/resolve:204:26) at packageResolve (node:internal/modules/esm/resolve:827:14) at moduleResolve (node:internal/modules/esm/resolve:907:18) at defaultResolve (node:internal/modules/esm/resolve:1037:11) at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:650:12) at #cachedDefaultResolve (node:internal/modules/esm/loader:599:25) at ModuleLoader.resolve (node:internal/modules/esm/loader:582:38) at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:241:38) at ModuleJob._link (node:internal/modules/esm/module_job:132:49) { ```
needs discussion
low
Critical
2,615,898,042
godot
Entering specific shader names (e.g., 'icon_shader') causes Godot editor to freeze
### Tested versions v4.3.stable ### System information Windows 10. Godot v4.3.stable - Vulkan (Mobile) - NVIDIA GeForce GTX 1060 6GB - Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz (8 Threads) ### Issue description Setting the shader name of a Sprite2D with icon.svg as its texture to res://icon_shader.gdshader causes the Godot editor to freeze. Changing the shader name to res://shader_practice.gdshader (the same as the scene name) resolves this issue. ### Steps to reproduce 1. Create a new scene with Node2D as the root node. 2. Add a Sprite2D node as a child and set its texture by dragging and dropping icon.svg (the default icon) onto it. 3. Save the scene as shader_practice.tscn. 4. In the Material tab of the Sprite2D, create a new shader. 5. Set the shader's name to res://icon_shader.gdshader and create. At this point, the editor freezes. 6. However, when setting the shader's name to res://shader_practice.gdshader(same as the scene name), the shader can be created without issues. ### Minimal reproduction project (MRP) N/A
bug,topic:editor,topic:shaders
low
Minor
2,615,922,745
PowerToys
Remote Audio Output Capability
### Description of the new feature / enhancement I propose adding a feature to PowerToys that enables a Windows desktop's audio output to be played on another device, such as another PC, laptop, or smartphone. This feature would be extremely beneficial for users with faulty audio hardware on their primary device. Through a simple and secure pairing process, users could direct all system and application sounds to a secondary device on the same network, ensuring uninterrupted audio functionality. ### Scenario when this would be used? This feature is particularly useful in scenarios where a desktop computer cannot play sound due to broken or malfunctioning audio hardware. Users could redirect audio to their laptop or smartphone, facilitating a seamless audio experience for work, entertainment, or communication purposes. ### Supporting information _No response_
Needs-Triage
low
Critical
2,615,922,857
node
structuredClone() uses wrong context in vm.runInContext()
### Version v22.10.0 ### Platform ```text Darwin Silmaril.home 23.6.0 Darwin Kernel Version 23.6.0: Wed Jul 31 20:49:46 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T8103 arm64 ``` ### Subsystem vm ### What steps will reproduce the bug? ```js const vm = require('node:vm'); const context = vm.createContext({ structuredClone }); const result = vm.runInContext('structuredClone(new Error()) instanceof Error', context); console.log('match:', result); ``` ### How often does it reproduce? Is there a required condition? 100% ### What is the expected behavior? Why is that the expected behavior? `match: true` because `structuredClone` should use the context of the VM for the prototype of the produced `Error` instance. ### What do you see instead? ```match: false``` Additional investigation shows that the produced `Error` uses the prototype of the main runtime. I.e., a context escape, which would be a security issue if not for the disclaimer not to run untrusted code. ### Additional information This issue also applies to all other cloneable types like `Map`. Even `Object` fails `instanceof`: ```js structuredClone({}) instanceof Object // <= equals false inside VM ``` FYI, this causes problems when running tests in Jest, which uses the VM to run tests.
vm
low
Critical
2,615,923,609
PowerToys
Context menu does not work
### Microsoft PowerToys version 0.85.1 ### Installation method Microsoft Store ### Running as admin Yes ### Area(s) with issue? New+, PowerRename ### Steps to reproduce Unfortunately, reinstalling didn't help. The only option displayed is 'Open in Terminal'. ### ✔️ Expected Behavior _No response_ ### ❌ Actual Behavior Since the last PowerToys and Windows update, I've had the problem that certain apps no longer appear in the context menu—neither in the simple nor in the expanded context menu. Unfortunately, reinstalling didn't help. The only option displayed is 'Open in Terminal'. ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,615,962,388
rust
Tracking issue for release notes of #128351: Lint against `&T` to `&mut T` and `&T` to `&UnsafeCell<T>` transmutes
This issue tracks the release notes text for #128351. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ```markdown # Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...) - [Lint against `&T` to `&mut T` and `&T` to `&UnsafeCell<T>` transmutes](https://github.com/rust-lang/rust/pull/128351) ``` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ```markdown ``` cc @ChayimFriedman2, @wesleywiser -- origin issue/PR authors and assignees for starting to draft text
T-lang,relnotes,relnotes-tracking-issue
low
Minor
2,615,968,645
go
cmd/compile: missed bounds check elimination when paired slices are resliced
``` go version devel go1.24-140308837f Mon Oct 21 15:30:47 2024 +0200 darwin/arm64 ``` When the lengths of two slices are "paired" (`dst = dst[:len(src)]`), the compiler is smart enough to apply ifs to both, but if you reslice them the pairing is lost. ``` func foo(src, dst []byte) { dst = dst[:len(src)] for len(src) >= 50 { _ = (*[50]byte)(src) // no bounds check _ = (*[50]byte)(dst) // no bounds check } } func bar(src, dst []byte) { dst = dst[:len(src)] for len(src) >= 50 { src = src[50:] // no bounds check dst = dst[50:] // bounds check! } } ```
Performance,NeedsInvestigation,compiler/runtime
low
Minor
2,615,983,402
deno
node worker_threads --conditions bug (e.g. breaks vitest for solidjs)
Version: Deno 2.0.3 Repro: - `deno run -A npm:create-solid` - Select: SolidStart, TypeScript, **with-vitest** example - `deno i` - `deno task test` ### Vitest 1.6.0 Error: <details open > <summary>Deno fails - vitest 1.6.0</summary> ``` ➜ deno task test Task test vitest run RUN v1.6.0 ❯ src/components/Counter.test.tsx (1 test | 1 failed) 4ms ❯ src/components/Counter.test.tsx > <Counter /> > increments value → Client-only API called on the server side. Run client-only code in onMount, or conditionally run client-only component with <Show>. ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ Failed Tests 1 ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ FAIL src/components/Counter.test.tsx > <Counter /> > increments value Error: Client-only API called on the server side. Run client-only code in onMount, or conditionally run client-only component with <Show>. ❯ notSup node_modules/.deno/solid-js@1.9.3/node_modules/solid-js/web/dist/server.js:1136:9 ❯ Module.render node_modules/.deno/@solidjs+testing-library@0.8.10/node_modules/@solidjs/testing-library/dist/index.js:55:65 53| } 54| }) : wrappedUi; 55| const dispose = hydrate ? 
solidHydrate(routedUi, container) : solidRender(routedUi, container); | ^ 56| mountedContainers.add({ container, dispose }); 57| const queryHelpers = getQueriesForElement(container, queries); ❯ src/components/Counter.test.tsx:8:29 ❯ eventLoopTick ext:core/01_core.js:175:7 ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[1/1]⎯ Test Files 1 failed (1) Tests 1 failed (1) Start at 18:34:34 Duration 927ms (transform 71ms, setup 67ms, collect 184ms, tests 4ms, environment 302ms, prepare 80ms) ``` </details> Now trying to run in same folder with `npm run test`, `bun run test` or `node --run test` (Node 22) does work <details> <summary>Npm works - vitest 1.6.0</summary> ```sh ➜ npm run test > test > vitest run RUN v1.6.0 ✓ src/components/Counter.test.tsx (1) ✓ <Counter /> (1) ✓ increments value Test Files 1 passed (1) Tests 1 passed (1) Start at 18:36:43 Duration 566ms (transform 72ms, setup 45ms, collect 152ms, tests 39ms, environment 189ms, prepare 43ms) ``` </details> ### Vitest 2.1.3 (or vitest 3.x) - Same thing Tried updating the Vitest to 2.x `"vitest": "^2.0.0"` (upstream [PR](https://github.com/solidjs/solid-start/pull/1665)), and it's the same result: <details> <summary>Deno fails - vitest 2.1.3</summary> ``` ➜ deno task test Task test vitest run RUN v2.1.3 ❯ src/components/Counter.test.tsx (1) ❯ <Counter /> (1) × increments value ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ Failed Tests 1 ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ FAIL src/components/Counter.test.tsx > <Counter /> > increments value Error: Client-only API called on the server side. Run client-only code in onMount, or conditionally run client-only component with <Show>. 
❯ notSup node_modules/.deno/solid-js@1.9.3/node_modules/solid-js/web/dist/server.js:1136:9 ❯ Module.render node_modules/.deno/@solidjs+testing-library@0.8.10/node_modules/@solidjs/testing-library/dist/index.js:55:65 53| } 54| }) : wrappedUi; 55| const dispose = hydrate ? solidHydrate(routedUi, container) : solidRender(routedUi, container); | ^ 56| mountedContainers.add({ container, dispose }); 57| const queryHelpers = getQueriesForElement(container, queries); ❯ src/components/Counter.test.tsx:8:29 ❯ Object.runMicrotasks ext:core/01_core.js:672:26 ❯ processTicksAndRejections ext:deno_node/_next_tick.ts:57:10 ❯ runNextTicks ext:deno_node/_next_tick.ts:74:3 ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[1/1]⎯ Test Files 1 failed (1) Tests 1 failed (1) Start at 19:01:16 Duration 861ms (transform 80ms, setup 98ms, collect 201ms, tests 3ms, environment 332ms, prepare 48ms) ``` </details> with npm it works <details> <summary>Npm works - vitest 2.1.3</summary> ``` ➜ npm run test > test > vitest run RUN v2.1.3 (node:99528) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead. (Use `node --trace-deprecation ...` to show where the warning was created) stdout | src/components/Counter.test.tsx > <Counter /> > increments value TESTVAR: undefined ✓ src/components/Counter.test.tsx (1) ✓ <Counter /> (1) ✓ increments value Test Files 1 passed (1) Tests 1 passed (1) Start at 19:01:39 Duration 709ms (transform 82ms, setup 75ms, collect 139ms, tests 35ms, environment 201ms, prepare 73ms) ``` </details>
bug,node compat,node resolution
low
Critical
2,615,988,704
PowerToys
Tabs not saved with File Explorer window in the Workspaces utility
### Description of the new feature / enhancement Workspaces is potentially a great utility, especially when managing files with multiple Windows Explorer windows. The drawback is that a Workspace cannot save the configured subfolders or tabs of the individual Explorer windows. Unfortunately, Workspaces only saves the generic configuration of the large and small Explorer windows, both of which reset to the root folder or directory, with no tabs or proper destination folders for each tab as configured in the layout when captured. The feature could have incredible power for file transfers and system organization if only the utility stored more info, including the tabs and subfolders configured for each Explorer window. ### Scenario when this would be used? For example, we often use two windows to transfer photos from our mobile phone. The configuration includes a half-screen Explorer window on the left and a quarter-screen Explorer window on the lower right. The larger window on the left displays our mobile phone photos in extra-large format. The smaller window on the lower right has three tabs. One tab for a family events subfolder in chronological order, another tab for a personal property (purchases) subfolder, and a third tab for a special purposes subfolder. This configuration gives us quick access to all the folders needed for quickly archiving the photos from our mobile phone to these folders and their related subfolders. ### Supporting information _No response_
Needs-Triage,Product-Workspaces
low
Minor
2,615,995,449
svelte
False positive TypeScript error when spreading attributes on div HTML element
### Describe the bug There's a type mismatch between the `<div>` element type and `HTMLAttributes<HTMLDivElement>` ### Reproduction Type this into a Svelte file in VS Code. Notice the error on the spread of `restProps` on the `<div>`. ```svelte <script lang="ts"> import type { HTMLAttributes } from 'svelte/elements'; const { children, ...restProps }: HTMLAttributes<HTMLDivElement> = $props(); </script> <div {...restProps}> {@render children?.()} </div> ``` I've encountered a few issues related to spreading of props. Most are fairly simple to work around by just `Omit<>`ing the offending attribute and typing it correctly. As a catch all, it might be a good idea to create a TypeScript test for all of the elements in `import('svelte/element').SvelteHTMLElements`. ### Logs ```shell Types of property 'hidden' are incompatible. Type 'boolean | "" | "until-found" | null | undefined' is not assignable to type 'boolean | null | undefined'. Type '""' is not assignable to type 'boolean | null | undefined'.ts(2345) ``` ### System Info ```shell Svelte: ^5.1.3 => 5.1.3 ``` ### Severity annoyance
types / typescript
low
Critical
2,616,013,962
excalidraw
feature request: excalidraw to mermaid
I saw that GitHub Markdown supports Mermaid now. I'd like to reduce the friction of going from Excalidraw to a GitHub issue, and an Excalidraw-to-Mermaid export could help with that.
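For scoping, here is a toy sketch of what such an export could look like. The input element shapes below are a simplified stand-in for the real `.excalidraw` JSON (the `type`, `id`, `text`, and arrow `startBinding`/`endBinding` fields are assumptions of this sketch, not a faithful reading of the format):

```javascript
// Toy Excalidraw -> Mermaid flowchart converter sketch. The input shape is a
// simplified assumption about the .excalidraw JSON, only here to illustrate
// the direction of the requested feature, not a working importer.
function toMermaid(elements) {
  // Map rectangle ids to their labels so arrows can reference them.
  const labels = new Map(
    elements.filter((e) => e.type === 'rectangle').map((e) => [e.id, e.text])
  );
  const lines = ['graph TD'];
  for (const e of elements) {
    if (e.type === 'arrow' && e.startBinding && e.endBinding) {
      const from = labels.get(e.startBinding.elementId);
      const to = labels.get(e.endBinding.elementId);
      if (from && to) {
        lines.push(`  ${e.startBinding.elementId}[${from}] --> ${e.endBinding.elementId}[${to}]`);
      }
    }
  }
  return lines.join('\n');
}

const scene = [
  { type: 'rectangle', id: 'a', text: 'Draw' },
  { type: 'rectangle', id: 'b', text: 'Export' },
  { type: 'arrow', id: 'e1', startBinding: { elementId: 'a' }, endBinding: { elementId: 'b' } },
];
console.log(toMermaid(scene));
```

A real export would have to deal with unbound arrows, non-rectangle shapes, and text labels bound as separate elements.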
enhancement
low
Minor
2,616,040,391
node
`fs.rmSync(path, { recursive: true, maxRetries: manyManyRetries})` throws on `ENFILE`
### Version v22.8.0 ### Platform ```text Linux tumba 6.11.4-gentoo-yuran-r7 #8 SMP Sat Oct 19 16:45:43 +08 2024 x86_64 Intel(R) Core(TM)2 Duo CPU E6550 GenuineIntel GNU/Linux ``` ### Subsystem fs ### What steps will reproduce the bug? Set low limits or high `FILE_COUNT`, run the following: ```mjs import fs from 'node:fs'; import { readFile } from 'node:fs/promises'; const OUTPUT_DIRECTORY = "/tmp/emfile-check"; const FILE_COUNT = 150_000; // adjust this according to ulimit -n, ulimit -Hn, /proc/sys/fs/file-max, etc. function generateFiles() { fs.rmSync(OUTPUT_DIRECTORY, { recursive: true, force: true, maxRetries: 8192 }); fs.mkdirSync(OUTPUT_DIRECTORY, { recursive: true }); for (let i = 0; i < FILE_COUNT; i++) { const fileName = `file_${i}.js`; const fileContent = `// This is file ${i}`; fs.writeFileSync(`${OUTPUT_DIRECTORY}/${fileName}`, fileContent); } } function generateEmFileError() { return Promise.all( Array.from({ length: FILE_COUNT }, (_, i) => { const fileName = `file_${i}.js`; return readFile(`${OUTPUT_DIRECTORY}/${fileName}`); }) ); } console.log(`Generating ${FILE_COUNT} files in ${OUTPUT_DIRECTORY}...`); generateFiles(); console.log("Checking that this number of files would cause an EMFILE error..."); await generateEmFileError() .then(() => { throw new Error("Test failure: not enough files to encounter EMFILE."); }) .catch(error => { if (error.code === "EMFILE") { console.log("Successfully got intentional EMFILE:", error.message); } else if (error.code === "ENFILE") { console.log("Successfully got intentional ENFILE:", error.message); } else { console.error("Failed with unknown error:", error.message); throw error; } }) .finally(() => { console.log(`Attempting to remove ${OUTPUT_DIRECTORY}`); fs.rmSync(OUTPUT_DIRECTORY, { recursive: true, force: true, maxRetries: 8192 }); console.log(`Successfully removed ${OUTPUT_DIRECTORY}`); }); ``` ### How often does it reproduce? Is there a required condition? 
Might be flaky depending on system configuration (under some conditions exhausting fd limits may render system unresponsive). ### What is the expected behavior? Why is that the expected behavior? ``` Generating 150000 files in /tmp/emfile-check... Checking that this number of files would cause an EMFILE error... Successfully got intentional ENFILE: ENFILE: file table overflow, open '/tmp/emfile-check/file_63275.js' Attempting to remove /tmp/emfile-check Successfully removed /tmp/emfile-check ``` We should encounter the ENFILE via `generateEmFileError` function, catch the error, and remove the directory. Directory removal might encounter errors due to unfinished jobs, but it should retry with 100ms delay 8192 times (more than 13 minutes). ### What do you see instead? ``` Generating 150000 files in /tmp/emfile-check... Checking that this number of files would cause an EMFILE error... Successfully got intentional ENFILE: ENFILE: file table overflow, open '/tmp/emfile-check/file_63275.js' Attempting to remove /tmp/emfile-check node:fs:1502 const result = binding.readdir( ^ Error: ENFILE: file table overflow, scandir '/tmp/emfile-check' at readdirSync (node:fs:1502:26) at _rmdirSync (node:internal/fs/rimraf:249:29) at rimrafSync (node:internal/fs/rimraf:192:7) at Object.rmSync (node:fs:1247:10) at file:///tmp/repro/t.mjs:52:8 at <anonymous> at async file:///tmp/repro/t.mjs:36:1 { errno: -23, code: 'ENFILE', syscall: 'scandir', path: '/tmp/emfile-check' } Node.js v22.8.0 ``` ### Additional information According to the [docs](https://nodejs.org/api/fs.html#fsrmsyncpath-options), it must continue in case of `ENFILE`.
fs
low
Critical
2,616,046,881
deno
vite/node project can't use e.g. `jsr:@db/sqlite` in package.json/node-compat mode; migrating to deno.json breaks resolution
Version: Deno 2.0.3 Say I try to run a project, like a new SolidStart example - deno run -A npm:create-solid - Select SolidStart, TypeScript, Basic And then I port package.json over to deno.json ``` { "tasks": { "dev": "vinxi dev", "build": "vinxi build", "start": "vinxi start", "version": "vinxi version" }, "imports": { "@solidjs/meta": "npm:@solidjs/meta@^0.29.4", "@solidjs/router": "npm:@solidjs/router@^0.14.10", "@solidjs/start": "npm:@solidjs/start@^1.0.9", "solid-js": "npm:solid-js@^1.9.2", "vinxi": "npm:vinxi@^0.4.3" } } ``` Now it breaks, because it can't resolve any dependencies of these `npm:` packages whose specifiers don't have the `npm:` prefix (e.g. `solid-js/web`): ``` ➜ Local: http://localhost:3000/ ➜ Network: use --host to expose 8:13:04 PM [vite] Pre-transform error: Failed to load url solid-js/web (resolved id: solid-js/web) in /Users/admin/repos/deno-kitchensink/solid-start-basic/src/entry-server.tsx. Does the file exist? (x5) 8:13:04 PM [vite] Pre-transform error: Failed to load url @solidjs/start/server (resolved id: @solidjs/start/server) in /Users/admin/repos/deno-kitchensink/solid-start-basic/src/entry-server.tsx. Does the file exist? 8:13:04 PM [vite] Error when evaluating SSR module /src/entry-server.tsx: failed to import "solid-js/web" |- Error: Cannot find module 'solid-js/web' imported from '/Users/admin/repos/deno-kitchensink/solid-start-basic/src/entry-server.tsx' at nodeImport (/Users/admin/Library/Caches/deno/npm/registry.npmjs.org/vite/5.4.10/dist/node/chunks/dep-BWSbWtLw.js:53047:19) at ssrImport (/Users/admin/Library/Caches/deno/npm/registry.npmjs.org/vite/5.4.10/dist/node/chunks/dep-BWSbWtLw.js:52914:22) at undefined ``` Staying with package.json, on the other hand, doesn't allow usage of `jsr:@db/sqlite`.
bug,bundler-resolution
low
Critical
2,616,063,121
pytorch
Support `attn_mask` in `jagged_scaled_dot_product_attention`
### 🚀 The feature, motivation and pitch Currently, providing an attention mask argument to [jagged_scaled_dot_product_attention](https://github.com/pytorch/pytorch/blob/main/torch/nested/_internal/sdpa.py#L677) is not supported (see https://github.com/pytorch/pytorch/blob/main/torch/nested/_internal/sdpa.py#L67). I don't know how technically feasible this is, as I'm not familiar with the structure of the various SDPA backends, but theoretically I'm hoping we could allow passing a nested tensor attn_mask (strided layout only, not jagged, b/c by definition the nested tensor will have two ragged dimensions). Ideally this would be supported at training time, not just inference (e.g. with autograd support). ### Alternatives For use cases where attention masks are required but inputs are jagged, I believe this only leaves the option of converting everything to a dense padded tensor, which is not very efficient. ### Additional context I'm new to the NJT APIs, so if there is a better way to accomplish this that already exists, please let me know! cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @erichan1 @mikaylagawarecki @crcrpar @mcarilli @janeyx99
feature,triaged,module: nestedtensor,module: sdpa
low
Major
2,616,072,764
tauri
[bug] Service worker not registered in Android
### Describe the bug I have a Svelte app with a custom service worker. It works fine in the Linux app, but in Android I'm getting this error: ``` 10-26 19:45:52.643 8304 8304 E Tauri/Console: File: http://tauri.localhost/src/main.ts - Line 24 - Msg: Service worker registration failed with TypeError: Failed to register a ServiceWorker for scope ('http://tauri.localhost/') with script ('http://tauri.localhost/service-worker.js'): An unknown error occurred when fetching the script. TypeError: Failed to register a ServiceWorker for scope ('http://tauri.localhost/') with script ('http://tauri.localhost/service-worker.js'): An unknown error occurred when fetching the script. ``` I suspect the problem might be with the URL (scope), as a service worker is installed only from a secure (HTTPS) origin or from http://localhost (the latter is a kind of exception, mainly for development). Here it is http://tauri.localhost - I think this may be the problem. ### Reproduction My project is at https://github.com/izderadicka/audioserve-web/tree/tauri - so it's now on the tauri branch. 1. Clone the project 2. Switch to the tauri branch 3. `cargo tauri android dev` - the device is an emulated Android 14 4. The problem with registering the service worker should appear in the terminal listing ### Expected behavior Service worker registration should work - it does in the Linux desktop app. ### Full `tauri info` output ```text $ cargo tauri info [✔] Environment - OS: Ubuntu 22.4.0 x86_64 (X64) ✔ webkit2gtk-4.1: 2.44.3 ✔ rsvg2: 2.52.5 ✔ rustc: 1.81.0 (eeb90cda1 2024-09-04) ✔ cargo: 1.81.0 (2dbb1af80 2024-08-20) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN) - node: 20.18.0 - yarn: 1.22.11 - npm: 10.8.2 [-] Packages - tauri 🦀: 2.0.6 - tauri-build 🦀: 2.0.2 - wry 🦀: 0.46.3 - tao 🦀: 0.30.3 - tauri-cli 🦀: 2.0.3 - @tauri-apps/api : not installed! 
- @tauri-apps/cli : 2.0.3 (outdated, latest: 2.0.4) [-] Plugins - tauri-plugin-log 🦀: 2.0.1 - @tauri-apps/plugin-log : not installed! [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:5000/ - framework: Svelte - bundler: Rollup ``` ### Stack trace _No response_ ### Additional context _No response_
type: bug,status: needs triage
low
Critical
2,616,101,638
ollama
Use ARM64 extensions?
Like NEON on ARM64, and possibly other extensions? Greets.
feature request
low
Major
2,616,109,844
vscode
Insiders API change: NotebookSerializer.serializeNotebook no longer receives a NotebookData class instance (only an object with the same interface)
Does this issue occur when all extensions are disabled?: Yes Version: 1.95.0-insider (user setup) Commit: 38dc6ac5a771cc94bde1344722bb2d02c80096ea Date: 2024-10-25T22:08:52.005Z Electron: 32.2.1 ElectronBuildId: 10427718 Chromium: 128.0.6613.186 Node.js: 20.18.0 V8: 12.8.374.38-electron.0 OS: Windows_NT x64 10.0.22631 Steps to Reproduce: 1. implement the NotebookSerializer interface in an extension 2. check the first argument of serializeNotebook with `notebookData instanceof vscode.NotebookData` 3. with the newest VS Code Insiders this check returns false. 
With the latest stable VS Code this check returns true. There were changes in NotebookSerializer; maybe these produced the changed behaviour: https://github.com/microsoft/vscode/commit/c4645ea0ec0093893e7c16b1f628128d439da671 (the worker uses structuredClone) Code which produces this error: https://github.com/AnWeber/httpbook/blob/ef2aef382f6e906ca09bafede1bd7591f15125a0/src/extension/notebook/httpNotebookSerializer.ts#L190 Bug report from a user on Insiders: https://github.com/AnWeber/httpbook/issues/134 Workaround: I will implement a typeguard to check whether the object matches the API of the NotebookData class (https://github.com/AnWeber/httpbook/blob/526a9f4d15e99d239cd73c03e124b0bce94644bc/src/extension/notebook/httpNotebookSerializer.ts#L227)
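The typeguard workaround can be sketched structurally — checking the object's shape instead of its prototype, since `structuredClone` across the worker boundary preserves the shape but drops class identity. The checked property follows the public `NotebookData` API (a `cells` array); the actual guard in httpbook may check more:

```javascript
// Structural stand-in for `data instanceof vscode.NotebookData`: after a
// structuredClone() round-trip the prototype chain is lost, but the shape
// survives. NotebookData's public API has a cells array (and optional metadata).
function looksLikeNotebookData(value) {
  return (
    typeof value === 'object' &&
    value !== null &&
    Array.isArray(value.cells)
  );
}

console.log(looksLikeNotebookData({ cells: [] }));
console.log(looksLikeNotebookData(structuredClone({ cells: [{ kind: 2, value: '', languageId: 'http' }] })));
```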
under-discussion,confirmed,notebook-api
low
Critical
2,616,112,745
rust
Variance not computed for concrete GAT references in struct fields
Similar to https://github.com/rust-lang/rust/issues/114221#issuecomment-1656912412. I tried this code: ```rust trait Foo { type Ref<'a>; } struct Bar; impl Foo for Bar { type Ref<'a> = &'a u8; } struct Baz<'a>(<Bar as Foo>::Ref<'a>); fn f<'a>(s: Baz<'static>) { let _: Baz<'a> = s; } ``` I expected to see this happen: lifetime check passes Instead, this happened: ``` error: lifetime may not live long enough --> src/main.rs:12:36 | 12 | fn f<'a>(s: Baz<'static>) { let _: Baz<'a> = s; } | -- lifetime `'a` defined here ^^^^^^^ type annotation requires that `'a` must outlive `'static` | = note: requirement occurs because of the type `Baz<'_>`, which makes the generic argument `'_` invariant = note: the struct `Baz<'a>` is invariant over the parameter `'a` = help: see <https://doc.rust-lang.org/nomicon/subtyping.html> for more information about variance ``` Although lifetime variance does get computed when the GAT type is directly referenced in the parameter type: ```rust fn f<'a>(s: <Bar as Foo>::Ref<'static>) { let _: <Bar as Foo>::Ref<'a> = s; } // check passes ``` Edit: The same limitation applies to types: ```rust trait Foo { type GAT<T>; } struct Bar; impl Foo for Bar { type GAT<T> = T; } fn f<'a>(s: <Bar as Foo>::GAT<&'static ()>) { let _: <Bar as Foo>::GAT<&'a ()> = s; } // this passes the check struct Baz<T>(<Bar as Foo>::GAT<T>); fn f1<'a>(s: Baz<&'static ()>) { let _: Baz<&'a ()> = s; } // but this doesn't ``` ### Meta `rustc --version --verbose`: ``` rustc 1.84.0-nightly (c1db4dc24 2024-10-25) binary: rustc commit-hash: c1db4dc24267a707409c9bf2e67cf3c7323975c8 commit-date: 2024-10-25 host: aarch64-apple-darwin release: 1.84.0-nightly LLVM version: 19.1.1 ```
C-discussion,A-variance,T-types,A-GATs
low
Critical
2,616,116,856
langchain
DOC: The Bedrock documentation is under "AWS"; it needs to change back to the earlier "Bedrock"
### URL https://python.langchain.com/docs/how_to/dynamic_chain/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: https://python.langchain.com/docs/how_to/dynamic_chain/ and many other pages all need to say Bedrock, because AWS is a wide scope that also includes SageMaker. ### Idea or request for content: https://python.langchain.com/docs/how_to/dynamic_chain/ needs to have the documentation under a "Bedrock" tab like it was before, and it could also be moved towards the left side.
🤖:docs,stale
low
Minor
2,616,121,127
godot
Delayed crash after Animations with same name are added to the same AnimationLibrary
### Tested versions reproducible in v4.3.stable.nixpkgs [77dcf97d8] ### System information Godot v4.3.stable (77dcf97d8) - NixOS #1-NixOS SMP PREEMPT_DYNAMIC Thu Sep 12 09:13:13 UTC 2024 - X11 - Vulkan (Mobile) - integrated Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i5-1240P (16 Threads) ### Issue description Low information core-dump when adding and hacking with Animation objects while making a multiplayer plugin. ``` ================================================================ handle_crash: Program crashed with signal 11 Engine version: Godot Engine v4.3.stable.nixpkgs (77dcf97d82cbfe4e4615475fa52ca03da645dbd8) Dumping the backtrace. Please include this when reporting the bug to the project developer. [1] /nix/store/sl141d1g77wvhr050ah87lcyz2czdxa3-glibc-2.40-36/lib/libc.so.6(+0x40620) [0x7ffff7d0d620] (??:0) [2] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x2cb27a4] (??:0) [3] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x54229e] (??:0) [4] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x4324cd1] (??:0) [5] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x4325390] (??:0) [6] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x4299b0a] (??:0) [7] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x7cbce4] (??:0) [8] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x68753c] (??:0) [9] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x234258f] (??:0) [10] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x2a0a4c5] (??:0) [11] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x429d1f8] (??:0) [12] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x22cfe18] (??:0) [13] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() 
[0x22cfdc4] (??:0) [14] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x2326fbf] (??:0) [15] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x42144f] (??:0) [16] /nix/store/sl141d1g77wvhr050ah87lcyz2czdxa3-glibc-2.40-36/lib/libc.so.6(+0x2a27e) [0x7ffff7cf727e] (??:0) [17] /nix/store/sl141d1g77wvhr050ah87lcyz2czdxa3-glibc-2.40-36/lib/libc.so.6(__libc_start_main+0x89) [0x7ffff7cf7339] (??:0) [18] /nix/store/ss26dyx50lsqh5jdzyq621iq6vcdgxk0-godot4-4.3-stable/bin/godot4() [0x43e605] (??:0) -- END OF BACKTRACE -- ================================================================ ``` ### Steps to reproduce I was unintentionally adding Animations with the same name to the same AnimationLibrary due to the fact that they were not set as local_to_scene when they had been instantiated in a scene. This seems to have created a bad pointer in the system which causes a crash on evaluation of `$AnimationPlayer.current_animation_length` after a delay. 
### Minimal reproduction project (MRP) This is a very minimal project that reproduces the crash immediately [animationcrash.zip](https://github.com/user-attachments/files/17531789/animationcrash.zip) ```GDScript extends Node2D var a1 : Animation var a2 : Animation func _ready(): var alib : AnimationLibrary = $AnimationPlayer.get_animation_library("liblib") print(alib, $AnimationPlayer2.get_animation_library("liblib")) var at = alib.get_animation("temp") a1 = at.duplicate() a1.track_insert_key(0, 0.0, Vector2(1,1)) alib.add_animation("anim1", a1) $AnimationPlayer.play("liblib/anim1") $AnimationPlayer.pause() var at2 = alib.get_animation("temp") a2 = at2.duplicate() a2.track_insert_key(0, 0.0, Vector2(1,1)) alib.add_animation("anim1", a2) print("Animation list ", alib.get_animation_list()) prints(a1.length, a2.length, $AnimationPlayer.current_animation_length) # The Crash happens only on the delayed evaluation of current_animation_length await get_tree().create_timer(0.1).timeout prints(a1.length, a2.length, $AnimationPlayer.current_animation_length) ```
bug,needs testing,crash,topic:animation
low
Critical
2,616,129,752
vscode
Comments broken in markdown codeblock
create markdown file: ````md ```ts function abc() { return; } ``` * ```ts function abc() { return; } ``` ```` press comment keybind (ctrl+/) on the `return;` lines Expected: TS comment `//` added before the `return;` ✅ ![Image](https://github.com/user-attachments/assets/a259f2af-62a4-4588-9bf2-b872f1222bf9) Actual: `return;` surrounded with an HTML comment `<!-- -->` ❌ ![Image](https://github.com/user-attachments/assets/c976b71e-da06-457b-8708-72acc0a3ac8f) This is due to the comment handling code: it checks for the language at the start of the line but then places the comment after the last whitespace indent. A fix would be to check for the language after the last whitespace indent, OR to place the comment at the start of the line: https://github.com/microsoft/vscode/blob/main/src/vs/editor/contrib/comment/browser/lineCommentCommand.ts Does this issue occur when all extensions are disabled?: Yes - VS Code Version: 1.94.2 - OS Version: Windows 11
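The fix described above can be sketched in a language-neutral way (the `lang_at` helper is hypothetical, standing in for VS Code's token-based language lookup; the real `lineCommentCommand.ts` logic differs):

```python
def toggle_line_comment(line: str, lang_at) -> str:
    """Insert a line comment using the language active at the first
    non-whitespace column instead of column 0, so code inside a
    fenced ```ts block gets `//` rather than markdown's `<!-- -->`."""
    indent = len(line) - len(line.lstrip())
    token = lang_at(indent)  # look up the language at the indent, not at column 0
    return line[:indent] + token + " " + line[indent:]

# Toy lookup: pretend columns >= 2 are inside a TypeScript fence.
lang_at = lambda col: "//" if col >= 2 else "<!--"
print(toggle_line_comment("  return;", lang_at))  # prints "  // return;"
```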
feature-request,editor-comments
low
Critical
2,616,131,322
rust
incorrect suggestion for `use<...>` bound that captures multiple lifetimes
### Code ```rust struct T; impl T { fn iter(&self, t: &T) -> impl Iterator { std::iter::once((self, t)) } } ``` ### Current output `cargo check` currently suggests ``` error[E0700]: hidden type for `impl std::iter::Iterator` captures lifetime that does not appear in bounds --> src/main.rs:5:3 | 4 | fn iter(&self, t: &T) -> impl Iterator { | ----- ------------- opaque type defined here | | | hidden type `std::iter::Once<(&T, &T)>` captures the anonymous lifetime defined here 5 | std::iter::once((self, t)) | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | help: add a `use<...>` bound to explicitly capture `'_` | 4 | fn iter(&self, t: &T) -> impl Iterator + use<'_> { | +++++++++ ``` if i apply that suggestion it still complains about another implicitly captured lifetime and suggests this. ``` help: add `'_` to the `use<...>` bound to explicitly capture it | 4 | fn iter(&self, t: &T) -> impl Iterator + use<'_, '_> { | ++++ ``` if i apply this suggestion the code doesn't compile either with this error ``` error: cannot capture parameter `'_` twice --> src/main.rs:8:47 | 4 | fn iter(&self, t: &T) -> impl Iterator + use<'_, '_> { | ^^ -- parameter captured again here ``` and it still complains about a captured lifetime that doesn't appear in the bounds with another helpful suggestion ``` help: add `'_` to the `use<...>` bound to explicitly capture it | 4 | fn iter(&self, t: &T) -> impl Iterator + use<'_, '_, '_> { | ++++ ``` ### Desired output ``` help: add a `use<...>` bound to explicitly capture the lifetimes | 4 | fn iter<'a, 'b>(&'a self, t: &'b T) -> impl Iterator + use<'a, 'b> { | ++++++++ +++ +++ +++++++++++++ ``` ### Rationale and extra context _No response_ ### Other cases _No response_ ### Rust Version rustc 1.84.0-nightly (c1db4dc24 2024-10-25) binary: rustc commit-hash: c1db4dc24267a707409c9bf2e67cf3c7323975c8 commit-date: 2024-10-25 host: x86_64-unknown-linux-gnu release: 1.84.0-nightly LLVM version: 19.1.1 ### Anything else? _No response_
A-diagnostics,T-compiler,A-suggestion-diagnostics,D-incorrect,F-precise_capturing
low
Critical
2,616,131,431
TypeScript
Should validate iteree when using `using`/`await using` in loop initializer
### 🔍 Search Terms explicit resource management, using, await using, loop initializer, Symbol.dispose, Symbol.asyncDispose, iteration ### ✅ Viability Checklist - [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code - [x] This wouldn't change the runtime behavior of existing JavaScript code - [x] This could be implemented without emitting different JS based on the types of the expressions - [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.) - [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types - [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals ### ⭐ Suggestion In this code, ```ts export function throwsException() { for (using x of [1, 2, 3]) { console.log(x); } } ``` I expect to see an error. It doesn't currently show one ([playground link](https://www.typescriptlang.org/play/?target=99&ts=5.7.0-dev.20241026#code/KYDwDg9gTgLgBAMwK4DsDGMCWEVxgCyggHcBnAURDWDCxwAoBKOAbwCg5PFo56lTMKAOZwQcCAjgBtAIwAaOACYFAZgC6zdl21w0OUhAA2wAHSGIQ+iEYBuDlwC+bB0A)) If I did something like `using x = 3;` I would get something like `The initializer of a 'using' declaration must be either an object with a '[Symbol.dispose]()' method, or be 'null' or 'undefined'`. I think a similar error message would be appropriate here too. ### 📃 Motivating Example ```ts export function throwsException() { for (using x of [1, 2, 3]) { console.log(x); } } ``` is preventable exception-throwing code. And validating it a logical extension of validating ```ts export function throwsException() { for (const x of [1, 2, 3]) { using y = x; // TS ERROR: The initializer of a 'using' declaration must be either an object with a '[Symbol.dispose]()' method, or be 'null' or 'undefined' console.log(x); } } ``` ### 💻 Use Cases 1. What do you want to use this for? 
Safer usage of explicit resource management features. 2. What shortcomings exist with current approaches? No validation of explicit resource management declarations in loop initializers. 3. What workarounds are you using in the meantime? N/A.
Needs Investigation
low
Critical
2,616,147,834
PowerToys
custom time stamps in PowerToys for Shamsi calendar and Lunar Hijri calendar
### Microsoft PowerToys version 0.85.0 ### Installation method GitHub ### Running as admin None ### Area(s) with issue? PowerToys Run ### Steps to reproduce Hi, how can I create custom time stamps in PowerToys Run for the Shamsi calendar and the Lunar Hijri calendar, like these? ۱۴۰۳/۰۸/۰۵ - ۰۹:۱۸ ق‌.ظ یکشنبه - ۶ آبان ۱۴۰۳ and ۱۴۴۶/۰۴/۲۳ الأحد - ٢٣ ربيع الثاني ١٤٤٦ The description for "PowerToys Run" mentions this, but when I enable PowerToys Run I get the Gregorian calendar only, while I need all of these dates. Thanks. ### ✔️ Expected Behavior I want all calendars (Shamsi, Lunar Hijri, and Gregorian) in PowerToys Run. ### ❌ Actual Behavior I have the Gregorian calendar only! ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,616,148,973
tauri
[bug] Failed to assemble APK
### Describe the bug Failed to build the app for Android with Svelte template ### Reproduction cargo create-tauri-app Choose Typescript/Javascript pnpm as package manager Svelte as UI template Typescript cd tauri-app pnpm i cargo tauri android init cargo tauri android dev ### Expected behavior build and run successfully ### Full `tauri info` output ```text [✔] Environment - OS: Arch Linux Rolling Release x86_64 (X64) ✔ webkit2gtk-4.1: 2.46.2 ✔ rsvg2: 2.59.1 ✔ rustc: 1.81.0 (eeb90cda1 2024-09-04) ✔ cargo: 1.81.0 (2dbb1af80 2024-08-20) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN) - node: 22.9.0 - pnpm: 9.12.1 - yarn: 1.22.22 - npm: 10.9.0 [-] Packages - tauri 🦀: 2.0.6 - tauri-build 🦀: 2.0.2 - wry 🦀: 0.46.3 - tao 🦀: 0.30.3 - tauri-cli 🦀: 2.0.4 - @tauri-apps/api : 2.0.3 - @tauri-apps/cli : 2.0.5 [-] Plugins - tauri-plugin-shell 🦀: 2.0.2 - @tauri-apps/plugin-shell : 2.0.1 [-] App - build-type: bundle - CSP: unset - frontendDist: ../build - devUrl: http://localhost:1420/ - framework: Svelte - bundler: Vite ``` ### Stack trace ```text Finished `dev` profile [unoptimized + debuginfo] target(s) in 36.51s Info symlinking lib "/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" in jniLibs dir "/home/fabric/tauri-app/src-tauri/gen/android/app/src/main/jniLibs/x86_64" Info "/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" requires shared lib "libandroid.so" Info "/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" requires shared lib "libdl.so" Info "/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" requires shared lib "liblog.so" Info "/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" requires shared lib "libm.so" Info 
"/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" requires shared lib "libc.so" Info symlink at "/home/fabric/tauri-app/src-tauri/gen/android/app/src/main/jniLibs/x86_64/libtauri_app_lib.so" points to "/home/fabric/tauri-app/src-tauri/target/x86_64-linux-android/debug/libtauri_app_lib.so" FAILURE: Build failed with an exception. * Where: Settings file '/home/fabric/tauri-app/src-tauri/gen/android/settings.gradle' line: 3 * What went wrong: A problem occurred evaluating settings 'android'. > BUG! exception in phase 'semantic analysis' in source unit '_BuildScript_' Unsupported class file major version 67 * Try: > Run with --stacktrace option to get the stack trace. > Run with --info or --debug option to get more log output. > Run with --scan to get full insights. > Get more help at https://help.gradle.org. BUILD FAILED in 427ms Error Failed to assemble APK: command ["/home/fabric/tauri-app/src-tauri/gen/android/gradlew", "--project-dir", "/home/fabric/tauri-app/src-tauri/gen/android"] exited with code 1: ``` ### Additional context _No response_
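For context, the `Unsupported class file major version 67` error above means Gradle's Groovy runtime is being asked to read bytecode compiled by JDK 23: since Java 2, JVM class-file major versions are the Java release number plus 44 (Java 8 = 52, Java 17 = 61, Java 23 = 67). A quick sanity check of that mapping:

```python
def java_release(class_file_major: int) -> int:
    """Map a JVM class-file major version to its Java release number.
    Majors are offset from the release by 44 (e.g. Java 8 -> 52)."""
    return class_file_major - 44

print(java_release(67))  # prints 23: the Gradle build above is running under JDK 23
```

So the usual remedy is to run Gradle under an older JDK or to upgrade the Gradle wrapper to a version that supports the installed JDK.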
type: documentation
low
Critical
2,616,159,557
neovim
Setting 'winbar' on 'BufWinEnter' can unexpectedly cause window with 'winfixheight' to resize on opening/closing other windows
### Problem Setting 'winbar' on 'BufWinEnter' can unexpectedly cause window with 'winfixheight' to resize on opening/closing other windows. Screen recording: https://github.com/user-attachments/assets/89b167bf-7725-464a-8569-047ae7edb517 ### Steps to reproduce 1. Create the `minimal.lua` file: ```lua vim.api.nvim_create_autocmd('BufWinEnter' , { callback = function() vim.wo.winbar = ' ' end }) ``` 2. `nvim --clean -u minimal.lua +copen +wincmd\ p` 3. Inside nvim, type `<C-w>v` to open a vertical split, then `<C-w>s` several times to observe the bug I use quickfix window in the example but it happens to any window with 'winfixheight' set. ### Expected behavior Window size of the quickfix window should not change. ### Nvim version (nvim -v) v0.11.0-dev-1046+g25b53b593e ### Vim (not Nvim) behaves the same? N/A ### Operating system/version Linux 6.11.5 ### Terminal name/version Kitty 0.36.4 + tmux next-3.6 ### $TERM environment variable tmux-256color ### Installation AUR
bug,ui,statusline
low
Critical
2,616,229,028
deno
`deno lint` produces unreadable output for minified files.
Version: Deno 2.0.3 This came up in https://github.com/denoland/deno/issues/26519#issuecomment-2439774489. While I think https://github.com/denoland/deno/issues/26573 will minimize the impact of the problem, `deno lint` should still be smarter about printing code snippets. If the diagnostic is produced at a rather large column number (e.g. `>=200`), `deno lint` should print a placeholder instead of the whole code snippet with the highlighted range. Additionally, a hint should be provided that this situation suggests a minified file is being linted, which is usually not what the user wanted. The hint should suggest excluding the file in the `deno.json` config.
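The suggested behaviour can be sketched like this (the threshold and the hint wording are assumptions for illustration, not deno's actual output format):

```python
MAX_COL = 200  # assumed cutoff beyond which a line is treated as minified

def render_frame(line: str, col: int) -> str:
    """Render a lint code frame, substituting a placeholder plus a hint
    when the diagnostic points far into a (likely minified) line."""
    if col >= MAX_COL or len(line) >= MAX_COL:
        return (f"[snippet omitted: diagnostic at column {col} of a "
                f"{len(line)}-char line]\n"
                'hint: this file looks minified; consider adding it to '
                '"exclude" in deno.json')
    return f"{line}\n{' ' * col}^"

print(render_frame("let x=1;", 4))
print(render_frame("x" * 5000, 4321).splitlines()[0])
```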
bug
low
Minor
2,616,231,131
godot
Can't color pick outside editor window in single window mode
### Tested versions * 4.2 - works in single window mode * 4.3 - doesn't work * 4.4.dev - doesn't work ### System information Windows 11 - Vulkan (Forward+) ### Issue description The `ColorPicker` doesn't pick colors when the mouse is outside the editor's windows while `single_window_mode` is enabled. It works fine when `single_window_mode` is disabled. Bisected; regression caused by 385284311ab63b787448b6387e3bd046aeb15032 ### Steps to reproduce * Enable `single_window_mode` in editor settings. * Open the picker in any exported `Color` var and use the eyedropper. * Move the mouse outside the editor's windows. The color doesn't change. ### Minimal reproduction project (MRP) N/A
bug,regression,topic:gui
low
Minor
2,616,234,948
ollama
Unrooted Termux install process
### What is the issue? I am endeavouring to set up an `ollama` server on my unrooted Termux host environment, please refer: https://github.com/ollama/ollama/issues/7349#issuecomment-2439776813 and https://github.com/ollama/ollama/issues/7292#issuecomment-2439781839 @vpnry @dhiltgen So, the process is: ## Process 1. git clone --depth 1 https://github.com/ollama/ollama.git 2. cd ollama 3. go generate ./... 4. delete the two lines of code, as per: https://github.com/ollama/ollama/issues/7292#issuecomment-2427773036 5. and write. 6. go build . 7. ./ollama serve & 8. and prosper! Is that the go? ## Device details: ```zsh ❯ termux-info Termux Variables: TERMUX_API_VERSION=0.50.1 TERMUX_APK_RELEASE=F_DROID TERMUX_APP_PACKAGE_MANAGER=apt TERMUX_APP_PID=3465 TERMUX_IS_DEBUGGABLE_BUILD=0 TERMUX_MAIN_PACKAGE_FORMAT=debian TERMUX_VERSION=0.118.1 TERMUX__USER_ID=0 Packages CPU architecture: aarch64 Subscribed repositories: # sources.list deb https://packages-cf.termux.dev/apt/termux-main stable main # sources.list.d/pointless.list deb https://its-pointless.github.io/files/21 termux extras # sources.list.d/ivam3-termux-packages.list deb [trusted=yes arch=all] https://ivam3.github.io/termux-packages stable extras # x11-repo (sources.list.d/x11.list) deb https://packages-cf.termux.dev/apt/termux-x11 x11 main # tur-repo (sources.list.d/tur.list) deb https://tur.kcubeterm.com tur-packages tur tur-on-device tur-continuous # root-repo (sources.list.d/root.list) deb https://packages-cf.termux.dev/apt/termux-root root stable # sources.list.d/rendiix.list deb https://rendiix.github.io android-tools termux Updatable packages: command-not-found/stable 2.4.0-48 aarch64 [upgradable from: 2.4.0-47] libgit2/stable 1.8.3 aarch64 [upgradable from: 1.8.2] termux-tools version: 1.44.1 Android version: 14 Kernel build information: Linux localhost 5.15.123-android13-8-28577532-abX910XXS4BXG5 #1 SMP PREEMPT Thu Jul 11 02:48:07 UTC 2024 aarch64 Android Device manufacturer: samsung Device model: 
SM-X910 LD Variables: LD_LIBRARY_PATH=:/data/data/com.termux/files/home/.local/lib/ollama:/data/data/com.termux/files/usr/lib:/data/data/com.termux/files/home/install/lib LD_PRELOAD=/data/data/com.termux/files/usr/lib/libtermux-exec.so Installed termux plugins: com.termux.widget versionCode:13 com.termux.x11 versionCode:14 com.termux.api versionCode:51 com.termux.window versionCode:15 com.termux.styling versionCode:1000 ``` ### OS Linux ### GPU Other ### CPU Other ### Ollama version _No response_
bug
low
Critical
2,616,236,502
kubernetes
Cronjob's stuck job was not marked as failed and blocked new schedules
### What happened? I have a cronjob that runs every 5 minutes, concurrency policy is forbidden: ``` apiVersion: batch/v1 kind: CronJob metadata: name: backup-job spec: concurrencyPolicy: Forbid startingDeadlineSeconds: 200 schedule: "*/5 * * * *" jobTemplate: spec: backoffLimit: 0 activeDeadlineSeconds: 240 ...... spec: restartPolicy: Never activeDeadlineSeconds: 240 ``` I found that a job that has been there for 84 minutes when I was checking: ``` $ kubectl get jobs -n zzz NAME COMPLETIONS DURATION AGE xxxxx-job-28833075 0/1 84m 84m ``` However when I try to list pods I got nothing: ``` $ kubectl get pods --selector=job-name=xxxxx-job-28833075 -n zzz No resources found in zzz namespace ``` I think the reason was one of the sidecar containers failed to start, however since all pods owned by this job were deleted so I could not debug into this job, my guess is because `.spec.jobTemplate.spec.activeDeadlineSeconds` is set, when job runs > 240 seconds all pods owned by the job are deleted, but in this case should not the job be marked as failed status? Why this job was actually failed but its status was neither completion nor failed? Also, this stuck job causes 2 following schedules missed, I don't understand since with `.spec.jobTemplate.spec.activeDeadlineSeconds` is set to 4 minutes, should controller schedule a new job after this job was in active status for 4 minutes? ### What did you expect to happen? If any pods/containers owned by a job failed to run, the job should be marked as failed status and all pods/container should be kept (since by default `failedJobsHistoryLimit` is `1`). ### How can we reproduce it (as minimally and precisely as possible)? I am not sure if it is expected behavior? ### Anything else we need to know? 
_No response_ ### Kubernetes version v1.28.13-eks-a737599 ### Cloud provider <details> </details> AWS ### OS version <details> ```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ``` </details> ### Install tools <details> </details> ### Container runtime (CRI) and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
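For reference, when `activeDeadlineSeconds` is exceeded the job controller is supposed to terminate the pods *and* mark the job failed; the expected status shape looks roughly like this (a sketch of the standard condition, not output captured from the reporter's cluster):

```yaml
status:
  conditions:
    - type: Failed
      status: "True"
      reason: DeadlineExceeded
      message: Job was active longer than specified deadline
```

A job that stays `0/1` active for 84 minutes despite a 240-second deadline, with no such condition and with its pods already gone, is exactly the anomaly described above — and without a `Failed` condition the cronjob controller under `concurrencyPolicy: Forbid` keeps treating it as active and skips new schedules.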
kind/bug,sig/apps,triage/needs-information,needs-triage
low
Critical
2,616,248,137
svelte
Svelte component won't type with runes
### Describe the bug If I try to use bind:this to a Svelte component in my project, the typing for the variable tosses this error when I try to type it using the component name: 'Nested' refers to a value, but is being used as a type here. Did you mean 'typeof Nested'?ts(2749) This worked in svelte 4 and typeof causes other issues since the component is truly the type. Am I using wrong types? Is there something new needed in svelte 5? The code compiles but I'd rather not have this red in my code. I noticed some files in my project didn't have issues but it looks like those files didn't migrate to use runes (only had eventDispatchers, which don't migrate). The ones which added $state or $props seem to show the symptom. ### Reproduction https://svelte.dev/playground/6b78d2d389534d72b46ecc02e52ff7bd?version=5.1.3 This is the idea but the REPL doesn't really show the problem. I used `npx sv create my-app` to create a sveltekit minimal using typescript syntax project, no extras. Then I placed the code in App.svelte in routes/+page.svelte and added Nested.svelte in the same folder. It tosses this error on the let nestedEl line in +page.svelte: 'Nested' refers to a value, but is being used as a type here. Did you mean 'typeof Nested'?ts(2749) If I change to typeof Nested, the line is fine but anything else using the variable goes bad. ### Logs _No response_ ### System Info ```shell System: OS: macOS 15.0.1 CPU: (8) arm64 Apple M1 Memory: 94.25 MB / 8.00 GB Shell: 5.9 - /bin/zsh Binaries: Node: 20.11.1 - ~/.nvm/versions/node/v20.11.1/bin/node npm: 10.9.0 - ~/.nvm/versions/node/v20.11.1/bin/npm Browsers: Brave Browser: 129.1.70.126 Chrome: 130.0.6723.70 Safari: 18.0.1 npmPackages: svelte: ^5.0.0 => 5.1.3 ``` ### Severity annoyance
documentation,types / typescript
low
Critical
2,616,262,781
rust
Syntactically rejecting impl-Trait inside non-final path segments & inside fn ptr types is futile
This concerns both universal and existential impl-Trait. Since #48084 (2018) we reject impl-Trait inside qselves and non-final path segments (during AST ~~validation~~ lowering (since #132214)). Since #45918 (2017) we reject impl-Trait inside fn ptr types (during AST lowering). However, both checks are purely syntactic / syntax-driven as they happen before HIR analysis[^1]. Therefore we can simply circumvent them by introducing indirection via *type aliases*. Examples: ```rs fn proj_selfty_e_neg() -> <impl Sized as Mirror>::Image {} // 🔴 REJECTED: `impl Trait` is not allowed in path parameters fn proj_selfty_e_pos() -> MirrorImage<impl Sized> {} // 🟢 Workaround: ACCEPTED fn proj_selfty_u_neg(_: <impl Sized as Mirror>::Image) {} // 🔴 REJECTED: `impl Trait` is not allowed in path parameters fn proj_selfty_u_pos(_: MirrorImage<impl Sized>) {} // 🟢 Workaround: ACCEPTED fn qself_trait_u_neg(_: <() as Carry<impl Sized>>::Project) {} // 🔴 REJECTED: `impl Trait` is not allowed in path parameters fn qself_trait_u_pos(_: Project<impl Sized>) {} // 🟢 Workaround: ACCEPTED // <> under `#![feature(inherent_associated_types)]` only fn non_final_seg_u_neg(_: Transp<impl Sized>::InhProject) {} // 🔴 REJECTED: `impl Trait` is not allowed in path parameters fn non_final_seg_u_pos(_: InhProject<impl Sized>) {} // 🟢 Workaround: ACCEPTED // </> fn fn_ptr0_e_neg() -> fn() -> impl Sized { || {} } // 🔴 REJECTED: `impl Trait` is not allowed in `fn` pointer return types fn fn_ptr0_e_pos() -> FnPtrOut<impl Sized> { || {} } // 🟢 Workaround: ACCEPTED fn fn_ptr1_e_neg() -> fn(impl Sized) { |()| {} } // 🔴 REJECTED: `impl Trait` is not allowed in `fn` pointer parameters fn fn_ptr1_e_pos() -> FnPtrIn<impl Sized> { |()| {} } // 🟢 Workaround: ACCEPTED fn fn_ptr_u_neg(_: fn(impl Sized)) {} // 🔴 REJECTED: `impl Trait` is not allowed in `fn` pointer parameters fn fn_ptr_u_pos(_: FnPtrIn<impl Sized>) {} // 🟢 Workaround: ACCEPTED ////////////////////////////////////////////////// trait Mirror { type Image; 
} impl<T> Mirror for T { type Image = T; } type MirrorImage<T> = <T as Mirror>::Image; trait Carry<T> { type Project; } impl<T> Carry<T> for () { type Project = (); } type Project<T> = <() as Carry<T>>::Project; // <> under `#![feature(inherent_associated_types)]` only struct Transp<T>(T); impl<T> Transp<T> { type InhProject = (); } type InhProject<T> = Transp<T>::InhProject; // </> type FnPtrOut<O> = fn() -> O; type FnPtrIn<I> = fn(I); ``` --- This calls into question the very existence of these checks. Should we just remove them? Are they historical remnants? Or should we keep them to prevent users from shooting themselves into the foot? Some of these types are actually quite useful (e.g., the `fn() -> impl Sized` as seen in `fn_ptr0_e_neg`) but a lot of them are not at all what you want. Some of them are completely useless (by containing unredeemably uninferable types). I don't think any of them lead to soundness issues. [^1]: They could've been syntactic modulo (eager) type alias expansion (≠ normalization).
T-compiler,A-impl-trait,C-bug,T-types
low
Minor
2,616,274,084
excalidraw
feature request: add support for google fonts
null
enhancement
low
Minor
2,616,279,412
tauri
[bug] IPC fails in non-http protocol windows
### Describe the bug When a window is created using a non-http protocol (data:text/html,) the value of Origin is empty when calling the rust backend command. However, Origin is required when parsing the call request. https://github.com/tauri-apps/tauri/blob/7a1a3276c49320162300fbcfecec9f6a948d65e8/crates/tauri/src/ipc/protocol.rs#L503-L511 ### Reproduction Create a window using a non-http protocol (data:text/html,) ```rust let data_url = Url::parse("data:text/plain,Hello?World#").unwrap(); let _ = tauri::WebviewWindowBuilder::new(&app_handle, "example", tauri::WebviewUrl::CustomProtocol(data_url)).build(); ``` Then the front end calls the rust backend command ```javascript const {open} = window.__TAURI__.shell; open("https://github.com"); ``` ### Expected behavior _No response_ ### Full `tauri info` output ```text [✔] Environment - OS: Windows 10.0.22635 x86_64 (X64) ✔ WebView2: 130.0.2849.52 ✔ MSVC: Visual Studio Community 2022 ✔ rustc: 1.82.0 (f6e511eec 2024-10-15) ✔ cargo: 1.82.0 (8f40fc59f 2024-08-21) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN) - node: 16.20.2 - pnpm: 8.14.0 - npm: 8.19.4 [-] Packages - tauri 🦀: 2.0.6 - tauri-build 🦀: 2.0.2 - wry 🦀: 0.46.2 - tao 🦀: 0.30.3 - tauri-cli 🦀: 2.0.1 [-] Plugins - tauri-plugin-fs 🦀: 2.0.2 - tauri-plugin-autostart 🦀: 2.0.1 - tauri-plugin-shell 🦀: 2.0.2 - tauri-plugin-process 🦀: 2.0.1 - tauri-plugin-single-instance 🦀: 2.0.1 - tauri-plugin-notification 🦀: 2.0.1 - tauri-plugin-http 🦀: 2.0.2 [-] App - build-type: bundle - CSP: unset - frontendDist: ../ui/dist - devUrl: http://localhost:8081/ ``` ### Stack trace _No response_ ### Additional context _No response_
type: bug,status: needs triage
low
Critical
2,616,304,220
ui
[bug]: Unable to install any component when a folder is restricted in the project
### Describe the bug I'm using docker-compose to run a postgres db in my project. This creates a postgres_data folder in my project that is causing a permission denied error when attempting to install any component. ### Affected component/components All ### How to reproduce 1. Configure docker-compose.yaml file in a folder called database at the root of your project, here's my file: `services: postgres: image: postgres:latest container_name: postgres restart: always environment: POSTGRES_DB: <dbname here> POSTGRES_USER: <username here> POSTGRES_PASSWORD: <password here> ports: - "5432:5432" volumes: - ./postgres_data:/var/lib/postgresql/data` 2. Start the container by running `docker-compose up -d` 3. Attempt to install any shadcn-ui component ### Codesandbox/StackBlitz link https://stackblitz.com/edit/stackblitz-starters-battev?file=database%2Fdocker-compose.yaml ### Logs ```bash npx shadcn@latest add accordion ✔ Checking registry. ✔ Updating tailwind.config.ts ✔ Installing dependencies. ⠋ Updating files. Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. 
EACCES: permission denied, scandir '/home/user/Development/project/database/postgres_data' ``` ### System Info ```bash Arch Linux Kernel Linux 6.11.5-arch1-1 x86_64 Node v20.18.0 package.json { "name": "project", "version": "0.1.0", "private": true, "scripts": { "dev": "next dev --experimental-https", "build": "next build", "start": "next start", "lint": "next lint", "generate": "drizzle-kit generate", "migrate": "drizzle-kit migrate", "studio": "drizzle-kit studio" }, "dependencies": { "@auth/drizzle-adapter": "^1.5.2", "@radix-ui/react-accordion": "^1.2.1", "@radix-ui/react-dialog": "^1.1.2", "@radix-ui/react-dropdown-menu": "^2.1.1", "@radix-ui/react-label": "^2.1.0", "@radix-ui/react-separator": "^1.1.0", "@radix-ui/react-slot": "^1.1.0", "@radix-ui/react-tooltip": "^1.1.3", "class-variance-authority": "^0.7.0", "clsx": "^2.1.1", "dotenv": "^16.4.5", "drizzle-orm": "^0.33.0", "lucide-react": "^0.445.0", "next": "14.2.13", "next-auth": "^5.0.0-beta.21", "next-themes": "^0.3.0", "postgres": "^3.4.4", "react": "^18", "react-dom": "^18", "tailwind-merge": "^2.5.2", "tailwindcss-animate": "^1.0.7" }, "devDependencies": { "@types/node": "^20", "@types/react": "^18", "@types/react-dom": "^18", "drizzle-kit": "^0.24.2", "eslint": "^8", "eslint-config-next": "14.2.13", "postcss": "^8", "tailwindcss": "^3.4.1", "typescript": "^5" } } ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
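One possible workaround (an assumption on my part, not part of the reporter's setup): switch the bind mount to a named Docker volume, so no root-owned `postgres_data` directory is created inside the project for the shadcn CLI to `scandir`:

```yaml
services:
  postgres:
    image: postgres:latest
    volumes:
      - postgres_data:/var/lib/postgresql/data # named volume lives under Docker's data dir
volumes:
  postgres_data:
```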
bug
low
Critical
2,616,304,727
tauri
[bug] Inconsistent `register_uri_scheme_protocol` behaviour on Windows. Connection refused for requests from iframe.
### Describe the bug **Note: The error only happens on Windows.** ## What I was trying to do I was trying to load a static website from local file system into a tauri app using iframe (for a plugin system). To achieve this, I implemented a custom protocol with `register_uri_scheme_protocol`. What I did was basically implementing a file server to serve static html, css, js files (so I don't need to spawn another http file server). ## How It Works It works like this ```tsx import { convertFileSrc } from "@tauri-apps/api/core"; const iframeSrc = convertFileSrc("", "ext"); <iframe src={iframeSrc} title="iframe"></iframe> ``` This will try to load `index.html` with url - `ext://localhost/` on Mac - `http://ext.localhost/` on Windows The `index.html` loads a `main.js` and a `style.css`. iframe automatically sends requests to fetch the 2 files - ` http://ext.localhost/main.js` - ` http://ext.localhost/style.css` `style.css` sets text color to red. `main.js` changes text from `External JavaScript File Not Loaded` to `External JavaScript Loaded`. ## Expected Behavior On Mac, everything works perfectly ![image](https://github.com/user-attachments/assets/83bc93cd-310b-4d68-8974-533ecd9813b5) ## Problem On Windows, only `index.html` is loaded. `style.css` and `main.js` fail to load. ![image](https://github.com/user-attachments/assets/f9c2b522-43bd-43c7-bc7b-c66070fc5853) In the console, there are 2 errors ![image](https://github.com/user-attachments/assets/1a301f8f-0f31-4c66-8398-2c84748bfb16) The requests to fetch `style.css` and `main.js` got connection refused. ![image](https://github.com/user-attachments/assets/3e5d796a-00fc-4944-89bb-a50b596c92d7) The 2 requests timed out after 2 seconds, while the first request to `index.html` resolves in 1ms. 
### More info about the failed requests ![image](https://github.com/user-attachments/assets/755d3d56-a8c6-4ec3-a9cb-24214f789381) ![image](https://github.com/user-attachments/assets/216ce823-efb8-4597-b7dd-62d296a8d81c) ### What a successful request looks like ![image](https://github.com/user-attachments/assets/6d839e8d-6691-4b9d-aa0b-45ebfef0fc29) ![image](https://github.com/user-attachments/assets/c6742cf9-29a5-4165-9e20-e6110db56d84) ## My Hypothesis I thought there was something wrong with the implementation of the custom file server I implemented with Rust and `tauri::http`. However, fetching `main.js` from the Tauri app works perfectly. ```js fetch("http://ext.localhost/main.js") .then((res) => res.text()) .then((text) => { console.log(text); }); ``` And from the log messages in my custom file server's Rust code, I can see that there was only a request for `index.html`, and no request for `style.css` or `main.js` at all. So the Tauri core denies the connection from the iframe, but not from the window. I don't know what the difference is between a request from the window and one from the iframe, but this behavior is completely different from macOS. ### Reproduction https://github.com/HuakunShen/tauri-windows-uri-scheme-bug To run it, ```ps pnpm install pnpm tauri dev ``` > [!CAUTION] > The error only happens on Windows. It should work on Mac. 
### Expected behavior _No response_ ### Full `tauri info` output ```text [✔] Environment - OS: Windows 10.0.22631 x86_64 (X64) ✔ WebView2: 129.0.2792.89 ✔ MSVC: - Visual Studio Build Tools 2019 - Visual Studio Community 2022 ✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06) ✔ cargo: 1.80.1 (376290515 2024-07-16) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default) - node: 20.17.0 - pnpm: 9.9.0 - npm: 10.8.2 - bun: 1.1.33 - deno: deno 2.0.0 [-] Packages - tauri 🦀: 2.0.6 - tauri-build 🦀: 2.0.2 - wry 🦀: 0.46.3 - tao 🦀: 0.30.3 - @tauri-apps/api : 2.0.3 - @tauri-apps/cli : 2.0.5 [-] Plugins - tauri-plugin-shell 🦀: 2.0.2 - @tauri-apps/plugin-shell : 2.0.1 [-] App - build-type: bundle - CSP: unset - frontendDist: ../build - devUrl: http://localhost:1420/ - framework: Svelte - bundler: Vite ``` ### Stack trace _No response_ ### Additional context This is very critical to my app, please help! Thanks!
type: bug,platform: Windows,status: needs triage
low
Critical
2,616,368,660
pytorch
fatal error C1083: 无法打开包括文件 (Cannot open include file): “cstddef”: No such file or directory error on Windows
### 🐛 Describe the bug https://github.com/microsoft/DeepSpeed/issues/6673 Trying to install DeepSpeed on torch 2.5.0 + CUDA; it fails while running build_ext: ```error D:\my\env\python3.10.10\lib\site-packages\torch\utils\cpp_extension.py:416: UserWarning: The detected CUDA version (12.5) has a minor version mismatch with the version that was used to compile PyTorch (12.4). Most likely this shouldn't be a problem. warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda)) building 'deepspeed.ops.adam.fused_adam_op' extension "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\bin\Hostx64\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -ID:\my\work\study\ai\DeepSpeed\csrc\includes -ID:\my\work\study\ai\DeepSpeed\csrc\adam -ID:\my\env\python3.10.10\lib\site-packages\torch\include -ID:\my\env\python3.10.10\lib\site-packages\torch\include\torch\csrc\api\include -ID:\my\env\python3.10.10\lib\site-packages\torch\include\TH -ID:\my\env\python3.10.10\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\include" -ID:\my\env\python3.10.10\include -ID:\my\env\python3.10.10\Include /EHsc /Tpcsrc/adam/fused_adam_frontend.cpp /Fobuild\temp.win-amd64-cpython-310\Release\csrc/adam/fused_adam_frontend.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc -O2 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=fused_adam_op -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++17 fused_adam_frontend.cpp D:\my\env\python3.10.10\lib\site-packages\torch\include\c10/core/DeviceType.h(10): fatal error C1083: 无法打开包括文件: “cstddef”: No such file or directory error: command 'D:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.41.34120\\bin\\Hostx64\\x64\\cl.exe' failed with exit code 2 ``` ```info ### Versions Collecting environment information... 
PyTorch version: 2.5.0+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 专业版 (10.0.22631 64 位) GCC version: (x86_64-posix-seh-rev0, Built by MinGW-Builds project) 13.2.0 Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22631-SP0 Is CUDA available: True CUDA runtime version: 12.5.40 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 Nvidia driver version: 560.94 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Name: 13th Gen Intel(R) Core(TM) i9-13900KS Manufacturer: GenuineIntel Family: 207 Architecture: 9 ProcessorType: 3 DeviceID: CPU0 CurrentClockSpeed: 3200 MaxClockSpeed: 3200 L2CacheSize: 32768 L2CacheSpeed: None Revision: None Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] torch==2.5.0+cu124 [pip3] torchaudio==2.5.0+cu124 [pip3] torchvision==0.20.0+cu124 [pip3] triton==2.1.0 [pip3] vector-quantize-pytorch==1.14.24 [conda] Could not collect ``` cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @malfet @zou3519 @xmfan @jbschlosser
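Not part of the original report — a hedged note: C1083 on a standard header like `cstddef` usually means `cl.exe` cannot see the MSVC C++ standard library include directories, which are passed via the `INCLUDE` environment variable set by `vcvarsall.bat` / the "x64 Native Tools Command Prompt". A minimal sketch of a pre-flight sanity check (the helper name and the path heuristics are my own, not from the report):

```python
def msvc_cpp_headers_visible(env):
    """Heuristic pre-flight check for MSVC error C1083 on standard headers.

    cl.exe locates <cstddef> and friends through the INCLUDE environment
    variable, which the 'x64 Native Tools Command Prompt' (or a manual call
    to vcvarsall.bat) populates. Outside such a prompt INCLUDE is typically
    empty, and every #include of a standard header fails with C1083.
    INCLUDE is ';'-separated on Windows.
    """
    paths = [p for p in env.get("INCLUDE", "").split(";") if p]
    has_msvc = any("MSVC" in p for p in paths)         # C++ stdlib headers
    has_sdk = any("Windows Kits" in p for p in paths)  # ucrt / Windows SDK headers
    return has_msvc and has_sdk

# Example with a plausible (hypothetical) INCLUDE value:
good = (r"D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools"
        r"\MSVC\14.41.34120\include;"
        r"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt")
print(msvc_cpp_headers_visible({"INCLUDE": good}))  # True
print(msvc_cpp_headers_visible({}))                 # False
```

If this check fails on the reporting machine, re-running the DeepSpeed build from a Developer / Native Tools prompt would be the first thing to try.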
needs reproduction,module: windows,module: cpp-extensions,module: cpp,triaged
low
Critical
2,616,403,720
rust
Enabling `keyword_idents` lints doesn't warn about violations in other files
Consider this tiny two-file crate: ```rust // main.rs #![deny(keyword_idents)] mod submod; fn main() {} ``` ```rust // submod.rs fn async() {} fn await() {} fn try() {} fn dyn() {} fn gen() {} ``` Building this crate should fail with several violations of the `keyword_idents_2018` and `keyword_idents_2024` lints. But instead, building unexpectedly succeeds! (This particular example requires Rust 2015, but the same issue can be observed in 2018/2021 regarding the `gen` keyword only.)
A-lints,T-compiler,C-bug
low
Major
2,616,412,685
pytorch
Dependency management corrupted in PyTorch 2.5.0
### 🐛 Describe the bug ### Expected behavior When a project still supports Python 3.8, the resolved version of PyTorch should be lower than `2.5.0`. ```toml [tool.poetry.dependencies] python = ">=3.8" ... torch = ">=2.0.0, !=2.0.1, !=2.1.0" ``` ## Bug description Nevertheless, the installer (in this case `poetry`) returns an error, printing the following: ```shell | - Installing pyparsing (3.1.4) | - Installing scikit-learn (1.3.2) | - Installing torch (2.5.0) | | RuntimeError | | Unable to find installation candidates for torch (2.5.0) | | at /opt/poetry/venv/lib/python3.8/site-packages/poetry/installation/chooser.py:74 in choose_for | 70│ | 71│ links.append(link) | 72│ | 73│ if not links: | → 74│ raise RuntimeError(f"Unable to find installation candidates for {package}") | 75│ | 76│ # Get the best link | 77│ chosen = max(links, key=lambda link: self._sort_key(package, link)) | 78│ | | Cannot install torch. ``` ### Versions This happened on a shared runner with Poetry version `1.6.X`; I was not able to reproduce it locally. <img width="266" alt="image" src="https://github.com/user-attachments/assets/3765e225-54e3-4564-ac90-b3d1703f7c13"> cc @seemethere @malfet @osalpekar @atalman
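Not from the original report — a hedged workaround sketch. PyTorch 2.5.0 no longer publishes wheels for Python 3.8, so until the resolver handles this, the constraint can be split by Python version using Poetry's multiple-constraints syntax (the exact bounds here are assumptions, mirroring the reporter's constraint):

```toml
[tool.poetry.dependencies]
python = ">=3.8"
# Cap torch below 2.5.0 for interpreters its wheels no longer cover.
torch = [
    { version = ">=2.0.0, !=2.0.1, !=2.1.0, <2.5.0", python = "<3.9" },
    { version = ">=2.0.0, !=2.0.1, !=2.1.0", python = ">=3.9" },
]
```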
module: binaries,triaged
low
Critical
2,616,415,925
ollama
ollama.com search no longer paginates
### What is the issue? If you search for models on the website, only one page is displayed. You can no longer find all the models. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.3.14
bug,ollama.com
low
Minor
2,616,423,679
ant-design
Enhance the Row/Col layout components to support Container Queries
### What problem does this feature solve? The current Row/Col responsive layout solves some problems, but it is limited and awkward. Why? The current responsive breakpoints are based on the width of the entire screen. That works when designing a site's home page or any UI laid out relative to the full screen, but it is a poor fit for admin/back-office pages. For example, CRUD forms are usually shown in a dialog, and that dialog's width is bounded (generally smaller than the screen, typically 40%–70% of the screen width). When Row/Col is used for a form inside such a Dialog, the responsive breakpoints should clearly be relative to the Dialog's width rather than the full screen's width. ### What does the proposed API look like? A property could be added to the Row component, e.g. `responsiveMode: 'screen' | 'container'`, to control whether responsiveness is based on the screen width or the parent container's width. A configuration property for the **widths** of the responsive breakpoints may also be needed: breakpoints relative to the screen can be configured globally via ConfigProvider, but breakpoints relative to a container have to be configured per usage, since the parent container's width cannot be assumed in advance. <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
💡 Feature Request,👷🏻‍♂️ Someone working on it
low
Major
2,616,452,036
next.js
Turbopack Not Compiling Tailwind CSS Changes in Safari
### Link to the code that reproduces this issue https://github.com/infinity-atom42/nextjs-turbopack-bug ### To Reproduce 1. **Initialize a New Next.js Project:** ```bash npx create-next-app@15.0.1 . ``` 2. **Configure Project Settings:** Respond with **Yes** to all prompts except for customizing the import alias. - **TypeScript:** Yes - **ESLint:** Yes - **Tailwind CSS:** Yes - **`src/` Directory:** Yes - **App Router:** Yes - **Turbopack for Next Dev:** Yes - **Customize Import Alias:** No 3. **Start the Development Server:** ```bash npm run dev ``` 4. **Modify `layout.tsx`:** - **Comment Out Google Fonts:** ```tsx:src/app/layout.tsx // const geistSans = localFont({ // src: "./fonts/GeistVF.woff", // variable: "--font-geist-sans", // weight: "100 900", // }); // const geistMono = localFont({ // src: "./fonts/GeistMonoVF.woff", // variable: "--font-geist-mono", // weight: "100 900", // }); ``` - **Comment Out `className` in `body`:** ```tsx:src/app/layout.tsx <body // className={`${geistSans.variable} ${geistMono.variable} antialiased`} > {children} </body> ``` 5. **Access the Application in Safari or Safari Technology Preview:** Open `http://localhost:3000` in Safari. 6. **Observe the Issue:** Tailwind CSS styles are not applied to the application. ### Current vs. Expected behavior ## Expected Behavior Tailwind CSS should process and apply styles correctly, irrespective of the inclusion of Google Fonts, when using Turbopack for development. The styles should render properly in all supported browsers, including Safari. ## Actual Behavior After removing Google Fonts and utilizing Turbopack in Next.js v15.0.1, Tailwind CSS fails to process styles **only** in Safari and Safari Technology Preview browsers. The issue does not manifest in other browsers or when Turbopack is disabled. 
### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:36:26 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T8103 Available memory (MB): 8192 Available CPU cores: 8 Binaries: Node: 22.9.0 npm: 10.9.0 Yarn: N/A pnpm: 9.12.2 Relevant Packages: next: 15.0.1 // Latest available version is detected (15.0.1). eslint-config-next: 15.0.1 react: 19.0.0-rc-69d4b800-20241021 react-dom: 19.0.0-rc-69d4b800-20241021 typescript: 5.6.3 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Turbopack ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context ### Current `layout.tsx` File <img width="1440" alt="Screenshot 2024-10-27 at 11 49 10" src="https://github.com/user-attachments/assets/6f1d514b-6b20-47e7-8d7b-f320979ac5dc"> <img width="1440" alt="Screenshot 2024-10-27 at 11 49 17" src="https://github.com/user-attachments/assets/f0916ef2-44f0-4cac-9ae7-d7eb9eaa7e42">
Turbopack,linear: turbopack,CSS
medium
Critical
2,616,519,968
transformers
BEiT image classification gives different results than versions prior to 4.43.0
### System Info - `transformers` version: 4.43.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.9 - Huggingface_hub version: 0.26.1 - Safetensors version: 0.4.5 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes - GPU type: NVIDIA GeForce RTX 3060 Ti ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Given the following image: ![image](https://github.com/user-attachments/assets/4c4cf99d-e5fc-40ff-adaf-13d3c1b3d337) Running the following pipeline for versions prior to `4.43.0` (4.42.4) ```py from PIL import Image from transformers import pipeline import transformers pipeline_aesthetic = pipeline( "image-classification", "cafeai/cafe_aesthetic", device=0 ) with Image.open("F:\\Downloads\\Tower.jpg") as img: predictions = pipeline_aesthetic(img, top_k=2) predict_keyed = {} for p in predictions: # print(type(p)) if not isinstance(p, dict): raise Exception("Prediction value is missing?") predict_keyed[p["label"]] = p["score"] print(predict_keyed,transformers.__version__) ``` For 4.42.4, it returns: ``` {'aesthetic': 0.651885986328125, 'not_aesthetic': 0.3481140434741974} 4.42.4 ``` For 4.43.0: ``` {'aesthetic': 0.43069663643836975, 'not_aesthetic': 0.2877475321292877} 4.43.0 ``` ### Expected behavior Expected results from 4.42.4 instead of 4.43.0. ### Addn Notes. 
I narrowed it down to this commit as the likely cause: https://github.com/huggingface/transformers/blob/06fd7972acbc6a5e9cd75b4d482583c060ac2ed0/src/transformers/models/beit/modeling_beit.py, but I am unsure where exactly the behavior changed.
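Not part of the original report — a small, hedged helper for pinning down regressions like this one: compare the label→score dicts from the two versions numerically instead of by eye. It also flags that the 4.43.0 scores above no longer sum to ~1.0 (0.4307 + 0.2877 ≈ 0.718), which may hint at a pooling/normalization change. All names here are my own:

```python
import math

def compare_predictions(old, new, rel_tol=1e-2):
    """Compare two label->score dicts from an image-classification pipeline.

    Returns a list of human-readable discrepancies (empty list = match).
    Also warns when either score set no longer sums to roughly 1.0.
    """
    issues = []
    if old.keys() != new.keys():
        issues.append(f"label sets differ: {sorted(old)} vs {sorted(new)}")
        return issues
    for label in old:
        if not math.isclose(old[label], new[label], rel_tol=rel_tol, abs_tol=1e-3):
            issues.append(f"{label}: {old[label]:.4f} -> {new[label]:.4f}")
    for name, scores in (("old", old), ("new", new)):
        total = sum(scores.values())
        if not math.isclose(total, 1.0, abs_tol=0.05):
            issues.append(f"{name} scores sum to {total:.3f}, not ~1.0")
    return issues

# Scores from the report (rounded):
v4_42 = {"aesthetic": 0.6519, "not_aesthetic": 0.3481}
v4_43 = {"aesthetic": 0.4307, "not_aesthetic": 0.2877}
print(compare_predictions(v4_42, v4_43))  # both score drifts plus the sum mismatch
```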
bug,Vision
low
Major
2,616,534,121
neovim
Document how to defend against side effects of `nvim_get_option_value()` with `filetype` option
### Problem I found an attempt to call `vim.lsp.semantic_tokens.start()` on a nonexistent buffer and then found that it was a dummy buffer created by `vim.api.nvim_get_option_value('lisp', { filetype = 'lua' })`, called by some plugin. The following part https://github.com/neovim/neovim/blob/25b53b593ef6f229fbec5b3dc205a7539579d13a/src/nvim/api/options.c#L103-L131 shows how such a buffer is created. The user may not be aware of this implementation detail. On the other hand, it is documented behavior, not a bug. ### Expected behavior In this case it would be good to include a hint about how to detect such a buffer and filter it out in a `FileType` autocommand (probably within `:h FileType`). I have no better idea than something like ```lua local function isDummyFileTypeBuffer(buf) local opts = vim.bo[buf] return opts.buftype == 'nofile' and opts.bufhidden == 'hide' and opts.swapfile == false and opts.modeline == false end vim.api.nvim_create_autocmd('FileType', { pattern = '*', callback = function (args) if args.file == args.match and isDummyFileTypeBuffer(args.buf) then return end vim.print(args) end, }) vim.api.nvim_get_option_value('lisp', {filetype = 'php'}) ``` but I am aware that this relies on the assumption that no other buffer has exactly these options (and on undocumented internals).
enhancement,documentation,api,lsp
low
Critical
2,616,550,543
rust
HashMap: panic in element destructor causes leaks of unrelated elements
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> When dropping a `HashMap` (or `HashSet`) and an element's destructor panics, then all elements that would be dropped after are leaked. This is inconsistent with other std collections (`Vec`, `LinkedList`, `BTreeMap`), where after panic in one destructor the remaining destructors are still called, potentially causing an abort if another one panics. I tried this code: [playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=1301b5c61b0a84225da1c7db65427e1e) ```rust use std::cell::Cell; use std::collections::{BTreeMap, HashMap, LinkedList}; use std::panic::{catch_unwind, AssertUnwindSafe}; struct Dropper<'a>(&'a Cell<u32>); impl Drop for Dropper<'_> { fn drop(&mut self) { let count = self.0.get(); self.0.set(count + 1); if count == 0 { panic!("oh no"); } } } fn main() { // Vec let count = Cell::new(0); catch_unwind(AssertUnwindSafe(|| { drop(vec![[Dropper(&count), Dropper(&count)]]); })) .unwrap_err(); println!("vec: {}", count.get()); // LinkedList let count = Cell::new(0); catch_unwind(AssertUnwindSafe(|| { drop(LinkedList::from([Dropper(&count), Dropper(&count)])); })) .unwrap_err(); println!("linked list: {}", count.get()); // BTreeMap let count = Cell::new(0); catch_unwind(AssertUnwindSafe(|| { drop(BTreeMap::from([(1, Dropper(&count)), (2, Dropper(&count))])); })) .unwrap_err(); println!("b-tree map: {}", count.get()); // HashMap let count = Cell::new(0); catch_unwind(AssertUnwindSafe(|| { drop(HashMap::from([(1, Dropper(&count)), (2, Dropper(&count))])); })) .unwrap_err(); println!("hash map: {}", count.get()); } ``` I expected to see this happen: The drop behavior of `Vec`, `LinkedList`, `BTreeMap` and `HashMap` should be consistent. Instead, this happened: `HashMap` drops only one element if the destructor unwinds, but the others drop both elements. 
<details> <summary>running Miri on the code shows a memory leak</summary> ```text error: memory leaked: alloc17016 (Rust heap, size: 76, align: 8), allocated here: --> /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/alloc.rs:15:15 | 15 | match alloc.allocate(layout) { | ^^^^^^^^^^^^^^^^^^^^^^ | = note: BACKTRACE: = note: inside `hashbrown::raw::alloc::inner::do_alloc::<std::alloc::Global>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/alloc.rs:15:15: 15:37 = note: inside `hashbrown::raw::RawTableInner::new_uninitialized::<std::alloc::Global>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:1534:38: 1534:61 = note: inside `hashbrown::raw::RawTableInner::fallible_with_capacity::<std::alloc::Global>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:1572:30: 1572:96 = note: inside `hashbrown::raw::RawTableInner::prepare_resize::<std::alloc::Global>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:2633:13: 2633:94 = note: inside `hashbrown::raw::RawTableInner::resize_inner::<std::alloc::Global>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:2829:29: 2829:86 = note: inside `hashbrown::raw::RawTableInner::reserve_rehash_inner::<std::alloc::Global>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:2719:13: 2725:14 = note: inside `hashbrown::raw::RawTable::<(i32, Dropper<'_>)>::reserve_rehash::<{closure@hashbrown::map::make_hasher<i32, Dropper<'_>, std::hash::RandomState>::{closure#0}}>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:1045:13: 1056:14 = note: inside `hashbrown::raw::RawTable::<(i32, Dropper<'_>)>::reserve::<{closure@hashbrown::map::make_hasher<i32, Dropper<'_>, 
std::hash::RandomState>::{closure#0}}>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/raw/mod.rs:993:20: 994:81 = note: inside `hashbrown::map::HashMap::<i32, Dropper<'_>, std::hash::RandomState>::reserve` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/map.rs:1102:9: 1103:77 = note: inside `<hashbrown::map::HashMap<i32, Dropper<'_>, std::hash::RandomState> as std::iter::Extend<(i32, Dropper<'_>)>>::extend::<[(i32, Dropper<'_>); 2]>` at /playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hashbrown-0.15.0/src/map.rs:4489:9: 4489:30 = note: inside `<std::collections::HashMap<i32, Dropper<'_>> as std::iter::Extend<(i32, Dropper<'_>)>>::extend::<[(i32, Dropper<'_>); 2]>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/collections/hash/map.rs:3185:9: 3185:31 = note: inside `<std::collections::HashMap<i32, Dropper<'_>> as std::iter::FromIterator<(i32, Dropper<'_>)>>::from_iter::<[(i32, Dropper<'_>); 2]>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/collections/hash/map.rs:3170:9: 3170:25 = note: inside `<std::collections::HashMap<i32, Dropper<'_>> as std::convert::From<[(i32, Dropper<'_>); 2]>>::from` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/collections/hash/map.rs:1405:9: 1405:29 note: inside closure --> src/main.rs:45:14 | 45 | drop(HashMap::from([(1, Dropper(&count)), (2, Dropper(&count))])); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = note: inside `<{closure@src/main.rs:44:35: 44:37} as std::ops::FnOnce<()>>::call_once - shim` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5: 250:71 = note: inside `<std::panic::AssertUnwindSafe<{closure@src/main.rs:44:35: 44:37}> as std::ops::FnOnce<()>>::call_once` 
at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/panic/unwind_safe.rs:272:9: 272:19 = note: inside `std::panicking::r#try::do_call::<std::panic::AssertUnwindSafe<{closure@src/main.rs:44:35: 44:37}>, ()>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:557:40: 557:43 = note: inside `std::panicking::r#try::<(), std::panic::AssertUnwindSafe<{closure@src/main.rs:44:35: 44:37}>>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:520:19: 520:88 = note: inside `std::panic::catch_unwind::<std::panic::AssertUnwindSafe<{closure@src/main.rs:44:35: 44:37}>, ()>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:358:14: 358:33 note: inside `main` --> src/main.rs:44:5 | 44 | / catch_unwind(AssertUnwindSafe(|| { 45 | | drop(HashMap::from([(1, Dropper(&count)), (2, Dropper(&count))])); 46 | | })) | |_______^ ``` </details> ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.84.0-nightly (c1db4dc24 2024-10-25) binary: rustc commit-hash: c1db4dc24267a707409c9bf2e67cf3c7323975c8 commit-date: 2024-10-25 host: x86_64-unknown-linux-gnu release: 1.84.0-nightly LLVM version: 19.1.1 ``` @rustbot label T-libs A-collections A-destructors I-memleak
A-destructors,A-collections,T-libs,C-discussion,I-memleak
low
Critical
2,616,569,118
flutter
[url_launcher] Link widget messes up TAB traversal on web
### What package does this bug report belong to? url_launcher ### What target platforms are you seeing this bug on? Web ### Have you already upgraded your packages? Yes ### Dependency versions <details><summary>pubspec.lock</summary> ```lock # Generated by pub # See https://dart.dev/tools/pub/glossary#lockfile packages: async: dependency: transitive description: name: async sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c" url: "https://pub.dev" source: hosted version: "2.11.0" boolean_selector: dependency: transitive description: name: boolean_selector sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66" url: "https://pub.dev" source: hosted version: "2.1.1" characters: dependency: transitive description: name: characters sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605" url: "https://pub.dev" source: hosted version: "1.3.0" clock: dependency: transitive description: name: clock sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf url: "https://pub.dev" source: hosted version: "1.1.1" collection: dependency: transitive description: name: collection sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a url: "https://pub.dev" source: hosted version: "1.18.0" cupertino_icons: dependency: "direct main" description: name: cupertino_icons sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6 url: "https://pub.dev" source: hosted version: "1.0.8" fake_async: dependency: transitive description: name: fake_async sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78" url: "https://pub.dev" source: hosted version: "1.3.1" flutter: dependency: "direct main" description: flutter source: sdk version: "0.0.0" flutter_lints: dependency: "direct dev" description: name: flutter_lints sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c" url: "https://pub.dev" source: hosted version: "4.0.0" flutter_test: 
dependency: "direct dev" description: flutter source: sdk version: "0.0.0" flutter_web_plugins: dependency: transitive description: flutter source: sdk version: "0.0.0" leak_tracker: dependency: transitive description: name: leak_tracker sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05" url: "https://pub.dev" source: hosted version: "10.0.5" leak_tracker_flutter_testing: dependency: transitive description: name: leak_tracker_flutter_testing sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806" url: "https://pub.dev" source: hosted version: "3.0.5" leak_tracker_testing: dependency: transitive description: name: leak_tracker_testing sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3" url: "https://pub.dev" source: hosted version: "3.0.1" lints: dependency: transitive description: name: lints sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235" url: "https://pub.dev" source: hosted version: "4.0.0" matcher: dependency: transitive description: name: matcher sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb url: "https://pub.dev" source: hosted version: "0.12.16+1" material_color_utilities: dependency: transitive description: name: material_color_utilities sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec url: "https://pub.dev" source: hosted version: "0.11.1" meta: dependency: transitive description: name: meta sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7 url: "https://pub.dev" source: hosted version: "1.15.0" path: dependency: transitive description: name: path sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af" url: "https://pub.dev" source: hosted version: "1.9.0" plugin_platform_interface: dependency: transitive description: name: plugin_platform_interface sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02" url: "https://pub.dev" source: hosted version: 
"2.1.8" sky_engine: dependency: transitive description: flutter source: sdk version: "0.0.99" source_span: dependency: transitive description: name: source_span sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c" url: "https://pub.dev" source: hosted version: "1.10.0" stack_trace: dependency: transitive description: name: stack_trace sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b" url: "https://pub.dev" source: hosted version: "1.11.1" stream_channel: dependency: transitive description: name: stream_channel sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7 url: "https://pub.dev" source: hosted version: "2.1.2" string_scanner: dependency: transitive description: name: string_scanner sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde" url: "https://pub.dev" source: hosted version: "1.2.0" term_glyph: dependency: transitive description: name: term_glyph sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84 url: "https://pub.dev" source: hosted version: "1.2.1" test_api: dependency: transitive description: name: test_api sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb" url: "https://pub.dev" source: hosted version: "0.7.2" url_launcher: dependency: "direct main" description: name: url_launcher sha256: "9d06212b1362abc2f0f0d78e6f09f726608c74e3b9462e8368bb03314aa8d603" url: "https://pub.dev" source: hosted version: "6.3.1" url_launcher_android: dependency: transitive description: name: url_launcher_android sha256: "0dea215895a4d254401730ca0ba8204b29109a34a99fb06ae559a2b60988d2de" url: "https://pub.dev" source: hosted version: "6.3.13" url_launcher_ios: dependency: transitive description: name: url_launcher_ios sha256: e43b677296fadce447e987a2f519dcf5f6d1e527dc35d01ffab4fff5b8a7063e url: "https://pub.dev" source: hosted version: "6.3.1" url_launcher_linux: dependency: transitive description: name: url_launcher_linux sha256: 
e2b9622b4007f97f504cd64c0128309dfb978ae66adbe944125ed9e1750f06af url: "https://pub.dev" source: hosted version: "3.2.0" url_launcher_macos: dependency: transitive description: name: url_launcher_macos sha256: "769549c999acdb42b8bcfa7c43d72bf79a382ca7441ab18a808e101149daf672" url: "https://pub.dev" source: hosted version: "3.2.1" url_launcher_platform_interface: dependency: transitive description: name: url_launcher_platform_interface sha256: "552f8a1e663569be95a8190206a38187b531910283c3e982193e4f2733f01029" url: "https://pub.dev" source: hosted version: "2.3.2" url_launcher_web: dependency: transitive description: name: url_launcher_web sha256: "772638d3b34c779ede05ba3d38af34657a05ac55b06279ea6edd409e323dca8e" url: "https://pub.dev" source: hosted version: "2.3.3" url_launcher_windows: dependency: transitive description: name: url_launcher_windows sha256: "44cf3aabcedde30f2dba119a9dea3b0f2672fbe6fa96e85536251d678216b3c4" url: "https://pub.dev" source: hosted version: "3.1.3" vector_math: dependency: transitive description: name: vector_math sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803" url: "https://pub.dev" source: hosted version: "2.1.4" vm_service: dependency: transitive description: name: vm_service sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d" url: "https://pub.dev" source: hosted version: "14.2.5" web: dependency: transitive description: name: web sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb url: "https://pub.dev" source: hosted version: "1.1.0" sdks: dart: ">=3.5.4 <4.0.0" flutter: ">=3.24.0" ``` </details> ### Steps to reproduce 1. `flutter create link_tab_bug` 2. `cd link_tab_bug` 3. `flutter pub add url_launcher` 4. Replace `main.dart` with the code sample found below. 5. `flutter run -d chrome` ### Expected results A single `TAB` key press is enough to change focus from one `ElevatedButton` wrapped with a `Link` widget to the other. 
### Actual results Two `TAB` key presses are needed to "escape" from a button wrapped with a `Link` widget. ### Code sample <details open><summary>Code sample</summary> ```dart import 'package:flutter/material.dart'; import 'package:flutter/services.dart'; import 'package:url_launcher/link.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple), useMaterial3: true, ), home: const MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatefulWidget { const MyHomePage({super.key, required this.title}); final String title; @override State<MyHomePage> createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { int _counter = 0; void _incrementCounter() { setState(() { _counter++; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( backgroundColor: Theme.of(context).colorScheme.inversePrimary, title: Text(widget.title), ), body: Center( child: FocusScope( onKeyEvent: (node, event) { if (event.logicalKey == LogicalKeyboardKey.tab && event is KeyDownEvent) _incrementCounter(); return KeyEventResult.ignored; }, child: Column( mainAxisAlignment: MainAxisAlignment.center, children: [ Link( uri: Uri.https('flutter.dev'), builder: (context, followLink) => ElevatedButton( onPressed: followLink, child: const Text('flutter.dev'), ), ), const SizedBox(height: 20), Link( uri: Uri.https('dart.dev'), builder: (context, followLink) => ElevatedButton( onPressed: followLink, child: const Text('dart.dev'), ), ), const SizedBox(height: 20), const Text( 'You have pressed TAB this many times:', ), Text( '$_counter', style: Theme.of(context).textTheme.headlineMedium, ), ], ), ), ), ); } } ``` </details> ### Screenshots or Videos <details open> <summary>Screenshots / Video 
demonstration</summary> https://github.com/user-attachments/assets/0efc131b-52f9-4628-a14e-7e39be128411 </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel stable, 3.24.4, on macOS 15.0.1 24A348 darwin-arm64, locale en-US) • Flutter version 3.24.4 on channel stable at /Users/alejandro/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 603104015d (3 days ago), 2024-10-24 08:01:25 -0700 • Engine revision db49896cf2 • Dart version 3.5.4 • DevTools version 2.37.3 [✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /Users/alejandro/Library/Android/sdk • Platform android-34, build-tools 34.0.0 • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231) • All Android licenses accepted. [!] Xcode - develop for iOS and macOS (Xcode 16.0) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 16A242d ✗ CocoaPods not installed. CocoaPods is a package manager for iOS or macOS platform code. Without CocoaPods, plugins will not work on iOS or macOS. 
For more info, see https://flutter.dev/to/platform-plugins For installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation [✓] Chrome - develop for the web • CHROME_EXECUTABLE = /Applications/Brave Browser.app/Contents/MacOS/Brave Browser [✓] Android Studio (version 2022.3) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231) [✓] VS Code (version 1.94.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.98.0 [✓] Connected device (3 available) • macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64 • Chrome (web) • chrome • web-javascript • Brave Browser 130.1.71.118 [✓] Network resources • All expected network resources are available. ! Doctor found issues in 1 category. ``` </details>
platform-web,p: url_launcher,package,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.27
low
Critical
2,616,573,840
ui
[bug]: installing shadcn with the npx command fails; it works with bun but not with npx
### Describe the bug Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. Command failed with exit code 1: npm install tailwindcss-animate class-variance-authority lucide-react @radix-ui/react-icons clsx tailwind-merge npm error code ERESOLVE npm error ERESOLVE unable to resolve dependency tree npm error npm error While resolving: cybership-io@0.1.0 npm error Found: react@19.0.0-rc-69d4b800-20241021 npm error node_modules/react npm error react@"19.0.0-rc-69d4b800-20241021" from the root project npm error npm error Could not resolve dependency: npm error peer react@"^16.x || ^17.x || ^18.x" from @radix-ui/react-icons@1.3.0 npm error node_modules/@radix-ui/react-icons npm error @radix-ui/react-icons@"*" from the root project npm error npm error Fix the upstream dependency conflict, or retry npm error this command with --force or --legacy-peer-deps npm error to accept an incorrect (and potentially broken) dependency resolution. npm error npm error npm error For a full report see: npm error C:\Users\Tafsir\AppData\Local\npm-cache\_logs\2024-10-27T12_41_49_804Z-eresolve-report.txt npm error A complete log of this run can be found in: C:\Users\Tafsir\AppData\Local\npm-cache\_logs\2024-10-27T12_41_49_804Z-debug-0.log ### Affected component/components Not installing ### How to reproduce 1. install command given, 2. see error ### Codesandbox/StackBlitz link _No response_ ### Logs ```bash Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. 
Command failed with exit code 1: npm install tailwindcss-animate class-variance-authority lucide-react @radix-ui/react-icons clsx tailwind-merge npm error code ERESOLVE npm error ERESOLVE unable to resolve dependency tree npm error npm error While resolving: cybership-io@0.1.0 npm error Found: react@19.0.0-rc-69d4b800-20241021 npm error node_modules/react npm error react@"19.0.0-rc-69d4b800-20241021" from the root project npm error npm error Could not resolve dependency: npm error peer react@"^16.x || ^17.x || ^18.x" from @radix-ui/react-icons@1.3.0 npm error node_modules/@radix-ui/react-icons npm error @radix-ui/react-icons@"*" from the root project npm error npm error Fix the upstream dependency conflict, or retry npm error this command with --force or --legacy-peer-deps npm error to accept an incorrect (and potentially broken) dependency resolution. npm error npm error npm error For a full report see: npm error C:\Users\Tafsir\AppData\Local\npm-cache\_logs\2024-10-27T12_41_49_804Z-eresolve-report.txt npm error A complete log of this run can be found in: C:\Users\Tafsir\AppData\Local\npm-cache\_logs\2024-10-27T12_41_49_804Z-debug-0.log ``` ### System Info ```bash Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. 
Command failed with exit code 1: npm install tailwindcss-animate class-variance-authority lucide-react @radix-ui/react-icons clsx tailwind-merge npm error code ERESOLVE npm error ERESOLVE unable to resolve dependency tree npm error npm error While resolving: cybership-io@0.1.0 npm error Found: react@19.0.0-rc-69d4b800-20241021 npm error node_modules/react npm error react@"19.0.0-rc-69d4b800-20241021" from the root project npm error npm error Could not resolve dependency: npm error peer react@"^16.x || ^17.x || ^18.x" from @radix-ui/react-icons@1.3.0 npm error node_modules/@radix-ui/react-icons npm error @radix-ui/react-icons@"*" from the root project npm error npm error Fix the upstream dependency conflict, or retry npm error this command with --force or --legacy-peer-deps npm error to accept an incorrect (and potentially broken) dependency resolution. npm error npm error npm error For a full report see: npm error C:\Users\Tafsir\AppData\Local\npm-cache\_logs\2024-10-27T12_41_49_804Z-eresolve-report.txt npm error A complete log of this run can be found in: C:\Users\Tafsir\AppData\Local\npm-cache\_logs\2024-10-27T12_41_49_804Z-debug-0.log ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
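The ERESOLVE failure above is a peer-dependency conflict: `@radix-ui/react-icons@1.3.0` declares `react@"^16.x || ^17.x || ^18.x"`, while the project uses a React 19 release candidate. Besides the `--force`/`--legacy-peer-deps` escape hatches that npm itself suggests, one workaround (assuming npm ≥ 8.3, which introduced `overrides`) is to force the icon package to accept the project's React in `package.json` — a sketch of the mechanism, not a confirmed fix for the shadcn CLI flow:

```json
{
  "overrides": {
    "@radix-ui/react-icons": {
      "react": "$react"
    }
  }
}
```

`$react` tells npm to reuse the version spec of the root project's own `react` dependency. bun succeeds here likely because its resolver does not treat peer-dependency conflicts as fatal by default.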
bug
low
Critical
2,616,608,536
rust
Missing documentation of `use` bounds
### Location Concerns `rustc --explain E0700`, the Rust Book, and the Reference. ### Summary I originally asked [this StackOverflow question](https://stackoverflow.com/questions/79129687/what-is-rusts-uselifetime-syntax?noredirect=1#comment139529892_79129687) and was advised to file a bug. The new `use` keyword, apparently introduced in 1.82, is poorly documented. There are at least 3 documentation issues: 1. The rust error E0700 tells you to <code>add a `use<...>` bound to explicitly capture `'_`... For more information about this error, try `rustc --explain E0700`.</code> But running this command only talks about the older syntax and doesn't mention `use` at all. 2. The Rust Book doesn't mention this syntax at all, in stable or nightly, as far as I can tell. 3. The Reference nightly edition mentions it in [10.1.16](https://doc.rust-lang.org/nightly/reference/types/impl-trait.html?highlight=precise#precise-capturing) and [10.6](https://doc.rust-lang.org/nightly/reference/trait-bounds.html#use-bounds), but neither of these sections are included in the stable edition. My specific questions about the syntax were answered by [the announcement blog post](https://blog.rust-lang.org/2024/09/05/impl-trait-capture-rules.html), I just didn't know how to find it, because "rust use lifetime" is a pretty terrible set of keywords to google for.
T-compiler,A-docs,T-libs
low
Critical
2,616,614,384
yt-dlp
[Boosty] Support audio-only content
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm requesting a site-specific feature - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region Europe ### Example URLs https://boosty.to/myrngwaur/posts/92f1592c-9fc5-4389-a45e-7e798e4b6d0e ### Provide a description that is worded well enough to be understood Current extractor implementation only supports Boosty content of type `video` and `video_ok`. The example page includes only an audio file and yt-dlp errors out with `ERROR: [Boosty] No videos found`. The file in question has the following format in API output: ```json { "showViewsCounter": false, "timeCode": 0, "id": "eebc5a6a-ea54-4193-826a-0725a8a0c380", "size": 43397364, "uploadStatus": "", "title": "История и источники_ истина где-то рядом.mp4", "url": "https://cdn.boosty.to/audio/eebc5a6a-ea54-4193-826a-0725a8a0c380", "album": "", "complete": true, "artist": "", "type": "audio_file", "viewsCounter": 19, "isMigrated": true, "duration": 3565, "track": "" } ``` The URL is not directly downloadable without some request-signing magic though. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', '--extract-audio', '--audio-format', 'm4a', '--embed-metadata', '--embed-thumbnail', '--cookies', 'boostie.cookie', 'https://boosty.to/myrngwaur/posts/92f1592c-9fc5-4389-a45e-7e798e4b6d0e'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip) [debug] Python 3.12.7 (CPython aarch64 64bit) - Linux-6.10.12-orbstack-00282-gd1783374c25e-aarch64-with (OpenSSL 3.3.2 3 Sep 2024) [debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1 [debug] Optional libraries: mutagen-1.47.0, sqlite3-3.45.3 [debug] Proxy map: {} [debug] Request Handlers: urllib [debug] Loaded 1839 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: stable@2024.10.22 from yt-dlp/yt-dlp yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp) [Boosty] Extracting URL: https://boosty.to/myrngwaur/posts/92f1592c-9fc5-4389-a45e-7e798e4b6d0e [Boosty] 92f1592c-9fc5-4389-a45e-7e798e4b6d0e: Downloading post data ERROR: [Boosty] No videos found File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 741, in extract ie_result = self._real_extract(url) ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/bin/yt-dlp/yt_dlp/extractor/boosty.py", line 222, in _real_extract raise ExtractorError('No videos found', expected=True) ```
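For illustration, here is a hypothetical sketch (not yt-dlp's actual code) of how the `audio_file` item shown above could be mapped onto a yt-dlp-style info dict; the helper name is invented, and as noted in the description, the resulting `url` would still need the request-signing step before it is downloadable:

```python
# Sketch: map a Boosty `audio_file` API item (shape shown above) to a
# yt-dlp style info dict. Illustrative only -- a real extractor would
# also need to sign the CDN URL before it is downloadable.
def audio_file_to_info(item):
    title = item.get('title') or ''
    # strip a trailing container extension the API sometimes leaves in titles
    if title.endswith('.mp4'):
        title = title[:-len('.mp4')]
    return {
        'id': item['id'],
        'title': title,
        'url': item['url'],          # not directly downloadable without signing
        'duration': item.get('duration'),
        'filesize': item.get('size'),
        'vcodec': 'none',            # audio-only entry
    }

item = {
    'id': 'eebc5a6a-ea54-4193-826a-0725a8a0c380',
    'size': 43397364,
    'title': 'История и источники_ истина где-то рядом.mp4',
    'url': 'https://cdn.boosty.to/audio/eebc5a6a-ea54-4193-826a-0725a8a0c380',
    'type': 'audio_file',
    'duration': 3565,
}
info = audio_file_to_info(item)
print(info['duration'])  # -> 3565
```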
site-enhancement,triage
low
Critical
2,616,623,048
godot
When duplicating a node with property setters, changing either duplicate's or original's properties temporarily changes the other
### Tested versions v4.3.stable.official.77dcf97d8 and also 4.2 ### System information Linux pop-os 6.9.3-76060903-generic ### Issue description Duplicating a scene with properties, then trying to change those properties, can visually affect the original and duplicate scenes. On reload of the parent scene, only the scene that you modified is actually modified, but this makes editing very annoying if you have to reload the parent scene every time you duplicate something. This occurs even if "Local to scene" is set to true for each of the relevant subnodes in the scene. Maybe the `@onready` variables in the scene don't get duplicated somehow until the parent scene reloads. ### Steps to reproduce 1. Open "world.tscn" in the editor. 2. Duplicate one of the `BoxZ` scenes, e.g., `Box2`. 3. Modify the original `Box2` size property (e.g., set size.y to 7 in the Inspector), and you'll see that it *looks* like it affects the duplicate as well. 4. Reload the "world" scene (above viewport in bar, click X, then open "world.tscn" again) 5. You'll see that the duplicate's size returns to what it was, and only the original's size was changed you can also repeat and change the duplicate's properties instead of the original's in step 3, and the same thing happens. the original and all duplicates look like they are affected by property changes. however, when you reload the scene, only the scenes you changed were actually modified. you can also try duplicating an object multiple times, and duplicating the duplicates, and any change of any property in any of the "working set" will affect the others. ### Minimal reproduction project (MRP) [test-duplicate.zip](https://github.com/user-attachments/files/17534155/test-duplicate.zip)
bug,topic:editor,needs testing
low
Minor
2,616,642,117
godot
Keyframes added to animation from the animation panel occasionally change positions and replace other keyframes
### Tested versions Godot 4.3 Stable ### System information Windows 10 x64, Godot 4.3 Stable, NVIDIA 3060 Ti + Ryzen 5600x ### Issue description Keyframes, created from within the animation panel, can move around and replace other keyframes. Also, this error message popped up: ![image](https://github.com/user-attachments/assets/9c59d548-83ce-42bf-b788-e6cbb59ac6c3) As you see, I've tried to cancel the creation of the wrong frame, but instead it got placed over the next one (in the end of the animation), which is not the desirable outcome. ### Steps to reproduce 1. I've created the animation node in 3D scene 2. Changed duration to 20 3. Added two key frames to animate angle (0-360) 4. Made it automatically played 5. Selected a spot in between, right-clicked and created another KF 6. The frame was created not at the point of time or mouse cursors, but with offset 7. I've left-clicked on it to move, but it was gone 8. Apparently, it moved to the right and replaced my 360 cursor. ### Minimal reproduction project (MRP) You can get this project, it has animation, try to select a spot in the middle and add a keyframe (not sure if it works every time though): https://github.com/ArseniyMirniy/Godot-4-Color-Correction-and-Screen-Effects (for clear testing you can use commits made BEFORE this report, since I've changed the animation now).
bug,topic:editor,topic:animation
low
Critical
2,616,644,022
godot
FBX import issue with IK Bone animation. Character "floats".
### Tested versions Reproducible in 4.3.Stable ### System information Windows 11, Godot 4.3 stable, Blender 4.2.2 ### Issue description I have a walking animation made in Blender in which the body bumps up and down on the "up" axis of the Pelvis bone. Due to IK constraints the feet are always on the ground. When I import this animation in Godot, it initially looks good in the Import window, but when I actually inherit the scene, this movement is not there anymore, and it looks as if the legs "float in the air", as if there is no more ground under the feet. The feet "break through the floor", as it were. I hope the image below illustrates the issue better. I have added a Godot repro project, and (just in case) also the Blender blend file. ![anim_compa_ss](https://github.com/user-attachments/assets/4ddbcc3a-b0f6-40b4-a1ba-0f7168b15778) ### Steps to reproduce Create a walk animation in Blender, using a skeleton with IK applied. Keyframe a walk animation in which you move the body up and down on a hip/pelvis bone. Import the animation in Godot and confirm it works in the Advanced Import Settings window. Inherit the imported file as a new scene, play the animation and confirm that the legs break through the floor and the "bump" is gone. ### Minimal reproduction project (MRP) [repro_anim_import_issue.zip](https://github.com/user-attachments/files/17534331/repro_anim_import_issue.zip) [soldier_lopoly_v1.zip](https://github.com/user-attachments/files/17534333/soldier_lopoly_v1.zip)
bug,topic:import,topic:animation
low
Minor
2,616,644,083
godot
[Tree] Relationship lines 1px wide draw incorrectly when MSAA anti aliasing is enabled
### Tested versions Reproducible in 4.3dev and previous versions. ### System information MacOS 13.6.5, Windows ### Issue description When Rendering->Anti Aliasing->Quality->MSAA 2D is enabled in the Project settings, `Tree` relationship lines 1px wide and at any odd width are anti-aliased and appear blurred. This happens both in the Editor (after re-opening a project with MSAA enabled) and in exported projects. This issue is fixed by PR #98560. Before FIX NO MSAA: ![Screenshot 2024-10-27 at 9 11 01 AM](https://github.com/user-attachments/assets/c593abbd-0c23-4a87-a669-7745976fdfbf) ![Screenshot 2024-10-27 at 9 09 05 AM](https://github.com/user-attachments/assets/398b1821-4732-4969-8cc9-71fafa5c14ab) Before FIX WITH MSAA ![Screenshot 2024-10-27 at 9 10 05 AM](https://github.com/user-attachments/assets/d3cbe921-25b6-4c34-9081-cf0e6f161b99) Notice the defect at the corner of the highlighted parent line ![Screenshot 2024-10-27 at 9 10 15 AM](https://github.com/user-attachments/assets/641015b5-e132-474c-aa13-5324c2b17962) With PR Fix: ![Screenshot 2024-10-27 at 9 14 03 AM](https://github.com/user-attachments/assets/791c2816-6dab-4d2e-8824-0f51f99a3ae7) ![Screenshot 2024-10-27 at 9 14 17 AM](https://github.com/user-attachments/assets/765b13e5-6cac-4a17-8719-b94a69b5568b) ### Steps to reproduce Enable MSAA 2D for a project, then reopen it in the Editor. Any scene tree and the FileSystem tree will exhibit the issue. ### Minimal reproduction project (MRP) See above.
bug,topic:gui
low
Minor
2,616,655,390
godot
Animation won't play during runtime.
### Tested versions - Reproducible in v4.3.stable.official [77dcf97d8] ### System information Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i7-13700KF (24 Threads) ### Issue description For the last week I've been pulling my hair out on why this one state machine is broken. I've deleted it, remade it, renamed it, done everything I can and for some reason it will not play some animations. There is a player scene with an animation tree in it with a state machine. This state machine has two more state machines inside it, one is the moving state machine and the other is the attacking state machine. These systems have been removed from the MRP and are just traveled to and from using some basic code. Inside the attacking machine is where the problem lies. The attacking machine is supposed to play an attack animation, and then at the end, a returning animation in which the player can move to get out of it. Unfortunately, the two states in there play just one animation, if I switch it to anything but the "Baked_Player_Blunt_Wep_Small_Attack_1" animation, it will not play ( technically it does but it stays at frame 0, according to the playback). However, this only occurs after switching the animation of those states by code. Setting them in the editor prompts them to work just fine, as long as they aren't tampered by the code. BUT, this only applies to any other animation than the one I listed earlier, "Baked_Player_Blunt_Wep_Small_Attack_1". Setting both states to that animation via code works perfectly fine. This has been frustrating, to say the least. ### Steps to reproduce In the MRP below, I've ripped out about 95% of the project and this still persists, yet for some reason, I can't get this situation to occur when creating a project from scratch. ### Minimal reproduction project (MRP) [MRP.zip](https://github.com/user-attachments/files/17534374/MRP.zip)
needs testing,topic:animation
low
Critical
2,616,663,980
ant-design
-ms-high-contrast is in the process of being deprecated. Please see https://blogs.windows.com/msedgedev/2024/04/29/deprecating-ms-high-contrast/ for tips on updating to the new Forced Colors Mode standard.
### Reproduction link [https://antd@5.x](https://antd@5.x) ### Steps to reproduce formitem ### What is expected? Silence the warning or fix it ### What is actually happening? -ms-high-contrast is in the process of being deprecated. Please see https://blogs.windows.com/msedgedev/2024/04/29/deprecating-ms-high-contrast/ for tips on updating to the new Forced Colors Mode standard. | Environment | Info | | --- | --- | | antd | 5.21.5 | | React | react / react-native version | | System | win11 | | Browser | edge | --- -ms-high-contrast is in the process of being deprecated. Please see https://blogs.windows.com/msedgedev/2024/04/29/deprecating-ms-high-contrast/ for tips on updating to the new Forced Colors Mode standard. <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Inactive
low
Major
2,616,672,208
PowerToys
Strange empty desktop overlay with TextExtractor active
### Microsoft PowerToys version 0.85.1 ### Installation method PowerToys auto-update ### Running as admin None ### Area(s) with issue? TextExtractor ### Steps to reproduce Start TextExtractor any way ### ✔️ Expected Behavior Only the TextExtractor overlay, with the whole desktop and all windows visible as they were before activating it ### ❌ Actual Behavior On the focused screen, a smaller copy (approx. 80%) of the content of the same screen appears, overlaying the original desktop, but missing all open windows/apps (basically an empty desktop overlaying the actual desktop). The menu of the TextExtractor appears twice, once on the edge of the original screen and once in the overlaid copy. Both menus are usable. But, as a result of that empty desktop overlay, anything in a window beneath it cannot be selected for OCR. Also, some elements of a different monitor are shown, see the HardwareMonitor Widget and Voicemeeter in the photo, both are actually on a different screen that is not visible in the photo. I just tried to make a screenshot of how that looks, but I cannot combine active text extractor and screenshot tool. I discovered, though, that if I click outside the empty desktop overlay, the overlay goes away and the TextExtractor is still active, so then I can select the text I want. Still, something is not right there. Snipping tool does not show that behavior, but for quick extraction of a small amount of text it is inferior due to the extra clicks needed to get the text into the clipboard. Also, the snipping tool does not allow for small amounts of text, like a single row or a URL, resulting in even more steps until the result is usable. This is a photo of the screen showing the effect: ![Image](https://github.com/user-attachments/assets/6119b7b7-d838-4059-89e4-598d10cb107a) ### Other Software Video driver is Studio Driver, current, with Nvidia RTX 3060
Issue-Bug,Needs-Triage,Needs-Team-Response
low
Minor
2,616,680,855
rust
Upgrading from 1.81 to 1.82 creates inconsistent exe files (Release only): Mingw-w64 runtime failure: 32 bit pseudo relocation
For my project https://github.com/mvvvv/StereoKit-rust I was able to build, from linux (mingw 13 & 14.2) and windows (MSYS 14.2), a working windows exe with gcc. After updating to 1.82 I systematically get the following error when launching the windows exe (windows11-pro or wine): `Mingw-w64 runtime failure: 32 bit pseudo relocation at 00000001400017D9 out of range, targeting 00006FFFFAEF8E90, yielding the value 00006FFEBAEF76B3. ` It most recently worked on: rustc 1.81.0 (eeb90cda1 2024-09-04) binary: rustc commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c commit-date: 2024-09-04 host: x86_64-unknown-linux-gnu release: 1.81.0 LLVM version: 18.1.7 ### Version with regression rustc 1.82.0 (f6e511eec 2024-10-15) binary: rustc commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14 commit-date: 2024-10-15 host: x86_64-unknown-linux-gnu release: 1.82.0 LLVM version: 19.1.1 ### Backtrace Nothing more @rustbot modify labels: +regression-from-stable-to-stable -regression-untriaged
T-compiler,O-windows-gnu,C-discussion
medium
Critical
2,616,690,582
deno
Cannot read properties of undefined (reading 'bold') when Next.js runs with Turbopack
Version: Deno 2.0.3 I get this error when running with Deno. ``` TypeError: Cannot read properties of undefined (reading 'bold') at getDefs (file:///D:/Me/Projects/dev/deponet-store/node_modules/.deno/next@15.0.1/node_modules/next/dist/compiled/babel/bundle.js:1848:5496) at highlight (file:///D:/Me/Projects/dev/deponet-store/node_modules/.deno/next@15.0.1/node_modules/next/dist/compiled/babel/bundle.js:1848:6631) at codeFrameColumns (file:///D:/Me/Projects/dev/deponet-store/node_modules/.deno/next@15.0.1/node_modules/next/dist/compiled/babel/bundle.js:1:77678) at formatIssue (file:///D:/Me/Projects/dev/deponet-store/node_modules/.deno/next@15.0.1/node_modules/next/dist/server/dev/turbopack-utils.js:247:20) at addErrors (file:///D:/Me/Projects/dev/deponet-store/node_modules/.deno/next@15.0.1/node_modules/next/dist/server/dev/hot-reloader-turbopack.js:810:85) at handleProjectUpdates (file:///D:/Me/Projects/dev/deponet-store/node_modules/.deno/next@15.0.1/node_modules/next/dist/server/dev/hot-reloader-turbopack.js:819:25) detached processes are not currently supported on Windows ```
needs info,nextjs
low
Critical
2,616,693,119
terminal
Problems with cursor position escape
### Windows Terminal version 1.22.2912.0 ### Windows build number 10.0.22631.4196 ### Other Software https://github.com/cracyc/msdos-player/blob/vt/msdos.cpp#L1088 ### Steps to reproduce Start it in a new console window or run in an existing one and execute a command with lots of output. ### Expected Behavior _No response_ ### Actual Behavior When the program starts it immediately queries the console window size and the cursor position, which hangs forever. If run from an existing console window it starts fine, but if, say, command.com is run and `dir` executed, it will randomly hang waiting for the reply to ESC [6n, although it resumes after a bit or when a key is pressed. This does not happen on terminal version 1.21.2911.0, in legacy conhost, or if GetConsoleScreenBufferInfo is used instead.
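For reference, `ESC [6n` is the VT Device Status Report (cursor position) query, and the terminal is expected to answer with `ESC [ row ; col R` on the input stream. A minimal, hypothetical parser for that reply (not taken from msdos-player's code) shows the shape of the exchange the program is blocking on:

```python
import re

# DSR-CPR reply looks like: ESC [ <row> ; <col> R   (rows/cols are 1-based)
_CPR = re.compile(r'\x1b\[(\d+);(\d+)R')

def parse_cpr(reply):
    """Return (row, col) from a cursor-position report, or None if absent."""
    if isinstance(reply, bytes):
        reply = reply.decode('ascii', 'replace')
    m = _CPR.search(reply)
    if not m:
        return None  # the terminal never answered -- the hang described above
    return int(m.group(1)), int(m.group(2))

print(parse_cpr('\x1b[24;80R'))  # -> (24, 80)
```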
Product-Conpty,Area-Server,Issue-Bug
low
Major
2,616,708,374
kubernetes
gRPC health probe cannot handle TLS
### What happened? Since version 1.27, Kubernetes supports the gRPC health server for readiness and liveness probes. I now have several services that use gRPC with self-signed TLS certificates. The certificates are stored as secrets in the cluster. It took me quite a while to understand why my liveness probes always fail. So I rebuilt my service so that the endpoints are offered with TLS and the health server without. That works. ### What did you expect to happen? The gRPC health probe should also support TLS connections. The standalone grpc-health-probe tool does, although according to its documentation, parameters must be set for this. Here is the link to the repo: [Health Checking TLS Servers](https://github.com/grpc-ecosystem/grpc-health-probe?tab=readme-ov-file#health-checking-tls-servers) ### How can we reproduce it (as minimally and precisely as possible)? A gRPC server with TLS and the corresponding code for the health probe should be sufficient. Here is a small example: ```go cer, err := tls.LoadX509KeyPair("tls/server/tls.crt", "tls/server/tls.key") if err != nil { log.Println(err) return } creds := grpc.Creds(credentials.NewServerTLSFromCert(&cer)) addr := fmt.Sprintf("%s:%d", "0.0.0.0", port) listen, err := net.Listen("tcp", addr) if err != nil { log.Panicln("Error while Listening to : ", addr, err) } grpcServer := grpc.NewServer(creds) // Register health check service healthServer := health.NewServer() grpc_health_v1.RegisterHealthServer(grpcServer, healthServer) // Set the health status to SERVING for your service healthServer.SetServingStatus("APIServer", grpc_health_v1.HealthCheckResponse_SERVING) fmt.Printf("[pid %d] GRPC server Listening on port %d\n", os.Getpid(), port) // service connections if err := grpcServer.Serve(listen); err != nil { log.Panicf("listen: %s\n", err) } ``` ### Anything else we need to know? I think it would be easiest if you could tell the probe whether it is a TLS connection and, if in doubt, refer to a secret in the cluster where the CA certificate or similar is located.
### Kubernetes version <details> ```console $ kubectl version Client Version: v1.30.2 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.30.5+k3s1 ``` </details>
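For context, the built-in probe being discussed is configured with the standard `grpc` probe fields, and those fields currently offer no way to present a client TLS configuration or reference a CA secret — which is exactly the gap this issue describes. A sketch of the existing spec (port and service name are placeholders matching the example code above):

```yaml
livenessProbe:
  grpc:
    port: 9090
    service: APIServer   # the name registered via SetServingStatus above
  initialDelaySeconds: 5
  periodSeconds: 10
```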
priority/backlog,sig/node,kind/feature,triage/accepted
low
Critical
2,616,712,649
PowerToys
[Always on Top] Functionality fails on background click when foreground window is MobaXterm, background application is Excel.
### Microsoft PowerToys version 0.84.1 ### Installation method GitHub ### Running as admin Yes ### Area(s) with issue? Always on Top ### Steps to reproduce The `Always on Top` functionality is not working correctly. Currently, it has been observed that when the background application is `Microsoft Excel` and the foreground window is `MobaXterm`, clicking on the background `Excel sheets` causes the foreground `MobaXterm` to be sent to the background. However, when other applications are in the foreground(for example, the `Microsoft Calculator` or `Notepad`), clicking on Excel does not cause the foreground window to go to the background. ![Image](https://github.com/user-attachments/assets/515c4eb3-0225-407e-90a9-0d258b4b5937) --- `Always on Top` works well on applications like `Microsoft Calculator` : ![Image](https://github.com/user-attachments/assets/51d6e166-fa38-42f9-b3d6-e93daed6d57f) ### ✔️ Expected Behavior _No response_ ### ❌ Actual Behavior _No response_ ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,616,715,529
deno
`deno fmt --quiet --check` does not exit on first difference found & does not have a `--fail-fast` option
Version: Deno 2.0.3 `deno fmt` - updates files. `deno fmt --check` - if 0 issues: - returns 0. - if 1 or more needed updates: - prints all needed updates. - prints: `error: Found 1 not formatted file in 1 file`. - returns 1. `deno fmt --quiet --check` - if 0 issues: - returns 0. - if 1 or more needed updates: - prints: `error: Found 1 not formatted file in 1 file`. - returns 1. Could a `--fail-fast` option please be added that short-circuits? As soon as it knows it is going to exit with a 1, it should instantly exit. `deno fmt --quiet --check --fail-fast` - if 0 issues: - returns 0. - on first found needed update: - returns 1.
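Until such a flag exists, the behavior can be approximated by running the check per file and stopping at the first non-zero exit. A sketch — the `runner` callback is injected here so the short-circuit logic can be shown without invoking `deno` itself; a real runner would be something like `lambda f: subprocess.run(['deno', 'fmt', '--check', '--quiet', f]).returncode`:

```python
def fail_fast_fmt_check(files, runner):
    """Return 1 at the first file `runner` reports as unformatted, else 0."""
    for f in files:
        if runner(f) != 0:
            return 1          # short-circuit: later files are never checked
    return 0

# Simulated run: 'b.ts' is "unformatted"; 'c.ts' must never be checked.
checked = []
def fake_runner(f):
    checked.append(f)
    return 1 if f == 'b.ts' else 0

result = fail_fast_fmt_check(['a.ts', 'b.ts', 'c.ts'], fake_runner)
print(result, checked)  # -> 1 ['a.ts', 'b.ts']
```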
good first issue,suggestion
low
Critical
2,616,719,185
yt-dlp
Shout no longer working, extractor needs update?
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region US ### Provide a description that is worded well enough to be understood https://watch.shout-tv.com/video/640292 Site was supported but stopped working a couple of months ago when they redesigned the site and now it doesn't work as expected with unsupported url. Extractor needs updating to reflect site change? Plays in browser without DRM. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), erro r utf-8 (No VT), screen utf-8 (No VT) [debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (win_exe ) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-7-6.1.7601-SP1 (OpenSSL 1. 1.1k 25 Mar 2021) [debug] exe versions: ffmpeg 7.0.2-essentials_build-www.gyan.dev (setts), ffprob e 7.0.2-essentials_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2. 3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1839 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releas es/latest Latest version: stable@2024.10.22 from yt-dlp/yt-dlp yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp) [generic] Extracting URL: https://watch.shout-tv.com/video/640292 [generic] 640292: Downloading webpage WARNING: [generic] Falling back on generic information extractor [generic] 640292: Extracting information [debug] Looking for embeds ERROR: Unsupported URL: https://watch.shout-tv.com/video/640292 Traceback (most recent call last): File "yt_dlp\YoutubeDL.py", line 1625, in wrapper File "yt_dlp\YoutubeDL.py", line 1760, in __extract_info File "yt_dlp\extractor\common.py", line 741, in extract File "yt_dlp\extractor\generic.py", line 2533, in _real_extract yt_dlp.utils.UnsupportedError: Unsupported URL: https://watch.shout-tv.com/video /640292 ```
site-request
low
Critical