| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,600,518,959 | pytorch | vmap with torch.autograd.grad does not work on output of compiled function | ### 🐛 Describe the bug
The following code is an example provided in the documentation of vmap:
```python
import torch
# Setup
N = 5
f = lambda x: x ** 2
x = torch.randn(N, requires_grad=True)
y = f(x)
I_N = torch.eye(N)
# Sequential approach
jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]
                 for v in I_N.unbind()]
jacobian = torch.stack(jacobian_rows)
# vectorized gradient computation
def get_vjp(v):
    return torch.autograd.grad(y, x, v)
jacobian = torch.vmap(get_vjp)(I_N)
```
It works as expected.
If we instead use a compiled version of `f`, it no longer works:
```python
import torch
# Setup
N = 5
@torch.compile
def f(x):
    return x ** 2
x = torch.randn(N, requires_grad=True)
y = f(x)
I_N = torch.eye(N)
# Sequential approach
jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]
                 for v in I_N.unbind()]
jacobian = torch.stack(jacobian_rows)
# vectorized gradient computation
def get_vjp(v):
    return torch.autograd.grad(y, x, v)
jacobian = torch.vmap(get_vjp)(I_N)
```
gives the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<stdin>", line 2, in get_vjp
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2048, in backward
out = call_compiled_backward()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1980, in call_compiled_backward
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1478, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_valerian/rb/crbllcqww3ywex2cpe7oqemxcqzr4ij3i6akxypiwhhen3uac4lf.py", line 60, in call
cpp_fused_mul_pow_0(tangents_1, primals_1, buf0)
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
Exception raised from throw_data_ptr_access_error at ../c10/core/TensorImpl.cpp:316 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f0379c0f446 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x68 (0x7f0379bb97ad in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #2: c10::TensorImpl::throw_data_ptr_access_error() const + 0x34 (0x7f0379be7874 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x8fd078 (0x7f03c41bb078 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x3bff (0x7f0309d91bff in /tmp/torchinductor_valerian/yh/cyh6svyfmlrinzemycsxqkpt5m5dd25usb5jtjtcy2qihw6hf4m4.so)
<omitting python frames>
frame #15: torch::autograd::PyNode::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x9e (0x7f03c4161e7e in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #16: <unknown function> + 0x55cafbb (0x7f03b422ffbb in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #17: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x15de (0x7f03b422a0ce in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #18: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x69f (0x7f03b422ad2f in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #19: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x3d5 (0x7f03b4225325 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #20: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x25 (0x7f03c415d5c5 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #21: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0xbac (0x7f03b422855c in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #22: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x56 (0x7f03c415d556 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #23: THPEngine_run_backward(_object*, _object*, _object*) + 0x300 (0x7f03c415b9a0 in /home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #36: __libc_start_main + 0xf3 (0x7f03c61be083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #37: _start + 0x2e (0x55681171208e in /home/valerian/.pyenv/versions/test-venv/bin/python)
```
I think this is closely related to a similar issue that has already been fixed: https://github.com/pytorch/pytorch/issues/100320
### Versions
/home/valerian/.pyenv/versions/test-venv/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-6ubuntu2) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.12.4 (main, Oct 20 2024, 16:13:26) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Stepping: 10
CPU MHz: 2200.000
CPU max MHz: 4100,0000
CPU min MHz: 800,0000
BogoMIPS: 4399.99
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1,5 MiB
L3 cache: 9 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @zou3519 @ezyang @chauhang @penguinwu @Chillee @samdow @kshitij12345 @bdhirsh @yf225 | triaged,module: vmap,oncall: pt2,module: functorch,module: pt2-dispatcher | low | Critical |
2,600,551,660 | neovim | cursor position wrong after wrapped unicode line | ### Problem
When the string "नमस्कार" is encountered after a soft-wrapped line, the insert mode cursor is always one position ahead of where the characters are actually inserted.
This is the cursor right after pressing `A`:

If I type `a` it goes right beside the "'" but the cursor is one position ahead

If I type `backspace` right after `A` it deletes the ' even though visually the cursor is far away:

### Steps to reproduce
```
echo "Very long string that spans at least one line of the terminal, very very very very very very long string 'नमस्कार'" > /tmp/unicodebug.txt
nvim --clean /tmp/unicodebug.txt
:set wrap
```
Then press `A` to go to the end of the line and start inserting characters; the cursor appears one position ahead of where it should be.
### Expected behavior
In insert mode, characters are inserted directly at the cursor position.
### Nvim version (nvim -v)
NVIM v0.10.2 Build type: RelWithDebInfo LuaJIT 2.1.1727870382
### Vim (not Nvim) behaves the same?
No, Vim 9.1.2024 works as expected
### Operating system/version
6.11.1-arch1-1
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
`pacman -S neovim` | bug,tui,display,unicode 💩 | low | Critical |
2,600,558,907 | rust | rustdoc should link to its current version of the rustdoc book | Currently, rustdoc just links to https://doc.rust-lang.org/rustdoc/
This could lead to confusion for old crates on docs.rs when someone tries to use a new feature and it doesn't work.
Perhaps rustdoc 1.80.1 should link to https://doc.rust-lang.org/1.80.1/rustdoc/ and other versions should link to their matching documentation...
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"poliorcetics"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | T-rustdoc,C-enhancement,A-rustdoc-ui | low | Minor |
2,600,561,830 | pytorch | torch.div throws a floating point exception when rounding_mode is `trunc` or `floor` | ### 🐛 Describe the bug
Here is the code to reproduce:
```python
import torch
x = torch.tensor([-2147483648], dtype=torch.int32)
y = torch.tensor([-1], dtype=torch.int32)
torch.div(x, y, rounding_mode='trunc')
```
When `x=-2147483648` and `y=-1`, setting the rounding_mode to `trunc` or `floor` will lead to a floating point exception. Instead, when not specifying rounding_mode (rounding_mode is None), this API will not crash.
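For context (my reading of the crash, not confirmed against the PyTorch source): the true quotient of `-2147483648 / -1` is `2147483648`, which is not representable in `int32`, and on x86 the integer-division instruction raises `SIGFPE` for exactly this overflow case, which the shell reports as a "floating point exception". Plain Python integers are arbitrary precision, so the overflow is easy to see:

```python
INT32_MIN = -2**31       # -2147483648
INT32_MAX = 2**31 - 1    # 2147483647

# The mathematically exact quotient, computed without a fixed-width type:
quotient = INT32_MIN // -1
print(quotient)                # 2147483648
print(quotient > INT32_MAX)    # True: does not fit in int32
```

This is the same `INT_MIN / -1` edge case that any fixed-width integer division has to guard against explicitly.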
### Versions
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.3.101 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @albanD | module: crash,triaged,module: intel,module: python frontend,module: edge cases | low | Critical |
2,600,562,001 | deno | Export feature for P-521 and X25519 cryptoKey | It would be better if Deno fully supported the WebCrypto API.
I have tested that Deno can create P-521 and X25519 CryptoKeys like a charm.
**P-521 cryptoKey**
- [ ] export to JWK failed on both private and public key - `NotSupportedError: Unsupported namedCurve`
- [ ] export to RAW failed - `TypeError: expected valid private EC key`
**X25519 cryptoKey**
- [ ] export to JWK failed on private key only - `NotSupportedError: Not implemented` | feat,web,ext/crypto,crypto | low | Critical |
2,600,567,485 | PowerToys | FancyZones - Wuthering Waves unable to snap | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
Open Wuthering Waves (Game)
Press Shift to snap, snap doesn't show up, nor does it work
### ✔️ Expected Behavior
Wuthering Waves should snap like any other game/application I have. It works with Genshin Impact, Star Rail, Edge, Chrome, Telegram... it works for everything except Wuthering Waves.
### ❌ Actual Behavior
Wuthering Waves doesn't snap, and the snapping HUD doesn't show either. It just doesn't work; I don't know why.
### Other Software
Wuthering Waves v1.3 (Game) | Issue-Bug,Needs-Triage | low | Minor |
2,600,573,383 | flutter | Autocomplete(Flutter 3.24.3), `optionsViewBuilder` does not get the last options of `optionsBuilder` | ### Steps to reproduce
Flutter 3.19.6 works fine; Flutter 3.24.3 appears to have broken something.
1. Run the example code on Windows, Linux, or MacOS.
2. Click the edit field.
### Expected results
The options view should show three options: "aaa", "abc", "bcc".
### Actual results
It shows one option, "empty".
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() => runApp(const AutocompleteExampleApp());

class AutocompleteExampleApp extends StatelessWidget {
  const AutocompleteExampleApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text(
              'Autocomplete - optionsBuilder and optionsViewBuilder'),
        ),
        body: const Center(
          child: _AsyncAutocomplete(),
        ),
      ),
    );
  }
}

class _AsyncAutocomplete extends StatefulWidget {
  const _AsyncAutocomplete();

  @override
  State<_AsyncAutocomplete> createState() => _AsyncAutocompleteState();
}

class _AsyncAutocompleteState extends State<_AsyncAutocomplete> {
  Iterable<String> _options = <String>['empty'];

  @override
  void initState() {
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return Autocomplete<String>(
      optionsBuilder: (TextEditingValue textEditingValue) async {
        debugPrint('================= optionsBuilder, options: $_options');
        if (_options.length == 1) {
          return <String>['empty'];
        } else {
          return [..._options];
        }
      },
      fieldViewBuilder: (
        BuildContext context,
        TextEditingController fieldTextEditingController,
        FocusNode fieldFocusNode,
        VoidCallback onFieldSubmitted,
      ) {
        fieldTextEditingController.text = 'aaa';
        fieldFocusNode.addListener(() async {
          if (fieldFocusNode.hasFocus && _options.length == 1) {
            setState(() {
              _options = <String>['aaa', 'abc', 'bcc'];
            });
          }
        });
        return TextField(
          focusNode: fieldFocusNode,
          controller: fieldTextEditingController,
        );
      },
      optionsViewBuilder: (BuildContext context,
          AutocompleteOnSelected<String> onSelected, Iterable<String> options) {
        debugPrint('================= optionsViewBuilder, options: $options');
        return Material(
          elevation: 4.0,
          child: ListView(
            children: options
                .map((String option) => GestureDetector(
                      onTap: () {
                        onSelected(option);
                      },
                      child: ListTile(
                        title: Text(option),
                      ),
                    ))
                .toList(),
          ),
        );
      },
    );
  }
}
```
</details>
### Logs
<details open><summary>Logs, flutter 3.24.3</summary>
```console
flutter: ================= optionsBuilder, options: [empty]
flutter: ================= optionsBuilder, options: [aaa, abc, bcc]
flutter: ================= optionsBuilder, options: [aaa, abc, bcc]
flutter: ================= optionsViewBuilder, options: [empty]
```
</details>
<details open><summary>Logs, flutter 3.19.6</summary>
```console
flutter: ================= optionsBuilder, options: [empty]
flutter: ================= optionsViewBuilder, options: [empty]
flutter: ================= optionsBuilder, options: [aaa, abc, bcc]
flutter: ================= optionsBuilder, options: [aaa, abc, bcc]
flutter: ================= optionsViewBuilder, options: [aaa, abc, bcc]
```
</details>
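The logs suggest a lost-update race: several async `optionsBuilder` calls overlap, and the view ends up applying a stale result. A generic illustration in plain Python asyncio (nothing Flutter-specific; the names here are made up, and the usual fix is "latest request wins", i.e. discarding results from superseded calls):

```python
import asyncio

async def options_builder(options, delay):
    # Stand-in for an async optionsBuilder call that resolves after `delay`.
    await asyncio.sleep(delay)
    return options

async def race():
    # Two overlapping requests: the stale one (old options) resolves last.
    stale = asyncio.create_task(options_builder(["empty"], 0.02))
    fresh = asyncio.create_task(options_builder(["aaa", "abc", "bcc"], 0.01))
    shown = None
    for task in asyncio.as_completed([stale, fresh]):
        shown = await task  # naively apply results in completion order
    return shown

print(asyncio.run(race()))  # ['empty']: the stale result wins
```

Whether this is literally what changed between 3.19.6 and 3.24.3 would need to be confirmed against the framework source.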
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4317], locale en-US)
• Flutter version 3.24.3 on channel stable at D:\DevEnv\flutter\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (6 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\Administrator\AppData\Local\Android\sdk
X cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
X Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/windows-android-setup for more details.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.3)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.35013.160
• Windows 10 SDK version 10.0.26100.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
[√] VS Code (version 1.94.2)
• VS Code at C:\Users\Administrator\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.98.0
[!] Proxy Configuration
• HTTP_PROXY is set
! NO_PROXY is not set
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4317]
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.101
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.46
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| a: text input,c: regression,framework,f: material design,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Critical |
2,600,625,837 | godot | Calling Multiplayer.MultiplayerPeer.Close() on a server with compression does not free the port | ### Tested versions
- Reproducible in Godot 4.3
### System information
Windows 11
### Issue description
I am unsure if this is a bug or if I'm just missing something, but calling `Multiplayer.MultiplayerPeer.Close();` on the server does not free the port.
This happens only if compression is enabled, no matter the compression mode.
### Steps to reproduce
- Create a server
```c#
ENetMultiplayerPeer peer = new ENetMultiplayerPeer();
Error e = peer.CreateServer(8001);
if (e != Error.Ok) {
    GD.Print("Error when creating server: " + e);
    return;
}
/* Commenting this line prevents the bug */
peer.Host.Compress(ENetConnection.CompressionMode.RangeCoder);
Multiplayer.MultiplayerPeer = peer;
```
- Stop the server
```c#
Multiplayer.MultiplayerPeer.Close();
```
At this point, the port is still used by godot:

- Create the server again using the same code in step 1
This triggers an error and outputs:
`Error when creating server: CantCreate`
### Minimal reproduction project (MRP)
[new-game-project.zip](https://github.com/user-attachments/files/17451211/new-game-project.zip)
| bug,topic:network | low | Critical |
2,600,628,517 | rust | rewrite documentation for htmldocck | the documentation is at https://github.com/rust-lang/rust/blob/a2a1206811d864df2bb61b2fc27ddc45a3589424/src/etc/htmldocck.py#L4-L119
it has several issues:
* [ ] it is hard to find, not linked to from rustc-dev-guide
* [ ] it has not been updated to reflect the fact that the syntax has changed from `// @foo` to `//@ foo`.
* [ ] says "only absolute paths are supported", but then requires xpaths to start with `//`, not `/`. | T-rustdoc,C-enhancement,A-docs,C-bug | low | Minor |
2,600,687,290 | PowerToys | Copy as UNC | ### Description of the new feature / enhancement
When working on a company network that has mapped drive letters, it can be frustrating when browsing to a path, copying the path and after pasting the mapped drive letter is included rather than the UNC path, as the UNC path is required.
### Scenario when this would be used?
When an in-house or line-of-business application requires a UNC path rather than the mapped-drive path, rebuilding the path by hand costs time and frustration: paste the mapped-drive path into a text editor, look up where the mapped drive points, take the start of the UNC path (\\\\Server\share), and append the rest of the mapped-drive path to it. When a user has to do this many times a day, Copy as UNC, just like Copy as Path, could save a lot of time; the use case is very similar, except it applies to network shares instead of local paths.
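Mechanically, the conversion is just a prefix swap once the drive mapping is known. A rough sketch (hypothetical helper; a real implementation would query the mapping from Windows via `WNetGetConnection` or `net use` rather than a hard-coded table):

```python
def to_unc(path: str, mappings: dict[str, str]) -> str:
    """Replace a mapped drive letter (e.g. 'Z:') with its UNC share prefix."""
    drive, sep, rest = path.partition(":\\")
    if sep and drive.upper() in mappings:
        return mappings[drive.upper()] + "\\" + rest
    return path  # already UNC, or an unmapped local path

mappings = {"Z": r"\\Server\share"}  # hypothetical mapping
print(to_unc(r"Z:\reports\2024\q3.xlsx", mappings))
# \\Server\share\reports\2024\q3.xlsx
```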
### Supporting information
I have built a proof of concept for this feature to help demonstrate the idea: https://github.com/RamblingGeekUK/Copy-as-UNC | Idea-New PowerToy | low | Major |
2,600,696,089 | opencv | OpenCV Members not recognized by PyLint | ### System Information
# Summary
When working with cv2, PyLint complains that it cannot find the cv2 members, flooding the Problems tab with messages that are in fact not problems at all. Below is an MWE to reproduce some of the messages.
### Detailed description
Tested it without being an extension, by directly installing the latest version. Seems to cause the same issue, nevertheless:
```console
pylint --version
pylint 3.3.1
astroid 3.3.5
Python 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
```
```console
pylint opencv_mwe_pylint_err.py
************* Module object_tracker.opencv_mwe_pylint_err
opencv_mwe_pylint_err.py:5:6: E1101: Module 'cv2' has no 'VideoCapture' member (no-member)
opencv_mwe_pylint_err.py:6:25: E1101: Module 'cv2' has no 'createBackgroundSubtractorMOG2' member (no-member)
opencv_mwe_pylint_err.py:10:22: E1101: Module 'cv2' has no 'findContours' member (no-member)
opencv_mwe_pylint_err.py:11:21: E1101: Module 'cv2' has no 'RETR_EXTERNAL' member (no-member)
opencv_mwe_pylint_err.py:11:40: E1101: Module 'cv2' has no 'CHAIN_APPROX_SIMPLE' member (no-member)
opencv_mwe_pylint_err.py:14:0: E1101: Module 'cv2' has no 'destroyAllWindows' member (no-member)
------------------------------------------------------------------
Your code has been rated at 0.00/10 (previous run: 0.00/10, +0.00)
```
```console
pip list
...
opencv-python 4.10.0.84
```
### Steps to reproduce
```python
"""Minimum Working Example showing the issue that PyLint does not recognize cv2 Members"""
import cv2
cap = cv2.VideoCapture("test.mp4")
background_subtractor = cv2.createBackgroundSubtractorMOG2()
ret, frame = cap.read()
foreground_mask = background_subtractor.apply(frame)
contours, hierarchy = cv2.findContours(
foreground_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
cv2.destroyAllWindows()
cap.release()
```
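A commonly suggested workaround (it silences the false positives by letting PyLint import and introspect the compiled extension module; it does not change OpenCV itself) is to allow-list `cv2` in the PyLint configuration, e.g. in `.pylintrc`:

```ini
[MASTER]
# Let pylint/astroid load the compiled cv2 module for member introspection
extension-pkg-allow-list=cv2
```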
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: python bindings | low | Minor |
2,600,701,475 | terminal | Half-width Katakana and (han)dakuten should not overlap/combine. | ### Windows Terminal version
1.23.2913.0
### Windows build number
10.0.26100.2033 ARM64
### Explanations
I believe there is an error in the code for grapheme clusters text width computation in the current version of Windows Terminal (tested in Preview and Canary).
Japanese in the terminal can be tricky. For historical reasons there are two sets of katakana: a full-width set that fits square, double-width cells like hiragana and kanji, and a half-width set that fits single cells like ASCII text does.
The problem is how these handle dakuten (and handakuten, but I'll use dakuten to refer to both from now on), which are the Japanese equivalent of accents, and like other diacritical marks, can be combining or not… We have 3 sets of them, a non-combining half-width version, a non-combining full-width version, and a combining full-width version, plus precomposed characters as well.
U+3099 ゛ COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK
U+309A ゜ COMBINING KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK
U+309B ゛ KATAKANA-HIRAGANA VOICED SOUND MARK
U+309C ゜ KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK
U+FF9E ゙ HALFWIDTH KATAKANA VOICED SOUND MARK
U+FF9F ゚ HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK
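The split is visible in the Unicode character database (a quick stdlib check; the grapheme-cluster grouping of U+FF9E/U+FF9F comes, if I recall correctly, from UAX #29's Other_Grapheme_Extend property, which is why a cluster segmenter glues them to the preceding kana even though they are not combining marks):

```python
import unicodedata

for cp in (0x3099, 0x309A, 0x309B, 0x309C, 0xFF9E, 0xFF9F):
    ch = chr(cp)
    # General category and East Asian Width for each sound mark
    print(f"U+{cp:04X}  category={unicodedata.category(ch):2}  "
          f"east_asian_width={unicodedata.east_asian_width(ch)}")
# U+3099/U+309A are combining marks (Mn, width W); U+309B/U+309C are
# ordinary wide symbols (Sk, W); U+FF9E/U+FF9F are halfwidth letters (Lm, H).
```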
Take `Windows Terminal` written in Japanese: `ウィンドウズ・ターミナル`.
The `ド` is `ト` with an extra `゛` mark, and `ズ` is `ス` with an extra `゛` mark.
There are 46 katakana, plus 9 small forms, which required their own glyphs in old terminals and PCs, and a large part of them can combine with `゛` and/or `゜`, yielding an extra 30 common combined katakana; some foreign sounds can also be represented using less common combinations, for a total of 92 katakana glyph variations. Add the Japanese punctuation characters, and we reach over 100 symbols.
So while combined katakana+dakuten glyphs are desirable and better looking, old systems didn't combine them: they used the base katakana glyph followed by the (han)dakuten glyph. This worked pretty well for half-width katakana, as they feel squeezed, and the dakuten as a second character cell basically just made those square again.
So in half-width katakana, Windows Terminal is written `ウィンドウズ・ターミナル`. Note how the `ド` is represented using the two glyphs `ド`, and `ズ` with the two glyphs `ズ`.
When handling them as grapheme clusters, it makes sense to treat a half-width katakana plus dakuten as a single group; they should never be separated. But when displayed in a console or terminal, they are separate characters and should probably be handled separately, as in legacy systems such as those using Shift-JIS (MS-DOS and Windows code page 932) they were really separate characters, and dakuten could be placed anywhere by themselves.
Even more important, when displaying them, they do not combine or overlap!
The following behavior is the correct and expected way to show them in a terminal:

And is the way it works in Windows Terminal Canary with the `wcswidth` text measurement mode.
But when using the `Grapheme clusters` text measurement mode, half-width handakuten are handled like combining diacritic, overlapping the previous katakana:

So to be clear, `U+3099` and `U+309A` are full-width combining, while `U+309B` and `U+309C` are full-width, `U+FF9E` and `U+FF9F` are half-width, all non-combining.
`ウィンドウズ・ターミナル` is full-width using precomposed characters, `ウィンドウズ・ターミナル` is full-width using combining dakuten, `ウィント゛ウス゛・ターミナル` is full-width using non-combining dakuten, and `ウィンドウズ・ターミナル` is half-width, which is always non-combining.
I think for Windows Terminal, the grapheme clusters code should not group half-width katakana with dakuten or handakuten. It would fix the text measurement issue and users probably expect to be able to navigate between those characters as if they were completely separate for cursor navigation.
### Expected Behavior
Half-width katakana shouldn't have dakuten overlapping them.
### Actual Behavior
Half-width katakana has dakuten overlapping them as if they were combining diacritical marks. | Area-Rendering,Area-Fonts,Issue-Bug,Product-Terminal | low | Critical |
2,600,749,811 | rust | Crater runs for 1.83 | Note: Please do not conduct triage on these runs without discussing how to do so with a release team member first. Thanks! | S-waiting-on-review,T-release | medium | Major |
2,600,779,098 | vscode | Images barely work in JsDoc | I'm trying to add an image to a JsDoc, but it works only in some specific cases.
(Everything pointed out in this issue works perfectly in normal Markdown)
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.94.2
- OS Version: Windows 11
Steps to Reproduce:
1. Open a new JavaScript tab
2. Paste the provided code
3. Hover `a`
Code:
```js
/**
* (The images are from random repos)
*
* ---
*
* Completely stripped out regardless (Tag "img")
*
* <img src="https://raw.githubusercontent.com/ReturnInfinity/BareMetal-OS/refs/heads/master/images/BareMetal%20OS%20-%20Dark.png">
*
* ---
*
* Works (.png)
*
* 
*
* ---
*
* Doesn't work (.svg)
*
* 
*
* ---
*
* Works (Same .svg with proxy)
*
* 
*
* ---
*
* Doesn't work (Url generated by pasting the same .svg in a GitHub issue)
*
* 
*
* ---
*
* Works (Same .svg with data URL)
*
* 
*/
const a = 1;
```
I tried going to "Help" > "Toggle Developer Tools" and this is what the console is telling me

These lines are logged each time I hover `a` | help wanted,feature-request,typescript,javascript | low | Critical |
2,600,779,321 | awesome-mac | 🎉 Add Mailmate | ### 🪩 Provide a link to the proposed addition
https://freron.com
### 😳 Explain why it should be added
Mailmate is one of the strongest Mail clients for Mac. It has Spam Sieve integrated and has a very powerful tagging and search function. Nothing matches Mailmate's search.
### 📖 Additional context
_No response_
### 🧨 Issue Checklist
- [X] I have checked for other similar issues
- [X] I have explained why this change is important
- [X] I have added necessary documentation (if appropriate) | addition | low | Minor |
2,600,785,684 | go | x/net/trace: registered routes conflict with "GET /" | ### Go version
go version go1.23.2 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/alex/Library/Caches/go-build'
GOENV='/Users/alex/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/alex/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/alex/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.2/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.2/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='on'
GOTELEMETRYDIR='/Users/alex/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/alex/Code/test-trace-bug/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/lg/kw6gcgh126lf1nw8q7wjssc00000gn/T/go-build2664326740=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
https://go.dev/play/p/oQEzMhqVvwn
When I add the dependency "cloud.google.com/go/storage" (or any other Google library that has a sub-dependency on "golang.org/x/net/trace") together with a handler func for "GET /", `go run` panics with this error:
`panic: pattern "GET /" (registered at /Users/alex/Code/test-trace-bug/main.go:12) conflicts with pattern "/debug/requests" (registered at /Users/alex/go/pkg/mod/golang.org/x/net@v0.29.0/trace/trace.go:130):`
`GET / matches fewer methods than /debug/requests, but has a more general path pattern`
### What did you see happen?
Output of the `go run` command with the example program (https://go.dev/play/p/oQEzMhqVvwn):
```
-> % go run main.go
panic: pattern "GET /" (registered at /Users/alex/Code/test-trace-bug/main.go:12) conflicts with pattern "/debug/requests" (registered at /Users/alex/go/pkg/mod/golang.org/x/net@v0.29.0/trace/trace.go:130):
GET / matches fewer methods than /debug/requests, but has a more general path pattern
goroutine 1 [running]:
net/http.(*ServeMux).register(...)
/opt/homebrew/Cellar/go/1.23.2/libexec/src/net/http/server.go:2797
net/http.HandleFunc({0x1059e1273?, 0x1069f3860?}, 0x0?)
/opt/homebrew/Cellar/go/1.23.2/libexec/src/net/http/server.go:2791 +0x9c
main.main()
/Users/alex/Code/test-trace-bug/main.go:12 +0x50
exit status 2
```
### What did you expect to see?
Expected the program to build and run without the pattern-conflict panic.
Let me know if this is an error related to x/net/trace or google cloud storage package.
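For what it's worth, the precedence rule quoted in the panic can be sketched as a toy model (written in Python purely for illustration; the real Go 1.22+ `net/http` matcher also handles wildcards and host patterns):

```python
def is_subset(p1, p2):
    """True if every request matched by p1 is also matched by p2.
    A pattern here is (method or None, path); a path ending in "/"
    matches the whole subtree rooted there."""
    (m1, path1), (m2, path2) = p1, p2
    method_ok = m2 is None or m1 == m2
    path_ok = path1 == path2 or (path2.endswith("/") and path1.startswith(path2))
    return method_ok and path_ok

def overlaps(p1, p2):
    """True if some request matches both patterns."""
    (m1, path1), (m2, path2) = p1, p2
    methods = m1 is None or m2 is None or m1 == m2
    paths = is_subset((None, path1), (None, path2)) or is_subset((None, path2), (None, path1))
    return methods and paths

def conflicts(p1, p2):
    """Registration panics when two patterns overlap but neither subsumes the other."""
    return overlaps(p1, p2) and not is_subset(p1, p2) and not is_subset(p2, p1)

# "GET /" matches fewer methods, "/debug/requests" has a narrower path:
# they overlap, neither one wins, so ServeMux rejects the pair.
print(conflicts(("GET", "/"), (None, "/debug/requests")))  # True
```

Common workarounds are registering handlers on your own `http.NewServeMux()` rather than the default mux (x/net/trace registers `/debug/requests` on `http.DefaultServeMux` in an `init` function), or narrowing the root pattern to `GET /{$}`, which matches only the root path.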
| NeedsDecision | low | Critical |
2,600,850,761 | deno | [ERR_HTTP2_SOCKET_UNBOUND]: The socket has been disconnected from the Http2Session | Version: Deno 2.0.2 (stable, release, aarch64-apple-darwin)
I am working with Pulumi's Automation API and whenever I call `stack.preview()` or `stack.up()` which relies heavily on gRPC, I'm getting this error:
```
>>> Error in Http2Server Error [ERR_HTTP2_SOCKET_UNBOUND]: The socket has been disconnected from the Http2Session
at Object.get (node:http2:92:19)
at Http2Server.<anonymous> ([project's path]/node_modules/@grpc/grpc-js/src/server.ts:1638:25)
at Http2Server.emit (ext:deno_node/_events.mjs:393:28)
at Http2Server.<anonymous> (node:http2:1214:14)
at Http2Server.emit (ext:deno_node/_events.mjs:393:28)
at TCP._onconnection [as onconnection] (node:net:1127:8)
at TCP.#accept (ext:deno_node/internal_binding/tcp_wrap.ts:358:12)
at eventLoopTick (ext:core/01_core.js:175:7) {
code: "ERR_HTTP2_SOCKET_UNBOUND",
name: "Error"
}
```
This is the error I'm getting from Pulumi when executing `stack.preview()`:
```
error: failed to discover plugin requirements: connection error: desc = "error reading server preface: read tcp 127.0.0.1:55811->127.0.0.1:55807: use of closed network connection"
Previewing update (dev)
```
- As you can see, the error seems to be a result of Deno's HTTP/2 issue affecting gRPC (the socket being disconnected from the Http2Session) | bug,node compat | low | Critical |
2,600,868,914 | deno | 🐛 WinOS/WSL-1 ~ running `deno install` with Deno.Command has errors | Version: Deno 1.44.0+, including Deno 2.0+
Under WinOS/WSL-1, executing `deno install ...` using Deno.Command produces errors and takes a very long time to complete.
```shell
$ time deno install -Afg https://cdn.jsdelivr.net/gh/rivy/deno.dxx@4a5f3bba/eg/args.ts
✅ Successfully installed args
/home/toor/.deno/bin/args
real 0m0.160s
user 0m0.250s
sys 0m0.125s
$ time deno eval "const c = new Deno.Command('deno', { args: ['install','-Afg','https://cdn.jsdelivr.net/gh/rivy/deno.dxx@4a5f3bba/eg/args.ts'] }); const p = c.spawn(); await p.output();"
Could not initialize cache database '/home/toor/.cache/deno/dep_analysis_cache_v2', deleting and retrying... (locking protocol
Caused by:
Error code 15: Database lock protocol error)
Failed to open cache file '/home/toor/.cache/deno/dep_analysis_cache_v2', opening in-memory cache.
✅ Successfully installed args
/home/toor/.deno/bin/args
real 0m33.035s
user 0m0.219s
sys 0m0.266s
```
The change happened when upgrading from 1.43.6 to 1.44.0.
The slow-down and error don't occur in WSL-2.
Maybe just add a simple WSL-1 check and fall back immediately to the in-memory cache for that case? That should fix the error and most of the speed issue if a further refactor isn't desired.
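The WSL-1 check itself is typically a kernel-string heuristic; a sketch (in Python for illustration only; Deno itself is written in Rust, and the exact marker strings are assumptions that may vary across Windows builds):

```python
def is_wsl1(proc_version: str) -> bool:
    """Heuristic: WSL-1 kernels report strings like
    'Linux version 4.4.0-19041-Microsoft ...', while WSL-2 kernels
    contain 'microsoft-standard-WSL2' (lowercase, plus the WSL2 marker)."""
    return "Microsoft" in proc_version and "WSL2" not in proc_version

def running_under_wsl1() -> bool:
    """Read the live kernel string; False on non-Linux systems."""
    try:
        with open("/proc/version") as f:
            return is_wsl1(f.read())
    except OSError:  # not Linux, or /proc unavailable
        return False

# On WSL-1 a tool could skip the file-locking probe entirely and go
# straight to the in-memory cache, as suggested above.
```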
Alternatively, some flag to avoid the database probe might help, as the user could check for WSL-1 and use `deno install ...` in a more system-efficient manner. | bug,windows,install | low | Critical |
2,600,892,670 | ui | [bug]: SheetTitle Error on Sidebar | ### Describe the bug
Hi everyone,
I found a small bug in the mobile view of the sidebars. The mobile sidebar is built on top of the sheet component and therefore requires a SheetTitle.
The error is:
`DialogContent` requires a `DialogTitle` for the component to be accessible for screen reader users.
If you want to hide the `DialogTitle`, you can wrap it with our VisuallyHidden component.
For more information, see https://radix-ui.com/primitives/docs/components/dialog
> To resolve the issue, you can either add the SheetTitle or use the Radix VisuallyHidden component to hide the Sheet Title.
```tsx
<VisuallyHidden.Root>
  <SheetTitle>Menu</SheetTitle>
</VisuallyHidden.Root>
```
In particular: The sidebar.tsx has the following code:
```tsx
if (isMobile) {
  return (
    <Sheet open={openMobile} onOpenChange={setOpenMobile} {...props}>
      <SheetContent
        data-sidebar="sidebar"
        data-mobile="true"
        className="w-[--sidebar-width] bg-sidebar p-0 text-sidebar-foreground [&>button]:hidden"
        style={
          {
            "--sidebar-width": SIDEBAR_WIDTH_MOBILE,
          } as React.CSSProperties
        }
        side={side}
      >
        <VisuallyHidden.Root>
          <SheetTitle>Menu</SheetTitle>
        </VisuallyHidden.Root>
        <div className="flex h-full w-full flex-col">{children}</div>
      </SheetContent>
    </Sheet>
  )
}
```
Just add it above the div: the title stays visually hidden while the sheet error is fixed.
<img width="797" alt="screeshot" src="https://github.com/user-attachments/assets/be556648-a9fd-45e4-b31b-a9fe653fb158">
### Affected component/components
sidebar.tsx
### How to reproduce
1. Use mobile sidebar
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
-
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,600,908,476 | terminal | CMD's `start` command and `Start-Process pwsh` open all new tabs in a new second instance of WT, despite settings | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.19044.5011
### Other Software
_No response_
### Steps to reproduce
Edit: This can also be done from Powershell using `start-process pwsh` (and likely `start powershell`)
1. Open Windows Terminal with no other instances already running.
2. In that same window, open Command Prompt profile (unless it is your default launch profile)
3. Execute `start` command.
4. Execute `start` command from the same window as step 3.
5. Execute `start` command on the new spawned window from steps 3 or 4.
### Startup settings:

### Expected Behavior
- With step 3, `start` should open a new tab in the same window we just opened.
- With step 4, `start` should open a new tab in the same window we just opened, again.
- With step 5, if executing `start` from a new window, it should open a new tab in that same window.
- If steps 3 and 4 go as expected, step 5 shouldn't even exist.
### Actual Behavior
- With step 3, `start` opens a new tab in a new window. (unexpected)
- With step 4, `start` opens a new tab in the same new window opened in step 2. (unexpected)
- With step 5, `start` in the same new window as step 3 opens a new tab in the same new window opened in step 3 (this one works as expected). | Issue-Bug,Product-Terminal,Area-Windowing | low | Major |
2,600,911,250 | storybook | [Bug]: Storybook can't compile, build and run after using "pnpm patch ..." | ### Describe the bug
Hi,
I have used a simple patch feature from pnpm to patch a next package.
I made two changes in "node_modules/next/dist/server/lib/start-server.js" file:
```js
// ...
const _trace = require("../../trace");
const _ispostpone = require("./router-utils/is-postpone");
const _logger = require('pino-http')(); // HERE
function _interop_require_default(obj) {
    return obj && obj.__esModule ? obj : {
// ...
```
```
...
async function requestListener(req, res) {
if (!/^(\/_next\/static|\/_next\/image|\/favicon.ico)/.test(req.url)) _logger(req, res); // AND HERE
try {
if (handlersPromise) {
...
```
1. pnpm patch next
2. made changes from above description
3. pnpm patch-commit ...
after that when I'm trying to run storybook I get:
```
WARN The following packages are incompatible with Storybook 8.3.5 as they depend on different major versions of Storybook packages:
WARN - @storybook/addon-postcss@2.0.0
WARN
WARN
WARN Please consider updating your packages or contacting the maintainers for compatibility details.
WARN For more on Storybook 8 compatibility, see the linked GitHub issue:
WARN https://github.com/storybookjs/storybook/issues/26031
info => Starting manager..
info => Starting preview..
info Addon-docs: using MDX3
info => Using PostCSS preset with postcss@7.0.39
info => Using SWC as compiler
info => [@storybook/addon-styling-webpack] Applying custom Storybook webpack configuration styling.
info => Using default Webpack5 setup
<i> [webpack-dev-middleware] wait until bundle finished
10% building 0/3 entries 4/10 dependencies 0/3 modulesinfo Using tsconfig paths for react-docgen
ERROR in ./storybook-config-entry.js 11:531-785
Module not found: Error: Can't resolve 'D:/projects/starters/test/nextjs14-starter-eslint9/node_modules/.pnpm/@storybook+nextjs@8.3.5_esbuild@0.23.1_next@14.2.15_patch_hash=rqvkzn2fzygtfj7gz4svhchqyu_@ba_g6bqqg6stl5ajfeudztydpdwzu/node_modules/@storybook/nextjs/dist/preview.mjs' in 'D:\projects\starters\test\nextjs14-starter-eslint9'
ERROR in ./storybook-config-entry.js 32:0-35:2
Module not found: Error: Can't resolve 'D:/projects/starters/test/nextjs14-starter-eslint9/node_modules/.pnpm/@storybook+nextjs@8.3.5_esbuild@0.23.1_next@14.2.15_patch_hash=rqvkzn2fzygtfj7gz4svhchqyu_@ba_g6bqqg6stl5ajfeudztydpdwzu/node_modules/@storybook/nextjs/dist/preview.mjs' in 'D:\projects\starters\test\nextjs14-starter-eslint9'
preview compiled with 2 errors
=> Failed to build the preview
99% end closing watch compilationWARN Force closed preview build
SB_BUILDER-WEBPACK5_0003 (WebpackCompilationError): There were problems when compiling your code with Webpack.
Run Storybook with --debug-webpack for more information.
    at starter (.\node_modules\.pnpm\@storybook+builder-webpack5@8.3.5_esbuild@0.23.1_storybook@8.3.5_typescript@5.6.3\node_modules\@storybook\builder-webpack5\dist\index.js:1:8004)
    at starter.next (<anonymous>)
    at Module.start (.\node_modules\.pnpm\@storybook+builder-webpack5@8.3.5_esbuild@0.23.1_storybook@8.3.5_typescript@5.6.3\node_modules\@storybook\builder-webpack5\dist\index.js:1:9972)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
WARN Broken build, fix the error above.
WARN You may need to refresh the browser.
√ Would you like to help improve Storybook by sending anonymous crash reports? ... yes
```
When I revert the changes storybook works fine again.
`pnpm patch-remove next@14.2.15`
To reproduce it please use this code https://github.com/wmitrus/nextjs14-starter-eslint9
and do the patch using above commands.
### Reproduction link
https://github.com/wmitrus/nextjs14-starter-eslint9
### Reproduction steps
1. you can clone my project: https://github.com/wmitrus/nextjs14-starter-eslint9
2. pnpm patch next
3. Make these changes:
```js
// ...
const _trace = require("../../trace");
const _ispostpone = require("./router-utils/is-postpone");
const _logger = require('pino-http')(); // HERE
function _interop_require_default(obj) {
    return obj && obj.__esModule ? obj : {
// ...
```
```
...
async function requestListener(req, res) {
if (!/^(\/_next\/static|\/_next\/image|\/favicon.ico)/.test(req.url)) _logger(req, res); // AND HERE
try {
if (handlersPromise) {
...
```
4. Commit the patch: `pnpm patch-commit <<path generated by pnpm patch>>`
5. Run `pnpm storybook`
### System
```console
$ npx storybook@latest info
Need to install the following packages:
storybook@8.3.6
Ok to proceed? (y)
npm WARN deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm WARN deprecated rimraf@2.6.3: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported

Storybook Environment Info:

  System:
    OS: Windows 10 10.0.19045
    CPU: (12) x64 Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
  Binaries:
    Node: 18.20.4 - C:\Program Files\nodejs\node.EXE
    Yarn: 1.22.21 - ~\AppData\Roaming\npm\yarn.CMD
    npm: 10.2.4 - C:\Program Files\nodejs\npm.CMD
    pnpm: 9.4.0 - ~\AppData\Local\pnpm\pnpm.CMD <----- active
  Browsers:
    Edge: Chromium (127.0.2651.86)
  npmPackages:
    @storybook/addon-essentials: 8.3.5 => 8.3.5
    @storybook/addon-interactions: 8.3.5 => 8.3.5
    @storybook/addon-links: 8.3.5 => 8.3.5
    @storybook/addon-onboarding: 8.3.5 => 8.3.5
    @storybook/addon-postcss: ^2.0.0 => 2.0.0
    @storybook/addon-styling-webpack: ^1.0.0 => 1.0.0
    @storybook/blocks: 8.3.5 => 8.3.5
    @storybook/nextjs: 8.3.5 => 8.3.5
    @storybook/react: 8.3.5 => 8.3.5
    @storybook/test: 8.3.5 => 8.3.5
    eslint-plugin-storybook: ^0.9.0 => 0.9.0
    storybook: 8.3.5 => 8.3.5
```
### Additional context
_No response_ | bug,nextjs,pnpm | low | Critical |
2,600,912,981 | ollama | When server is bound to 0.0.0.0, it should also allow communication redirected by netsh to localhost (issue specific to WSL2) | I have an ollama server running within WSL2, on Win10. I want to access it from outside. WSL2 needs extra tricks for the network traffic to reach it.
When I set a netsh rule that takes the outside traffic (allowed by the Windows firewall) and redirects it to "WSL2-IP":11434:
`netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=64065 connectaddress=172.18.200.13 connectport=11434`
it all works when the ollama config has this:
`Environment="OLLAMA_HOST=0.0.0.0"`
I can connect to `http://"machine IP":64065`
and get ollama to respond!
But the problem is that the WSL2 IP is dynamic and will change, so ideally I would use this netsh rule:
`netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=64065 connectaddress=localhost connectport=11434`
and just keep ollama listening only on localhost,
but this somehow does not work and the communication is lost.
Interestingly enough, using localhost in netsh for the open-webui server works well!
Maybe I'm missing something, but I could not find another solution to this.
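To narrow down which address the server is actually reachable on, a small probe can help (a diagnostic sketch; `172.18.200.13` is just the example WSL2 IP from above):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run inside WSL2: with OLLAMA_HOST=0.0.0.0 both probes should succeed;
# with the default localhost-only bind, only the first one does.
for host in ("127.0.0.1", "172.18.200.13"):
    print(host, can_connect(host, 11434))
```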
| feature request,windows,wsl | low | Minor |
2,600,926,440 | opencv | approxPolyN is not found in Python on macOS, version '4.10.0' | ### System Information
OpenCV python version: 4.10.0
Operating System / Platform: MacOS Sonoma 14.4.1 (23E224)
Python version: 3.12.5
### Detailed description
When I try to call `cv2.approxPolyN`, the method is not found. `cv2.approxPolyDP` works fine.
### Steps to reproduce
1. import cv2
2. try to call approxPolyN
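Until the Python binding catches up, code can feature-detect and fall back to `approxPolyDP`. A sketch (the `approxPolyN` argument order follows the 4.10 C++ signature and is an assumption; the cv module is passed in as a parameter only so the dispatch is easy to test without OpenCV installed):

```python
def approx_polygon(cv, contour, nsides, epsilon_ratio=0.02):
    """Approximate `contour` with `nsides` vertices via cv.approxPolyN when
    the binding exposes it; otherwise fall back to Douglas-Peucker
    (cv.approxPolyDP) with an epsilon relative to the contour's arc length."""
    if hasattr(cv, "approxPolyN"):
        return cv.approxPolyN(contour, nsides)
    epsilon = epsilon_ratio * cv.arcLength(contour, True)
    return cv.approxPolyDP(contour, epsilon, True)

# Typical call site: approx = approx_polygon(cv2, contour, nsides=4)
```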
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug | low | Minor |
2,600,932,957 | ui | [bug]: Astro Dark Mode Guide Flicker | ### Describe the bug
Based on the current dark mode setup guide for Astro, `https://ui.shadcn.com/docs/dark-mode/astro`, there's a flicker of the dark mode on the page as seen in the video below.
The flicker happens and is noticeable when my theme is set to dark or system (that is dark mode).
The issue seems to be tied to the `src/components/ModeToggle.tsx` component and the dark mode state changes.
Basically, the theme is initially set to "theme-light" in `React.useState`,
```tsx
const [theme, setThemeState] = React.useState<
  "theme-light" | "dark" | "system"
>("theme-light");
```
I tried to console.log the theme in the second `React.useEffect`, where the dark mode class is set on the `documentElement`, and realised that initially it's set to "theme-light" (which comes from the initial theme state) and only later set to dark (as seen in the console in the video). This causes a flicker: the theme is first set to light mode by removing the dark class from the `documentElement`, and the class is only added back afterwards. Even though it's quite fast, it's still noticeable as a flicker on the page.
I suggest updating the documentation to initially set the theme to null (in TypeScript, we can accept null as one of the union types), then performing a null check and only updating the `documentElement` if the theme is not null. This fixes the dark-mode flicker. Updated code below:
```tsx
const [theme, setThemeState] = React.useState<
  "theme-light" | "dark" | "system" | null
>(null);

React.useEffect(() => {
  const isDarkMode = document.documentElement.classList.contains("dark");
  setThemeState(isDarkMode ? "dark" : "theme-light");
}, []);

React.useEffect(() => {
  if (theme !== null) {
    const isDark =
      theme === "dark" ||
      (theme === "system" &&
        window.matchMedia("(prefers-color-scheme: dark)").matches);
    document.documentElement.classList[isDark ? "add" : "remove"]("dark");
  }
}, [theme]);
```
[Screen recording 2024-10-20 21.59.36.webm](https://github.com/user-attachments/assets/3e4c56ae-03f9-4255-b97e-0e0ab27e6c42)
### Affected component/components
Dark Mode
### How to reproduce
1. Go to [shadcn astro installation](https://ui.shadcn.com/docs/installation/astro) to set up a new astro project with shadcn.
2. Setup dark mode with the [shadcn astro dark mode](https://ui.shadcn.com/docs/dark-mode/astro) documentation.
3. Change your theme to dark mode and refresh page
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/withastro-astro-awu25v?file=src%2Fcomponents%2FModeToggle.tsx
### Logs
```bash
// console logs
{
"theme": "theme-light"
}
{
"theme": "dark"
}
```
### System Info
```bash
-
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,600,935,382 | flutter | [local_auth] Non-biometric authentication shows the authentication dialog twice and may freeze the app | ### Steps to reproduce
Please see the following video: I have attached the code for the app as well.
**Note:** The issue happens only with Android 10.
When using fingerprint authentication, it works correctly. However, when I attempt to use other BiometricType options like iris or PIN, the following issues occur:
The bottom sheet displays twice:
The first time, when I draw the symbol, it hides the bottom sheet.
Then, it shows the bottom sheet again.
After pressing the "Use Pattern" button, the bottom sheet hides, but I cannot interact with any buttons on the screen afterward; it becomes unresponsive, as if frozen. Additionally, it does not show whether the status is success or error.
I tried the following:
1. tested the code on a real device.
2. tried using both the old and new versions of the local_auth package.
3. created a new project and added the USE_BIOMETRIC permission, as well as FlutterFragmentActivity.
4. ran flutter clean.
However, the bug is still not resolved. I tested the code on other Android versions, such as Android 14, and it works, but it does not work on Android 10.
### Expected results
When using biometric authentication options other than fingerprint (such as iris or PIN), the following should occur:
The bottom sheet should display only once when initiating the authentication process.
Upon drawing the symbol or selecting the "Use Pattern" button, the bottom sheet should hide without any UI issues.
After the bottom sheet is closed, the app should remain responsive, allowing interaction with all buttons.
The app should provide feedback indicating whether the authentication was successful or if there was an error.
### Actual results
When attempting to use biometric authentication with options other than fingerprint (such as iris or PIN), the following issues occur:
The bottom sheet displays twice:
The first time, it hides when I draw the symbol.
It then reappears immediately afterward.
After pressing the "Use Pattern" button, the bottom sheet hides, but the screen becomes unresponsive. I cannot interact with any buttons, and it appears as if the app is frozen.
### Code sample
<details open><summary>Code sample</summary>
```dart
// Copyright 2013 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:local_auth/local_auth.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
final LocalAuthentication auth = LocalAuthentication();
_SupportState _supportState = _SupportState.unknown;
bool? _canCheckBiometrics;
List<BiometricType>? _availableBiometrics;
String _authorized = 'Not Authorized';
bool _isAuthenticating = false;
@override
void initState() {
super.initState();
auth.isDeviceSupported().then(
(bool isSupported) => setState(() => _supportState = isSupported
? _SupportState.supported
: _SupportState.unsupported),
);
}
Future<void> _checkBiometrics() async {
late bool canCheckBiometrics;
try {
canCheckBiometrics = await auth.canCheckBiometrics;
} on PlatformException catch (e) {
canCheckBiometrics = false;
print(e);
}
if (!mounted) {
return;
}
setState(() {
_canCheckBiometrics = canCheckBiometrics;
});
}
Future<void> _getAvailableBiometrics() async {
late List<BiometricType> availableBiometrics;
try {
availableBiometrics = await auth.getAvailableBiometrics();
} on PlatformException catch (e) {
availableBiometrics = <BiometricType>[];
print(e);
}
if (!mounted) {
return;
}
setState(() {
_availableBiometrics = availableBiometrics;
});
}
Future<void> _authenticate() async {
bool authenticated = false;
try {
setState(() {
_isAuthenticating = true;
_authorized = 'Authenticating';
});
authenticated = await auth.authenticate(
localizedReason: 'Let OS determine authentication method',
options: const AuthenticationOptions(
stickyAuth: true,
),
);
setState(() {
_isAuthenticating = false;
});
} on PlatformException catch (e) {
print(e);
setState(() {
_isAuthenticating = false;
_authorized = 'Error - ${e.message}';
});
return;
}
if (!mounted) {
return;
}
setState(
() => _authorized = authenticated ? 'Authorized' : 'Not Authorized');
}
Future<void> _authenticateWithBiometrics() async {
bool authenticated = false;
try {
setState(() {
_isAuthenticating = true;
_authorized = 'Authenticating';
});
authenticated = await auth.authenticate(
localizedReason:
'Scan your fingerprint (or face or whatever) to authenticate',
options: const AuthenticationOptions(
stickyAuth: true,
biometricOnly: true,
),
);
setState(() {
_isAuthenticating = false;
_authorized = 'Authenticating';
});
} on PlatformException catch (e) {
print(e);
setState(() {
_isAuthenticating = false;
_authorized = 'Error - ${e.message}';
});
return;
}
if (!mounted) {
return;
}
final String message = authenticated ? 'Authorized' : 'Not Authorized';
setState(() {
_authorized = message;
});
}
Future<void> _cancelAuthentication() async {
await auth.stopAuthentication();
setState(() => _isAuthenticating = false);
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Plugin example app'),
),
body: ListView(
padding: const EdgeInsets.only(top: 30),
children: <Widget>[
Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
if (_supportState == _SupportState.unknown)
const CircularProgressIndicator()
else if (_supportState == _SupportState.supported)
const Text('This device is supported')
else
const Text('This device is not supported'),
const Divider(height: 100),
Text('Can check biometrics: $_canCheckBiometrics\n'),
ElevatedButton(
onPressed: _checkBiometrics,
child: const Text('Check biometrics'),
),
const Divider(height: 100),
Text('Available biometrics: $_availableBiometrics\n'),
ElevatedButton(
onPressed: _getAvailableBiometrics,
child: const Text('Get available biometrics'),
),
const Divider(height: 100),
Text('Current State: $_authorized\n'),
if (_isAuthenticating)
ElevatedButton(
onPressed: _cancelAuthentication,
child: const Row(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text('Cancel Authentication'),
Icon(Icons.cancel),
],
),
)
else
Column(
children: <Widget>[
ElevatedButton(
onPressed: _authenticate,
child: const Row(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text('Authenticate'),
Icon(Icons.perm_device_information),
],
),
),
ElevatedButton(
onPressed: _authenticateWithBiometrics,
child: Row(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text(_isAuthenticating
? 'Cancel'
: 'Authenticate: biometrics only'),
const Icon(Icons.fingerprint),
],
),
),
],
),
],
),
],
),
),
);
}
}
enum _SupportState {
unknown,
supported,
unsupported,
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/7c177b95-83b4-49bd-8a2c-527c78440ff3
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.5 23F79 darwin-arm64, locale en-YE)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] VS Code (version 1.94.2)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-android,p: local_auth,package,e: OS-version specific,has reproducible steps,P2,team-android,triaged-android,found in release: 3.24,found in release: 3.27 | low | Critical |
2,601,029,271 | deno | Inconsistent `DENO_INSTALL_ROOT` behaviour | Version: Deno 2.0.0
As stated in the documentation, `DENO_INSTALL_ROOT` should default to `$HOME/.deno/bin`. However, in `get_installer_root`, it defaults to simply `$HOME/.deno` (https://github.com/denoland/deno/blob/473e3069de4bf5877a6f1140aa0462e05f745536/cli/tools/installer.rs#L117-L137), and the `bin` is only added later (https://github.com/denoland/deno/blob/473e3069de4bf5877a6f1140aa0462e05f745536/cli/tools/installer.rs#L212-L218, https://github.com/denoland/deno/blob/473e3069de4bf5877a6f1140aa0462e05f745536/cli/tools/installer.rs#L430-L436). This causes some weird behaviour when `DENO_INSTALL_ROOT` is set (for example, if it is set to `$HOME/.deno/bin`, packages will be installed to `$HOME/.deno/bin/bin`). | bug,good first issue,install | low | Minor |
2,600,779,321 | react | Bug: exhaustive-deps rule doesn't report for useInsertionEffect | React version: 18
eslint-plugin-react-hooks version: 4.6.2
## Steps To Reproduce
Run ESLint with eslint-plugin-react-hooks on the following two code samples:
1. useEffect:
```jsx
import { useEffect } from "react";

const App = ({ id }: { id: string }) => {
  useEffect(() => {
    document.title = id;
  }, []);
};
```
2. useInsertionEffect:
```jsx
const App = ({ id }: { id: string }) => {
useInsertionEffect(() => {
document.title = id;
}, []);
};
```
## The current behavior
The plugin reports a warning for the first case and doesn't for the second.
## The expected behavior
For both cases the plugin should report a warning.
As a workaround, I currently have to update the config:
```
'react-hooks/exhaustive-deps': [
'warn',
{
additionalHooks: 'useInsertionEffect',
},
],
``` | Status: Unconfirmed | medium | Critical |
2,601,072,367 | three.js | wireframe = true; Forces the drawIndexed / drawIndirectIndexed path | ### Description
`wireframe = true` forces the drawIndexed / drawIndexedIndirect path even if the geometry does not have an index attribute.
This is because of this function in `renderers/common/Geometries.js`:
```
getIndex( renderObject ) {
const { geometry, material } = renderObject;
let index = geometry.index;
if ( material.wireframe === true ) {
const wireframes = this.wireframes;
let wireframeAttribute = wireframes.get( geometry );
if ( wireframeAttribute === undefined ) {
wireframeAttribute = getWireframeIndex( geometry );
wireframes.set( geometry, wireframeAttribute );
} else if ( wireframeAttribute.version !== getWireframeVersion( geometry ) ) {
this.attributes.delete( wireframeAttribute );
wireframeAttribute = getWireframeIndex( geometry );
wireframes.set( geometry, wireframeAttribute );
}
index = wireframeAttribute;
}
return index;
}
```
Because of drawIndirect, I experimented with a three.js example to find the cause of the error in my app. To do this, I added a console.log to each of the 4 possible draw branches (draw, drawIndirect, drawIndexed, drawIndexedIndirect) in WebGPUBackend.js to see which path is called.
The error has nothing to do with drawIndirect or drawIndexedIndirect. It is caused by wireframe = true: wireframe = true creates an index attribute, which forces the indexed draw path. This applies equally to draw and drawIndirect. It is nothing critical; now that I know this, I can compensate for it on the user side, and I doubt anyone else will stumble upon it any time soon. But it would be best if wireframe = true did not change the draw / drawIndirect path intended by the user into drawIndexed / drawIndexedIndirect, because that is problematic if you don't know how to compensate for it.
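For what it's worth, the user-side compensation I mention can be sketched like this. This is only a hedged sketch of a workaround, not a three.js API: `setWireframeSafely` is my own helper name, and it simply gates wireframe on the presence of an index attribute so wireframe mode cannot silently switch a non-indexed draw / drawIndirect call over to the indexed path.

```javascript
// Hedged user-side workaround sketch (not a three.js API): only enable
// wireframe when the geometry already has an index attribute, so that
// wireframe = true cannot silently change a non-indexed draw / drawIndirect
// call into drawIndexed / drawIndexedIndirect.
function setWireframeSafely(mesh) {
  const hasIndex = mesh.geometry.index !== null;
  mesh.material.wireframe = hasIndex; // skip wireframe for non-indexed geometry
  return hasIndex; // tells the caller whether wireframe was actually enabled
}
```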
### Reproduction steps
1. In WebGPUBackend.js, add an indicator to each draw path (console.log("draw"), console.log("drawIndexed"), console.log("drawIndirect"), console.log("drawIndexedIndirect")) to see which branch is chosen by the system.
2. Run my CodePen example locally, once with and once without material.wireframe = true, and check in the console which draw branch is used in the backend.
3. If a user uses drawIndirect for an unindexed geometry, the draw-path change forced by wireframe = true results in an error.
### Code
see live example
### Live example
https://codepen.io/Spiri0/pen/zYgzgJR
### Screenshots
_No response_
### Version
r169
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | WebGPU | low | Critical |
2,601,085,137 | react | [DevTools Bug] Could not find node with id "18006" in commit tree | ### Website or app
npx react-devtools
### Repro steps
Clicking the record button and then stopping the recording produces this error.
### How often does this bug happen?
Sometimes
### DevTools package (automated)
react-devtools-core
### DevTools version (automated)
6.0.1-c7c68ef842
### Error message (automated)
Could not find node with id "18006" in commit tree
### Error call stack (automated)
```text
at /Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1033341
at Map.forEach (<anonymous>)
at /Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1033290
at Er.getRankedChartData (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1033791)
at Nf (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1258069)
at di (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:44095)
at ts (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:64373)
at gs (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:75284)
at rc (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:120293)
at Ju (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:120221)
```
### Error component stack (automated)
```text
at Nf (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1257826)
at div (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at ts (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1157861)
at /Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1376165
at Ks (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1173575)
at /Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1176231
at div (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at Ys (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1176065)
at Zc (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1247244)
at Lc (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1239735)
at xt (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1079120)
at ca (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1106654)
at Ec (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1227871)
at Y_ (/Users/macgr/.npm/_npx/8ea6ac5c50576a3b/node_modules/react-devtools-core/dist/standalone.js:2:1382695)
```
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=Could not find node with id in commit tree in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,601,117,368 | tauri | [bug] When my mouse is focused on another monitor , we use primary_monitor , my primary_monitor can't show web-view window | ### Describe the bug
When my mouse is focused on another monitor and we use primary_monitor, the webview window does not show on my primary monitor; but when the mouse is focused on the primary monitor and we use primary_monitor, the webview window shows as expected.
Here is my code, please check.
By the way, whether my mouse is on another monitor or on the primary monitor, the information printed is always that of my primary monitor.
```rust
if let Some(primary_monitor) = app_handle.primary_monitor().unwrap() {
    let screen_size = primary_monitor.size();
    println!("screen_size: {:?}", screen_size);
    println!("primary_monitor: {:?}", primary_monitor);
    // screen_size: PhysicalSize { width: 3360, height: 2100 }
    // primary_monitor: Monitor { name: Some("Monitor #41032"), size: PhysicalSize { width: 3360, height: 2100 }, position: PhysicalPosition { x: 0, y: 0 }, scale_factor: 2.0 }
    let window_size = voice_bubble.inner_size().unwrap();
    let margin = 140.0;
    let x = screen_size.width as f64 - window_size.width as f64 - margin;
    let y = screen_size.height as f64 - window_size.height as f64 - margin;
    let primary_position = primary_monitor.position();
    voice_bubble
        .set_position(tauri::Position::Physical(tauri::PhysicalPosition {
            x: (primary_position.x as f64 + x) as i32,
            y: (primary_position.y as f64 + y) as i32,
        }))
        .unwrap();
}
voice_bubble.show().unwrap();
```
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.6.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.15.0
- pnpm: 7.33.7
- yarn: 1.22.19
- npm: 8.19.4
[-] Packages
- tauri 🦀: 2.0.5
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.46.2
- tao 🦀: 0.30.3
- @tauri-apps/api : 2.0.2 (outdated, latest: 2.0.3)
- @tauri-apps/cli : 2.0.2 (outdated, latest: 2.0.4)
[-] Plugins
- tauri-plugin-http 🦀: 2.0.2
- @tauri-apps/plugin-http : 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-single-instance 🦀: 2.0.1
- @tauri-apps/plugin-single-instance : not installed!
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
- tauri-plugin-fs 🦀: 2.0.2
- @tauri-apps/plugin-fs : 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-store 🦀: 2.1.0
- @tauri-apps/plugin-store : 2.0.0 (outdated, latest: 2.1.0)
- tauri-plugin-dialog 🦀: 2.0.2
- @tauri-apps/plugin-dialog : 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-sql 🦀: 2.0.1
- @tauri-apps/plugin-sql : 2.0.0
- tauri-plugin-notification 🦀: 2.0.1
- @tauri-apps/plugin-notification : 2.0.0
[-] App
- build-type: bundle
- CSP: default-src 'self';
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,601,142,143 | godot | Existence of Static Function Prevents Access of Unrelated Static Variable | ### Tested versions
Reproducible in 4.2stable, 4.3 stable
### System information
Windows 11 - Godot 4.2stable, Godot 4.3 stable
### Issue description
I truly do not know why this is the case, but in my project the very existence of the below code causes a crash when attempting to access an unrelated static variable. This function is not called before the crash occurs.
```gdscript
class_name CardHelpers
...
static func arrow_to_target_k(origin:CardStub, target:PilotButton)->void:
#TODO: Add color and/or texture as arguments
var arrow:TargetArrow = load("res://engine/common/ui_scenes/target_arrow.tscn").instantiate()
origin.add_child(arrow)
arrow.unpack(origin.global_position, target.global_position)
```
The existence of "arrow_to_target_k" causes a crash elsewhere in the project when attempting to access a static variable.
This code:
```gdscript
class_name CardHelpers
....
static func card_by_id(id:String, origin:String)->LogicalCard:
if origin == "pilot":
return PilotCardLib.lib[id]
```
causes the error below:
`Invalid access to property or key 'lib' on a base object of type 'GDScript'.`
### Steps to reproduce
Hit "new game" after running this branch: https://github.com/BluntBSE/ultra_mayor_2/tree/non_crashing_example
to see intended behavior
Hit "new game" after running this branch: https://github.com/BluntBSE/ultra_mayor_2/tree/crashing_example
to see the crash
### Minimal reproduction project (MRP)
Steps to reproduce not exactly known. Crashing/non crashing branches posted above.
_Bugsquad edit:_ Fix codeblock formatting. | topic:gdscript,needs testing | low | Critical |
2,601,182,328 | rust | Codegen depends on let-variable ordering for unclear reasons | https://godbolt.org/z/Wx1rvE6vq
```rust
use std::borrow::Cow;
pub struct Error {
message: Cow<'static, str>,
cause: Option<Box<[u8]>>,
}
#[no_mangle]
pub fn declear_before(v: u64) -> Error {
let s = format!("{v}");
let mut e = Error {
message: Cow::Borrowed(""),
cause: None,
};
e.message = s.into();
e
}
```
Declaring `s` before `Error` produces better codegen than declaring it after.
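For comparison, here is the contrasting ordering the report refers to but does not show: a sketch where `s` is created after the `Error` value. The `declare_after` name is mine, and the `Error` definition is repeated from the snippet above so the block is self-contained.

```rust
use std::borrow::Cow;

pub struct Error {
    message: Cow<'static, str>,
    cause: Option<Box<[u8]>>,
}

// Contrasting variant: `s` is created *after* `Error` is constructed.
// Per the report, this ordering yields worse codegen than `declear_before`.
#[no_mangle]
pub fn declare_after(v: u64) -> Error {
    let mut e = Error {
        message: Cow::Borrowed(""),
        cause: None,
    };
    let s = format!("{v}");
    e.message = s.into();
    e
}
```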
| A-LLVM,A-codegen,T-compiler,I-heavy,A-panic,C-optimization | low | Critical |
2,601,228,023 | ant-design | Footer area styles flicker during SSR | The styles from `import 'rc-footer/assets/index.css';` only take effect on the client. | 📝 Documentation,Inactive,unconfirmed | low | Minor |
2,601,231,267 | yt-dlp | [NPR] Tiny Desk Concerts, some urls not working | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
US
### Provide a description that is worded well enough to be understood
Regarding urls found on the page https://www.npr.org/series/tiny-desk-concerts/ ...
This url does not work with yt-dlp ( https://www.npr.org/2024/05/20/1250056328/tiny-desk-concert-bob-james ) nor do similar urls earlier in 2024, going by the date in the url.
As an example, this url does work ( https://www.npr.org/2024/05/24/g-s1-276/tiny-desk-concert-nelly-furtado ) as well as all the urls newer than this on the page all the way to the present.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.npr.org/2024/05/20/1250056328/tiny-desk-concert-bob-james']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.07 from yt-dlp/yt-dlp [1a176d874] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-6.8.0-45-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2024.06.02, mutagen-1.45.1, pyxattr-0.7.2, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.2.1, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.07 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.07 from yt-dlp/yt-dlp)
[Npr] Extracting URL: https://www.npr.org/2024/05/20/1250056328/tiny-desk-concert-bob-james
[Npr] 1250056328: Downloading JSON metadata
ERROR: can only concatenate list (not "dict") to list
Traceback (most recent call last):
File "/home/username/.local/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
File "/home/username/.local/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
File "/home/username/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 741, in extract
ie_result = self._real_extract(url)
File "/home/username/.local/lib/python3.10/site-packages/yt_dlp/extractor/npr.py", line 77, in _real_extract
for media in story.get('audio', []) + story.get('multimedia', []):
TypeError: can only concatenate list (not "dict") to list
```
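The traceback suggests NPR's API now returns a dict where the extractor expects a list for `audio` / `multimedia`. A hedged sketch of a defensive normalization follows; the `ensure_list` helper is my own illustration, not yt-dlp code, and it assumes a lone dict represents a single media entry.

```python
def ensure_list(value):
    """Normalize a story field that may be missing, a dict, or a list.

    Assumption (not verified against NPR's API): a bare dict is a single
    media entry, so it is wrapped rather than expanded.
    """
    if value is None:
        return []
    if isinstance(value, dict):
        return [value]
    return list(value)

# The failing line could then iterate safely:
# for media in ensure_list(story.get('audio')) + ensure_list(story.get('multimedia')):
#     ...
```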
| site-bug,patch-available | low | Critical |
2,601,258,084 | pytorch | Enable CUDA 12.6 CI/CD , Disable CUDA 12.1 | ### 🚀 The feature, motivation and pitch
Filing issue for enabling CUDA 12.6.2 due to an observed performance gap in PyTorch's Eager mode for plain matrix multiplications.
Confirmed that upgrading PyTorch to CTK 12.6.2 with CUBLAS 12.6.3.3 will boost the performance.
Issue can be used to track related PRs.
Reference - https://github.com/pytorch/pytorch/pull/132202
Docker Images & Windows AMI Update
- [x] https://github.com/pytorch/pytorch/pull/138417
- [x] https://github.com/pytorch/builder/pull/2020
- [x] https://github.com/pytorch/pytorch/pull/138562
- [x] https://github.com/pytorch/pytorch/pull/138563
- [x] https://github.com/pytorch/pytorch/pull/139988
- [x] https://github.com/pytorch/test-infra/pull/5880
- [x] https://github.com/pytorch/test-infra/pull/5924
CD Update
- [x] https://github.com/pytorch/pytorch/pull/138899
- [x] https://github.com/pytorch/builder/pull/2023
- [x] https://github.com/pytorch/builder/pull/2025
- [x] https://github.com/pytorch/pytorch/pull/141433
- [x] https://github.com/pytorch/pytorch/pull/141976
- [x] https://github.com/pytorch/pytorch/pull/127925
- [x] https://github.com/pytorch/builder/pull/2032
- [x] https://github.com/pytorch/builder/pull/2023
- [x] https://github.com/pytorch/test-infra/pull/6002
- [x] https://github.com/pytorch/test-infra/pull/5955
- [x] https://github.com/pytorch/test-infra/pull/5922
- [x] https://github.com/pytorch/pytorch/pull/141805
- [x] https://github.com/pytorch/test-infra/pull/6062
CI Updates
- [x] https://github.com/pytorch/pytorch/pull/141976
- [x] https://github.com/pytorch/pytorch/pull/142335
- [x] https://github.com/pytorch/pytorch/pull/141110
- [x] https://github.com/pytorch/pytorch/pull/141365
- [ ] https://github.com/pytorch/pytorch/pull/140793
CUDA 12.1 Deprecation
- [x] https://github.com/pytorch/test-infra/pull/5904
- [x] https://github.com/pytorch/pytorch/pull/142856
- [x] https://github.com/pytorch/pytorch/pull/143076
- [x] https://github.com/pytorch/pytorch/pull/141271
- [ ] Update Inductor benchmarks for CUDA 12.4
- [ ] https://github.com/pytorch/pytorch/pull/145696
cc @ptrblck @msaroufim @syed-ahmed @nWEIdia @atalman @malfet
### Alternatives
_No response_
### Additional context
_No response_ | module: cuda,triaged | low | Major |
2,601,295,678 | go | proposal: cmd/go: GOTOOLCHAIN=mod to use exact version of toolchain directive | ### Proposal Details
Currently, the local Go runtime determines the actual Go version to use by checking the `toolchain` directive in the `go.mod` file when `GOTOOLCHAIN=auto` is set. If the local Go runtime is newer than the version specified in the `toolchain` directive, it defaults to the local Go version. This behavior aligns with the [Go Toolchain](https://go.dev/doc/toolchain) documentation.
However, in some cases—such as in production environments—it may be necessary to enforce a specific Go runtime version, even though Go maintains strong forward and backward compatibility. For example, if the `toolchain` directive specifies `1.22.8` but the Go runtime in the container is `1.23.2`, the actual version used would be `1.23.2`. This could lead to unexpected issues if the container runtime is updated (even `toolchain` is never changed), which should sometimes be blocked to prevent unexpected factors.
Setting `GOTOOLCHAIN=<version>` is a valid solution, but in practice these `GOTOOLCHAIN` environment variables end up scattered inconsistently across a project, which goes against the DRY principle. I believe it would be more convenient to have an option that **always** respects the version specified in the `toolchain` directive as the actual runtime version. This would allow developers to control the Go version for an application simply by modifying the version in the `toolchain` directive.
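For illustration (module path and versions are hypothetical), with a `go.mod` such as:

```
module example.com/app

go 1.22.8

toolchain go1.22.8
```

and a local toolchain of go1.23.2, `GOTOOLCHAIN=auto go version` reports go1.23.2 today, while the proposed `GOTOOLCHAIN=mod` would always report go1.22.8, exactly as pinned in the `toolchain` directive.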
As a potential solution, how about introducing an option like `GOTOOLCHAIN=mod` to achieve this behavior? | Proposal | low | Major |
2,601,308,220 | awesome-mac | Anything LLM | ### 🪩 Provide a link to the proposed addition
https://anythingllm.com
### 😳 Explain why it should be added
Free local language model application. Run ChatGPT-style models on your local machine without needing a subscription.
### 📖 Additional context
_No response_
### 🧨 Issue Checklist
- [X] I have checked for other similar issues
- [X] I have explained why this change is important
- [X] I have added necessary documentation (if appropriate) | addition | low | Minor |
2,601,325,582 | stable-diffusion-webui | [Bug]: Unable to generate ANY image on Intel Arc A770. OneDNN errcode -6,CL_OUT_OF_HOST_MEMORY, RuntimeError: could not create a primitive | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The Arc A770 suddenly cannot generate any images. This issue also happens in ComfyUI.
### Steps to reproduce the problem
1. Launch WebUI
2. Press generate
3. error
### What should have happened?
WebUI should generate the image
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
{
"Platform": "Linux-6.8.0-47-generic-x86_64-with-glibc2.39",
"Python": "3.10.12",
"Version": "v1.10.1-20-g5865da28",
"Commit": "5865da28d1800eea27b3a14979e788b17e010afe",
"Git status": "On branch dev\nYour branch is up to date with 'origin/dev'.\n\nChanges not staged for commit:\n (use \"git add/rm <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n\tdeleted: models/Stable-diffusion/Put Stable Diffusion checkpoints here.txt\n\tmodified: modules/launch_utils.py\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\tbin/\n\tdream_artist/\n\text-off/\n\textension_off/\n\textensions-2/\n\textensions.old/\n\tlib/\n\tlib64\n\tlog.txt\n\tmodules/ui_components.pyi\n\toutput.txt\n\tpickle_inspector.py\n\tpickle_scan.py\n\tpyvenv.cfg\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")",
"Script path": "/home/shouryo/Software/stable-diffusion-webui",
"Data path": "/home/shouryo/Software/stable-diffusion-webui",
"Extensions dir": "/home/shouryo/Software/stable-diffusion-webui/extensions",
"Checksum": "d36e856720f8183454c641e6713bbd87e0539ef3899c4370d813d83aa756bc7f",
"Commandline": [
"webui.py",
"--use-ipex",
"--no-half-vae"
],
"Torch env info": "'NoneType' object has no attribute 'splitlines'",
"Exceptions": [
{
"exception": "could not create a primitive",
"traceback": [
[
"/home/shouryo/Software/stable-diffusion-webui/modules/call_queue.py, line 74, f",
"res = list(func(*args, **kwargs))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/call_queue.py, line 53, f",
"res = func(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/call_queue.py, line 37, f",
"res = func(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/processing.py, line 847, process_images",
"res = process_images_inner(p)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/processing.py, line 988, process_images_inner",
"samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/processing.py, line 1346, sample",
"samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py, line 230, sample",
"samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_common.py, line 272, launch_sampling",
"return func()"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py, line 230, <lambda>",
"samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py, line 115, decorate_context",
"return func(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py, line 594, sample_dpmpp_2m",
"denoised = model(x, sigmas[i] * s_in, **extra_args)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py, line 249, forward",
"x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py, line 112, forward",
"eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py, line 138, get_eps",
"return self.inner_model.apply_model(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py, line 34, __call__",
"return self.__sub_func(self.__orig_func, *args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_unet.py, line 50, apply_model",
"result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py, line 36, __call__",
"return self.__orig_func(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py, line 858, apply_model",
"x_recon = self.model(x_noisy, t, **cond)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py, line 1335, forward",
"out = self.diffusion_model(x, t, context=cc)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/modules/sd_unet.py, line 91, UNetModel_forward",
"return original_forward(self, x, timesteps, context, *args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py, line 797, forward",
"h = module(h, emb, context)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py, line 82, forward",
"x = layer(x, emb)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1532, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py, line 1541, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py, line 249, forward",
"return checkpoint("
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py, line 123, checkpoint",
"return func(*inputs)"
],
[
"/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py, line 272, _forward",
"h = h + emb_out"
]
]
}
],
"CPU": {
"model": "x86_64",
"count logical": 24,
"count physical": 12
},
"RAM": {
"total": "126GB",
"used": "4GB",
"free": "101GB",
"active": "17GB",
"inactive": "6GB",
"buffers": "1GB",
"cached": "19GB",
"shared": "386MB"
},
"Extensions": [
{
"name": "a1111-sd-webui-tagcomplete",
"path": "/home/shouryo/Software/stable-diffusion-webui/extensions/a1111-sd-webui-tagcomplete",
"commit": "49ec047af8ba73889f65a65585ef16b4a26b416b",
"branch": "main",
"remote": "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git"
},
{
"name": "sd-civitai-browser-plus",
"path": "/home/shouryo/Software/stable-diffusion-webui/extensions/sd-civitai-browser-plus",
"commit": "0b97a482ad4e3c0fec797a797414a0f0eeef08fa",
"branch": "main",
"remote": "https://github.com/BlafKing/sd-civitai-browser-plus.git"
},
{
"name": "sd_dreambooth_extension",
"path": "/home/shouryo/Software/stable-diffusion-webui/extensions/sd_dreambooth_extension",
"commit": "1b3257b46bb03c6de3bcdfa079773dc040884fbd",
"branch": "main",
"remote": "https://github.com/d8ahazard/sd_dreambooth_extension.git"
}
],
"Inactive extensions": [],
"Environment": {
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000,
"img_max_size_mp": 200,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": true,
"notification_volume": 100,
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/txt2img-images",
"outdir_img2img_samples": "outputs/img2img-images",
"outdir_extras_samples": "outputs/extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/txt2img-grids",
"outdir_img2img_grids": "outputs/img2img-grids",
"outdir_save": "log/images",
"outdir_init_images": "outputs/init-images",
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"upscaler_for_img2img": null,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"auto_launch_browser": "Local",
"enable_console_prompts": false,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120,
"sd_model_checkpoint": "<omitted>",
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"enable_emphasis": true,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 2,
"upcast_attn": true,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"sdxl_crop_top": 0,
"sdxl_crop_left": 0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "Automatic",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision": false,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_extra_noise": 0.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0.0,
"token_merging_ratio": 0.0,
"token_merging_ratio_img2img": 0.0,
"token_merging_ratio_hr": 0.0,
"pad_cond_uncond": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1.0,
"extra_networks_card_width": 0,
"extra_networks_card_height": 0,
"extra_networks_card_text_scale": 1.0,
"extra_networks_card_show_desc": true,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"keyedit_move": true,
"disable_token_counters": false,
"return_grid": true,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250,
"gallery_height": "",
"compact_prompt_box": false,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint"
],
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000,
"live_preview_fast_interrupt": false,
"js_live_preview_in_modal_lightbox": false,
"hide_samplers": [],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_tmax": 0.0,
"s_noise": 1.0,
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"disabled_extensions": [],
"disable_all_extensions": "none",
"restore_config_state_file": "",
"sd_checkpoint_hash": "c9361308343e125339585c482042d88698c8552334daadfe56f09bd624b9f0a8",
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_show_all": true,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"lora_functional": false,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
],
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"tac_tagFile": "danbooru.csv",
"tac_active": true,
"tac_activeIn.txt2img": true,
"tac_activeIn.img2img": true,
"tac_activeIn.negativePrompts": true,
"tac_activeIn.thirdParty": true,
"tac_activeIn.modelList": "",
"tac_activeIn.modelListMode": "Blacklist",
"tac_slidingPopup": true,
"tac_maxResults": 5,
"tac_showAllResults": false,
"tac_resultStepLength": 100,
"tac_delayTime": 100,
"tac_useWildcards": true,
"tac_sortWildcardResults": true,
"tac_useEmbeddings": true,
"tac_includeEmbeddingsInNormalResults": false,
"tac_useHypernetworks": true,
"tac_useLoras": true,
"tac_useLycos": true,
"tac_showWikiLinks": false,
"tac_showExtraNetworkPreviews": true,
"tac_modelSortOrder": "Name",
"tac_replaceUnderscores": true,
"tac_escapeParentheses": true,
"tac_appendComma": true,
"tac_appendSpace": true,
"tac_alwaysSpaceAtEnd": true,
"tac_modelKeywordCompletion": "Never",
"tac_modelKeywordLocation": "Start of prompt",
"tac_wildcardCompletionMode": "To next folder level",
"tac_alias.searchByAlias": true,
"tac_alias.onlyShowAlias": false,
"tac_translation.translationFile": "None",
"tac_translation.oldFormat": false,
"tac_translation.searchByTranslation": true,
"tac_translation.liveTranslation": false,
"tac_extra.extraFile": "extra-quality-tags.csv",
"tac_extra.addMode": "Insert before",
"tac_chantFile": "demo-chants.json",
"tac_keymap": "{\n \"MoveUp\": \"ArrowUp\",\n \"MoveDown\": \"ArrowDown\",\n \"JumpUp\": \"PageUp\",\n \"JumpDown\": \"PageDown\",\n \"JumpToStart\": \"Home\",\n \"JumpToEnd\": \"End\",\n \"ChooseSelected\": \"Enter\",\n \"ChooseFirstOrSelected\": \"Tab\",\n \"Close\": \"Escape\"\n}",
"tac_colormap": "{\n \"danbooru\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"indianred\", \"firebrick\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"orange\", \"darkorange\"]\n },\n \"e621\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"gold\", \"goldenrod\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"tomato\", \"darksalmon\"],\n \"6\": [\"red\", \"maroon\"],\n \"7\": [\"whitesmoke\", \"black\"],\n \"8\": [\"seagreen\", \"darkseagreen\"]\n }\n}",
"tac_refreshTempFiles": "Refresh TAC temp files",
"ad_max_models": 2,
"ad_extra_models_dir": "",
"ad_save_previews": false,
"ad_save_images_before": false,
"ad_only_seleted_scripts": true,
"ad_script_names": "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards,lora_block_weight",
"ad_bbox_sortby": "None",
"ad_same_seed_for_each_tap": false,
"civsfz_number_of_tabs": 3,
"civsfz_number_of_cards": 12,
"civsfz_card_size_width": 8,
"civsfz_card_size_height": 12,
"civsfz_hover_zoom_magnification": 1.5,
"civsfz_treat_x_as_nsfw": true,
"civsfz_figcaption_background_color": "#798a9f",
"civsfz_default_shadow_color": "#798a9f",
"civsfz_alreadyhave_shadow_color": "#7fffd4",
"control_net_detectedmap_dir": "detected_maps",
"control_net_models_path": "",
"control_net_modules_path": "",
"control_net_unit_count": 3,
"control_net_model_cache_size": 1,
"control_net_inpaint_blur_sigma": 7,
"control_net_no_high_res_fix": false,
"control_net_no_detectmap": false,
"control_net_detectmap_autosaving": false,
"control_net_allow_script_control": false,
"control_net_sync_field_args": true,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"controlnet_disable_control_type": false,
"controlnet_disable_openpose_edit": false,
"controlnet_ignore_noninpaint_mask": false,
"use_aria2": true,
"disable_dns": false,
"show_log": false,
"split_aria2": 64,
"aria2_flags": "",
"unpack_zip": false,
"save_api_info": false,
"auto_save_all_img": false,
"custom_api_key": "",
"hide_early_access": true,
"use_LORA": true,
"dot_subfolders": true,
"use_local_html": false,
"local_path_in_html": false,
"page_header": false,
"video_playback": true,
"individual_meta_btn": true,
"model_desc_to_json": true,
"image_location": "",
"sub_image_location": true,
"save_to_custom": false,
"custom_civitai_proxy": "",
"cabundle_path_proxy": "",
"disable_sll_proxy": false,
"insert_sub_1": false,
"insert_sub_2": false,
"insert_sub_3": false,
"insert_sub_4": false,
"insert_sub_5": false,
"insert_sub_6": false,
"insert_sub_7": false,
"insert_sub_8": false,
"insert_sub_9": false,
"insert_sub_10": false,
"insert_sub_11": false,
"insert_sub_12": false,
"insert_sub_13": false,
"insert_sub_14": false,
"Checkpoint_subfolder": "None",
"LORA_LoCon_subfolder": "None",
"TextualInversion_subfolder": "None",
"Poses_subfolder": "None",
"Controlnet_subfolder": "None",
"Hypernetwork_subfolder": "None",
"MotionModule_subfolder": "None",
"SWINIR_upscale_subfolder": "None",
"REALESRGAN_upscale_subfolder": "None",
"GFPGAN_upscale_subfolder": "None",
"BSRGAN_upscale_subfolder": "None",
"ESRGAN_upscale_subfolder": "None",
"VAE_subfolder": "None",
"AestheticGradient_subfolder": "None",
"Wildcards_subfolder": "None",
"Workflows_subfolder": "None",
"Other_subfolder": "None",
"tac_wildcardExclusionList": "",
"tac_skipWildcardRefresh": false,
"tac_useLoraPrefixForLycos": true,
"tac_useStyleVars": false,
"tac_frequencySort": true,
"tac_frequencyFunction": "Logarithmic (weak)",
"tac_frequencyMinCount": 3,
"tac_frequencyMaxAge": 30,
"tac_frequencyRecommendCap": 10,
"tac_frequencyIncludeAlias": false,
"civitai_not_found_print": true,
"civitai_send_to_browser": false,
"auto_backcompat": true,
"use_downcasted_alpha_bar": false,
"refiner_switch_by_sample_steps": false,
"extra_networks_card_description_is_html": false,
"extra_networks_tree_view_style": "Dirs",
"extra_networks_tree_view_default_enabled": true,
"extra_networks_tree_view_default_width": 180.0,
"lora_not_found_warning_console": false,
"lora_not_found_gradio_warning": false,
"pad_cond_uncond_v0": false,
"fp8_storage": "Disable",
"cache_fp16_weight": false,
"sd_noise_schedule": "Default",
"emphasis": "Original",
"enable_prompt_comments": true,
"auto_vae_precision_bfloat16": true,
"overlay_inpaint": true,
"sd_webui_modal_lightbox_icon_opacity": 1,
"sd_webui_modal_lightbox_toolbar_opacity": 0.9,
"open_dir_button_choice": "Subdirectory",
"include_styles_into_token_counters": true,
"interrupt_after_current": true,
"enable_reloading_ui_scripts": false,
"prioritized_callbacks_app_started": [],
"prioritized_callbacks_model_loaded": [],
"prioritized_callbacks_ui_tabs": [],
"prioritized_callbacks_ui_settings": [],
"prioritized_callbacks_infotext_pasted": [],
"prioritized_callbacks_script_unloaded": [],
"prioritized_callbacks_before_ui": [],
"prioritized_callbacks_list_optimizers": [],
"prioritized_callbacks_before_token_counter": [],
"prioritized_callbacks_script_before_process": [],
"prioritized_callbacks_script_process": [],
"prioritized_callbacks_script_post_sample": [],
"prioritized_callbacks_script_on_mask_blend": [],
"prioritized_callbacks_script_postprocess_maskoverlay": [],
"enable_upscale_progressbar": true,
"postprocessing_disable_in_extras": [],
"dat_enabled_models": [
"DAT x2",
"DAT x3",
"DAT x4"
],
"DAT_tile": 192,
"DAT_tile_overlap": 8,
"set_scale_by_when_changing_upscaler": false,
"canvas_hotkey_shrink_brush": "Q",
"canvas_hotkey_grow_brush": "W",
"lora_bundled_ti_to_infotext": true,
"s_min_uncond_all": false,
"skip_early_cond": 0,
"sdxl_clip_l_skip": false,
"prevent_screen_sleep_during_generation": true,
"profiling_enable": false,
"profiling_activities": [
"CPU"
],
"profiling_record_shapes": true,
"profiling_profile_memory": true,
"profiling_with_stack": true,
"profiling_filename": "trace.json",
"save_write_log_csv": true,
"beta_dist_alpha": 0.6,
"beta_dist_beta": 0.6,
"sd3_enable_t5": false
},
"Startup": {
"total": 133.00689578056335,
"records": {
"launcher": 0.03735041618347168,
"import torch": 31.34335422515869,
"import gradio": 8.4069082736969,
"setup paths": 11.82895278930664,
"import ldm": 0.09105134010314941,
"import sgm": 8.106231689453125e-06,
"initialize shared": 62.97803854942322,
"other imports": 6.41315770149231,
"opts onchange": 0.00038123130798339844,
"setup SD model": 6.651878356933594e-05,
"setup codeformer": 0.005723714828491211,
"setup gfpgan": 0.12494778633117676,
"set samplers": 3.123283386230469e-05,
"list extensions": 0.0015416145324707031,
"restore config state file": 1.6450881958007812e-05,
"list SD models": 0.2739830017089844,
"list localizations": 0.0002911090850830078,
"load scripts/custom_code.py": 0.004487752914428711,
"load scripts/img2imgalt.py": 0.005002021789550781,
"load scripts/loopback.py": 0.004810333251953125,
"load scripts/outpainting_mk_2.py": 0.0004413127899169922,
"load scripts/poor_mans_outpainting.py": 0.004701375961303711,
"load scripts/postprocessing_codeformer.py": 0.0048291683197021484,
"load scripts/postprocessing_gfpgan.py": 0.0003445148468017578,
"load scripts/postprocessing_upscale.py": 0.004990339279174805,
"load scripts/prompt_matrix.py": 0.004759073257446289,
"load scripts/prompts_from_file.py": 0.005009651184082031,
"load scripts/sd_upscale.py": 0.00489044189453125,
"load scripts/xyz_grid.py": 0.01428675651550293,
"load scripts/ldsr_model.py": 0.13598418235778809,
"load scripts/lora_script.py": 0.3503880500793457,
"load scripts/scunet_model.py": 0.03040766716003418,
"load scripts/swinir_model.py": 0.029491424560546875,
"load scripts/hotkey_config.py": 0.00028705596923828125,
"load scripts/extra_options_section.py": 0.009782552719116211,
"load scripts/hypertile_script.py": 0.05941581726074219,
"load scripts/postprocessing_autosized_crop.py": 0.0007984638214111328,
"load scripts/postprocessing_caption.py": 0.004887819290161133,
"load scripts/postprocessing_create_flipped_copies.py": 0.00508880615234375,
"load scripts/postprocessing_focal_crop.py": 0.010241508483886719,
"load scripts/postprocessing_split_oversized.py": 0.0047228336334228516,
"load scripts/soft_inpainting.py": 0.010071277618408203,
"load scripts/model_keyword_support.py": 0.02492523193359375,
"load scripts/shared_paths.py": 0.0003371238708496094,
"load scripts/tag_autocomplete_helper.py": 0.11105465888977051,
"load scripts/tag_frequency_db.py": 0.0001049041748046875,
"load scripts/civitai_api.py": 1.20656156539917,
"load scripts/civitai_download.py": 1.0030999183654785,
"load scripts/civitai_file_manage.py": 0.0008542537689208984,
"load scripts/civitai_global.py": 0.00012874603271484375,
"load scripts/civitai_gui.py": 0.11642813682556152,
"load scripts/__init__.py": 0.0142822265625,
"load scripts/api.py": 1.51882004737854,
"load scripts/main.py": 0.22132420539855957,
"load scripts/comments.py": 0.04971051216125488,
"load scripts/refiner.py": 0.0024454593658447266,
"load scripts/sampler.py": 0.009913444519042969,
"load scripts/seed.py": 0.0052030086517333984,
"load scripts": 4.9953649044036865,
"load upscalers": 0.049864768981933594,
"refresh VAE": 0.0015914440155029297,
"refresh textual inversion templates": 5.4836273193359375e-05,
"scripts list_optimizers": 0.00038552284240722656,
"scripts list_unets": 2.9087066650390625e-05,
"reload hypernetworks": 0.0005502700805664062,
"initialize extra networks": 0.3288452625274658,
"scripts before_ui_callback": 0.045049428939819336,
"create ui": 5.700387239456177,
"gradio launch": 0.3683772087097168,
"add APIs": 0.007654666900634766,
"app_started_callback/lora_script.py": 0.00020813941955566406,
"app_started_callback/tag_autocomplete_helper.py": 0.0027456283569335938,
"app_started_callback": 0.002957582473754883
}
},
"Packages": [
"GitPython==3.1.43",
"Jinja2==3.1.4",
"Markdown==3.7",
"MarkupSafe==2.1.5",
"Pillow==9.5.0",
"PySocks==1.7.1",
"PyWavelets==1.7.0",
"PyYAML==6.0.2",
"Send2Trash==1.8.3",
"Werkzeug==3.0.3",
"ZipUnicode==1.1.1",
"absl-py==2.1.0",
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohappyeyeballs==2.3.6",
"aiohttp==3.10.3",
"aiosignal==1.3.1",
"altair==5.4.0",
"annotated-types==0.7.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==24.2.0",
"beautifulsoup4==4.12.3",
"bitsandbytes==0.43.3",
"blendmodes==2022",
"cachetools==5.4.0",
"certifi==2024.7.4",
"chardet==5.2.0",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"contourpy==1.2.1",
"cycler==0.12.1",
"dadaptation==3.2",
"deprecation==2.1.0",
"diffusers==0.30.0",
"discord-webhook==1.3.0",
"diskcache==5.6.3",
"dpcpp-cpp-rt==2024.2.1",
"einops==0.4.1",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fake-useragent==1.5.1",
"fastapi==0.94.0",
"ffmpy==0.4.0",
"filelock==3.15.4",
"filterpy==1.4.5",
"fonttools==4.53.1",
"frozenlist==1.4.1",
"fsspec==2024.6.1",
"ftfy==6.2.3",
"gitdb==4.0.11",
"google-auth-oauthlib==1.0.0",
"google-auth==2.33.0",
"gradio==3.41.2",
"gradio_client==0.5.0",
"grpcio==1.65.4",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.24.5",
"idna==3.7",
"imageio==2.35.0",
"impi-devel==2021.13.1",
"impi-rt==2021.13.1",
"importlib_metadata==8.2.0",
"importlib_resources==6.4.2",
"inflection==0.5.1",
"intel-cmplr-lib-rt==2024.2.1",
"intel-cmplr-lib-ur==2024.2.1",
"intel-cmplr-lic-rt==2024.2.1",
"intel-opencl-rt==2024.2.1",
"intel-openmp==2024.2.1",
"intel-sycl-rt==2024.2.1",
"intel_extension_for_pytorch==2.3.110+xpu",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.12.1",
"jsonschema==4.23.0",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"lightning-utilities==0.11.6",
"llvmlite==0.43.0",
"matplotlib==3.9.2",
"mkl-dpcpp==2024.2.1",
"mkl==2024.2.1",
"mpmath==1.3.0",
"multidict==6.0.5",
"narwhals==1.4.2",
"networkx==3.3",
"numba==0.60.0",
"numpy==1.26.4",
"oauthlib==3.2.2",
"omegaconf==2.2.3",
"oneccl-bind-pt==2.3.100+xpu",
"oneccl-devel==2021.13.1",
"onemkl-sycl-blas==2024.2.1",
"onemkl-sycl-datafitting==2024.2.1",
"onemkl-sycl-dft==2024.2.1",
"onemkl-sycl-lapack==2024.2.1",
"onemkl-sycl-rng==2024.2.1",
"onemkl-sycl-sparse==2024.2.1",
"onemkl-sycl-stats==2024.2.1",
"onemkl-sycl-vm==2024.2.1",
"open-clip-torch==2.20.0",
"opencv-python==4.10.0.84",
"orjson==3.10.7",
"packaging==24.1",
"pandas==2.2.2",
"piexif==1.1.3",
"pillow-avif-plugin==1.4.3",
"pip==22.0.2",
"protobuf==3.20.0",
"psutil==5.9.5",
"pyasn1==0.6.0",
"pyasn1_modules==0.4.0",
"pydantic==1.10.17",
"pydantic_core==2.20.1",
"pydub==0.25.1",
"pyparsing==3.1.2",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytorch_optimizer==2.12.0",
"pytz==2024.1",
"referencing==0.35.1",
"regex==2024.7.24",
"requests-oauthlib==2.0.0",
"requests==2.32.3",
"resize-right==0.0.2",
"rpds-py==0.20.0",
"rsa==4.9",
"ruamel.yaml.clib==0.2.8",
"ruamel.yaml==0.18.6",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.14.0",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"soupsieve==2.6",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"starlette==0.26.1",
"sympy==1.13.2",
"tbb==2021.13.1",
"tensorboard-data-server==0.7.2",
"tensorboard==2.13.0",
"tifffile==2024.8.10",
"timm==1.0.8",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"torch==2.3.1+cxx11.abi",
"torchaudio==2.3.1+cxx11.abi",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.1",
"torchsde==0.2.6",
"torchvision==0.18.1+cxx11.abi",
"tqdm==4.66.5",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing_extensions==4.12.2",
"tzdata==2024.1",
"urllib3==2.2.2",
"uvicorn==0.30.6",
"wcwidth==0.2.13",
"websockets==11.0.3",
"wheel==0.44.0",
"yarl==1.9.4",
"zipp==3.20.0"
]
}
```

### Console logs
```Shell
:: initializing oneAPI environment ...
start_sd_ipex.sh: BASH_VERSION = 5.2.21(1)-release
args: Using "$@" for setvars.sh arguments:
:: ccl -- latest
:: compiler -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dpl -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: oneAPI environment initialized ::
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
[2024-10-21 00:46:48,621][DEBUG][git.cmd] - Popen(['git', 'version'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=None, shell=False, universal_newlines=False)
[2024-10-21 00:46:48,624][DEBUG][git.cmd] - Popen(['git', 'version'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=None, shell=False, universal_newlines=False)
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
CivitAI Browser+: Aria2 RPC restarted
CivitAI Browser+: Aria2 RPC restarted
[2024-10-21 00:48:02,420][DEBUG][api.py] - API flag not enabled, skipping API layer. Please enable with --api
Loading weights [c936130834] from <omitted>
Creating model from config: /home/shouryo/Software/stable-diffusion-webui/configs/v1-inference.yaml
/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 133.0s (import torch: 31.3s, import gradio: 8.4s, setup paths: 11.8s, initialize shared: 63.0s, other imports: 6.4s, setup gfpgan: 0.1s, list SD models: 0.3s, load scripts: 5.0s, initialize extra networks: 0.3s, create ui: 5.7s, gradio launch: 0.4s).
Advanced elements visible: False
[2024-10-21 00:48:11,906][DEBUG][git.cmd] - Popen(['git', 'remote', 'get-url', '--all', 'origin'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=None, shell=False, universal_newlines=False)
[2024-10-21 00:48:11,909][DEBUG][git.cmd] - Popen(['git', 'cat-file', '--batch-check'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=<valid stream>, shell=False, universal_newlines=False)
[2024-10-21 00:48:11,911][DEBUG][git.cmd] - Popen(['git', 'cat-file', '--batch'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=<valid stream>, shell=False, universal_newlines=False)
[2024-10-21 00:48:11,914][DEBUG][git.cmd] - Popen(['git', 'remote', 'get-url', '--all', 'origin'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=None, shell=False, universal_newlines=False)
[2024-10-21 00:48:11,916][DEBUG][git.cmd] - Popen(['git', 'cat-file', '--batch-check'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=<valid stream>, shell=False, universal_newlines=False)
[2024-10-21 00:48:11,918][DEBUG][git.cmd] - Popen(['git', 'cat-file', '--batch'], cwd=/home/shouryo/Software/stable-diffusion-webui, stdin=<valid stream>, shell=False, universal_newlines=False)
[2024-10-21 00:48:12,434][INFO][modules.shared_state] - Starting job task(7zb94xpevv0u6ww)
Applying attention optimization: InvokeAI... done.
Model loaded in 50.7s (load weights from disk: 4.9s, create model: 0.8s, apply weights to model: 37.7s, apply dtype to VAE: 0.5s, move model to device: 0.1s, load textual inversion embeddings: 1.6s, calculate empty prompt: 4.9s).
0%| | 0/20 [00:00<?, ?it/s]onednn_verbose,info,oneDNN v3.5.3 (commit abbc771ddb7735e22309b75151c87ea9d48b620a)
onednn_verbose,info,cpu,runtime:threadpool,nthr:12
onednn_verbose,info,cpu,isa:Intel AVX2
onednn_verbose,info,gpu,runtime:DPC++
onednn_verbose,info,gpu,engine,0,backend:Level Zero,name:Intel(R) Arc(TM) A770 Graphics,driver_version:1.3.29735,binary_kernels:enabled
onednn_verbose,info,gpu,engine,1,backend:Level Zero,name:Intel(R) Arc(TM) A770 Graphics,driver_version:1.3.29735,binary_kernels:enabled
onednn_verbose,info,graph,backend,0:dnnl_backend
onednn_verbose,info,experimental features are enabled
onednn_verbose,info,use batch_normalization stats one pass is enabled
onednn_verbose,primitive,info,template:operation,engine,primitive,implementation,prop_kind,memory_descriptors,attributes,auxiliary,problem_desc,exec_time
onednn_verbose,graph,info,template:operation,engine,partition_id,partition_kind,op_names,data_formats,logical_tensors,fpmath_mode,backend,exec_time
onednn_verbose,common,error,ocl,Error during the build of OpenCL program. Build log:
onednn_verbose,primitive,error,ocl,errcode -6,CL_OUT_OF_HOST_MEMORY,src/gpu/intel/ocl/ocl_gpu_engine.cpp:261
0%| | 0/20 [00:03<?, ?it/s]
*** Error completing request
*** Arguments: ('task(7zb94xpevv0u6ww)', <gradio.routes.Request object at 0x7d0f304b9ba0>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/shouryo/Software/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/home/shouryo/Software/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/home/shouryo/Software/stable-diffusion-webui/modules/processing.py", line 847, in process_images
res = process_images_inner(p)
File "/home/shouryo/Software/stable-diffusion-webui/modules/processing.py", line 988, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "/home/shouryo/Software/stable-diffusion-webui/modules/processing.py", line 1346, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 230, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
return func()
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 230, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 249, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 82, in forward
x = layer(x, emb)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
return checkpoint(
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 123, in checkpoint
return func(*inputs)
File "/home/shouryo/Software/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 272, in _forward
h = h + emb_out
RuntimeError: could not create a primitive
```
### Additional information
_No response_ | bug-report | low | Critical |
2,601,331,881 | ui | [bug]: Next.js init doesn't work | ### Describe the bug
Creating a new project with Next.js is broken.
I am just following the instructions here: https://ui.shadcn.com/docs/installation/next
```
✔ What is your project named? … client
✔ Creating a new Next.js project.
✔ Writing components.json.
✔ Checking registry.
⠋ Updating tailwind.config.js
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
ENOENT: no such file or directory, open '/.../client/tailwind.config.js'
```
### Affected component/components
project
### How to reproduce
1. open terminal
2. run `npx shadcn@latest init -d`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macOS 14.1 / npm 10.9.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,601,447,935 | godot | light_only in canvas_item doesn't take LIGHT_COLOR into consideration | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable - macOS 15.0.1 - Vulkan (Mobile) - integrated Apple M2 Max - Apple M2 Max (12 Threads)
### Issue description
https://github.com/user-attachments/assets/51c0f1fd-3d50-4120-a3b4-305dfe6ec401
The light_only render mode only hides the sprite outside the PointLight2D area. It does not hide the sprite inside the light's node where no light shines, which is what I would expect. See the video attachment.
If I misunderstand the render mode then please let me know and I'll close this ticket.
```
shader_type canvas_item;
render_mode light_only;
```
With the light function it's not possible to discard the sprite's RGB colors based on LIGHT_COLOR, because we cannot write to the COLOR variable in the light function. I would expect the render pipeline to deal with that.
### Steps to reproduce
1. setup a PointLight2D with a light texture that does not completely cover the 2D node
2. move a sprite with a CanvasItemMaterial set to light only, or a shader with render mode light_only, over the PointLight2D
### Minimal reproduction project (MRP)
It happens with a new project. You can use this texture as light source

The Godot SVG icon can be used as the node which needs the light_only render mode or CanvasItemMaterial | bug,topic:rendering,topic:2d | low | Minor |
2,601,482,034 | PowerToys | [FancyZones] Snapping unsnapped windows using Win+left/right snaps in the opposite direction | ### Microsoft PowerToys version
0.85.0
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
FancyZones
### Steps to reproduce
1. Enable fancy zones with some layout.
2. Make sure that Override Windows Snap is set.
3. Open a window not fitting any zone.
4. Use Win key + left or Win key + right to snap to a zone.
### ✔️ Expected Behavior
When using Win key + left the window should snap to the leftmost zone, and when using Win key + right the window should snap to the rightmost zone.
### ❌ Actual Behavior
When using Win key + left the window snaps to the rightmost zone, and when using Win key + right the window snaps to the leftmost zone.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,601,616,268 | ui | [feat]: set swipe direction for toast depending on screen size | ### Feature description
Currently, the Toast component is set to use swipeDirection="right" on all screen sizes. This leads to inconsistent behaviour:
Large screen (sm+) - Everything OK:
- Toast is displayed in bottom right corner
- Toast exits to the right
- Toast can be swiped to the right
Small screen - Inconsistent
- Toast is displayed in top
- _Toast exits to the right_
- _Toast can be swiped to the right_
I think in this case the toast should instead:
- Exit to the top
- Be swiped to the top to close
That would more closely mimic how e.g. notifications on mobile devices are handled (at least on iOS you swipe them up to dismiss them).
While exiting to the top can easily be set with a few CSS class adjustments, the swipeDirection is set on the Provider and cannot be changed this easily (as far as I can see).
### Affected component/components
Toast
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,601,708,235 | langchain | StructuredTool.from_function does not support functools.partial | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def test_structured_tool_from_function_partial() -> None:
"""Test that structured tools can be created from a partial function."""
def func(bar: str, baz: str=None) -> str:
"""Docstring
Args:
bar: str
baz: str
"""
return bar + baz
structured_tool = StructuredTool.from_function(
name="tool",
description="A tool",
func=partial(func, baz="foo"),
)
assert structured_tool.invoke({"bar": "bar"}) == "barfoo"
```
### Error Message and Stack Trace (if applicable)
```
tests/unit_tests/test_tools.py:484 (test_structured_tool_from_function_partial)
def test_structured_tool_from_function_partial() -> None:
"""Test that structured tools can be created from a partial function."""
def func(bar: str, baz: str=None) -> str:
"""Docstring
Args:
bar: str
baz: str
"""
return bar + baz
> structured_tool = StructuredTool.from_function(
name="tool",
description="A tool",
func=partial(func, baz="foo"),
)
test_tools.py:497:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../langchain_core/tools/structured.py:181: in from_function
args_schema = create_schema_from_function(
../../langchain_core/tools/base.py:249: in create_schema_from_function
validated = validate_arguments(func, config=_SchemaConfig) # type: ignore
../../../../venv/lib/python3.11/site-packages/pydantic/deprecated/decorator.py:64: in validate_arguments
return validate(func)
../../../../venv/lib/python3.11/site-packages/pydantic/deprecated/decorator.py:51: in validate
vd = ValidatedFunction(_func, config)
../../../../venv/lib/python3.11/site-packages/pydantic/deprecated/decorator.py:94: in __init__
type_hints = _typing_extra.get_type_hints(function, include_extras=True)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = functools.partial(<function test_structured_tool_from_function_partial.<locals>.func at 0x119bd04a0>, baz='foo')
globalns = {}, localns = {}, include_extras = True
def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
"""Return type hints for an object.
This is often the same as obj.__annotations__, but it handles
forward references encoded as string literals and recursively replaces all
'Annotated[T, ...]' with 'T' (unless 'include_extras=True').
The argument may be a module, class, method, or function. The annotations
are returned as a dictionary. For classes, annotations include also
inherited members.
TypeError is raised if the argument is not of a type that can contain
annotations, and an empty dictionary is returned if no annotations are
present.
BEWARE -- the behavior of globalns and localns is counterintuitive
(unless you are familiar with how eval() and exec() work). The
search order is locals first, then globals.
- If no dict arguments are passed, an attempt is made to use the
globals from obj (or the respective module's globals for classes),
and these are also used as the locals. If the object does not appear
to have globals, an empty dictionary is used. For classes, the search
order is globals first then locals.
- If one dict argument is passed, it is used for both globals and
locals.
- If two dict arguments are passed, they specify globals and
locals, respectively.
"""
if getattr(obj, '__no_type_check__', None):
return {}
# Classes require a special treatment.
if isinstance(obj, type):
hints = {}
for base in reversed(obj.__mro__):
if globalns is None:
base_globals = getattr(sys.modules.get(base.__module__, None), '__dict__', {})
else:
base_globals = globalns
ann = base.__dict__.get('__annotations__', {})
if isinstance(ann, types.GetSetDescriptorType):
ann = {}
base_locals = dict(vars(base)) if localns is None else localns
if localns is None and globalns is None:
# This is surprising, but required. Before Python 3.10,
# get_type_hints only evaluated the globalns of
# a class. To maintain backwards compatibility, we reverse
# the globalns and localns order so that eval() looks into
# *base_globals* first rather than *base_locals*.
# This only affects ForwardRefs.
base_globals, base_locals = base_locals, base_globals
for name, value in ann.items():
if value is None:
value = type(None)
if isinstance(value, str):
value = ForwardRef(value, is_argument=False, is_class=True)
value = _eval_type(value, base_globals, base_locals)
hints[name] = value
return hints if include_extras else {k: _strip_annotations(t) for k, t in hints.items()}
if globalns is None:
if isinstance(obj, types.ModuleType):
globalns = obj.__dict__
else:
nsobj = obj
# Find globalns for the unwrapped object.
while hasattr(nsobj, '__wrapped__'):
nsobj = nsobj.__wrapped__
globalns = getattr(nsobj, '__globals__', {})
if localns is None:
localns = globalns
elif localns is None:
localns = globalns
hints = getattr(obj, '__annotations__', None)
if hints is None:
# Return empty annotations for something that _could_ have them.
if isinstance(obj, _allowed_types):
return {}
else:
> raise TypeError('{!r} is not a module, class, method, '
'or function.'.format(obj))
E TypeError: functools.partial(<function test_structured_tool_from_function_partial.<locals>.func at 0x119bd04a0>, baz='foo') is not a module, class, method, or function.
/Users/xxx/.pyenv/versions/3.11.4/lib/python3.11/typing.py:2359: TypeError
```
### Description
In the StructuredTool.from_function method, the value assigned to source_function must be chosen based on the type of the func object, so that a functools.partial is unwrapped before its type hints are inspected.
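The failure can be demonstrated with the standard library alone, independent of LangChain: `typing.get_type_hints` (which pydantic's deprecated `validate_arguments` calls under the hood) rejects a `functools.partial` object, while the underlying function reachable via `partial.func` still exposes its annotations. A minimal sketch:

```python
import typing
from functools import partial

def func(bar: str, baz: str = "") -> str:
    return bar + baz

p = partial(func, baz="foo")

# get_type_hints raises TypeError for the partial object itself...
try:
    typing.get_type_hints(p)
    raised = False
except TypeError:
    raised = True

# ...but the underlying function still carries usable annotations,
# and the partial itself remains callable with the pre-bound keyword.
hints = typing.get_type_hints(p.func)
assert raised and hints["bar"] is str and p("bar") == "barfoo"
```

This suggests one possible direction for a fix: have `from_function` detect `functools.partial`, build the schema from `partial.func`, and drop the parameters already bound in `partial.keywords` (a sketch of the idea, not the actual implementation).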
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.4 (main, Sep 6 2023, 14:18:32) [Clang 13.1.6 (clang-1316.0.21.2.5)]
Package Information
-------------------
> langchain_core: 0.3.11
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.128
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.6
> aiosqlite: Installed. No version info available.
> aleph-alpha-client: Installed. No version info available.
> anthropic: 0.3.11
> arxiv: Installed. No version info available.
> assemblyai: Installed. No version info available.
> async-timeout: Installed. No version info available.
> atlassian-python-api: Installed. No version info available.
> azure-ai-documentintelligence: Installed. No version info available.
> azure-ai-formrecognizer: Installed. No version info available.
> azure-ai-textanalytics: Installed. No version info available.
> azure-cognitiveservices-speech: Installed. No version info available.
> azure-core: Installed. No version info available.
> azure-cosmos: Installed. No version info available.
> azure-identity: Installed. No version info available.
> azure-search-documents: Installed. No version info available.
> beautifulsoup4: 4.12.3
> bibtexparser: Installed. No version info available.
> cassio: 0.1.6
> chardet: Installed. No version info available.
> clarifai: Installed. No version info available.
> cloudpickle: Installed. No version info available.
> cohere: Installed. No version info available.
> couchbase: Installed. No version info available.
> dashvector: Installed. No version info available.
> databricks-vectorsearch: Installed. No version info available.
> dataclasses-json: 0.6.4
> datasets: Installed. No version info available.
> dgml-utils: Installed. No version info available.
> docarray[hnswlib]: Installed. No version info available.
> elasticsearch: Installed. No version info available.
> esprima: Installed. No version info available.
> faiss-cpu: Installed. No version info available.
> feedparser: Installed. No version info available.
> fireworks-ai: 0.9.0
> friendli-client: Installed. No version info available.
> geopandas: Installed. No version info available.
> gitpython: Installed. No version info available.
> google-cloud-documentai: Installed. No version info available.
> gql: Installed. No version info available.
> gradientai: Installed. No version info available.
> hdbcli: Installed. No version info available.
> hologres-vector: Installed. No version info available.
> html2text: Installed. No version info available.
> httpx: 0.27.2
> httpx-sse: 0.4.0
> huggingface_hub: 0.25.2
> javelin-sdk: Installed. No version info available.
> jinja2: 3.1.4
> jq: Installed. No version info available.
> jsonpatch: 1.33
> jsonschema: 4.23.0
> lxml: Installed. No version info available.
> manifest-ml: Installed. No version info available.
> markdownify: Installed. No version info available.
> motor: Installed. No version info available.
> msal: Installed. No version info available.
> mwparserfromhell: Installed. No version info available.
> mwxml: Installed. No version info available.
> newspaper3k: Installed. No version info available.
> nlpcloud: Installed. No version info available.
> numexpr: Installed. No version info available.
> numpy: 1.26.4
> nvidia-riva-client: Installed. No version info available.
> oci: Installed. No version info available.
> openai: 1.13.3
> openapi-pydantic: Installed. No version info available.
> openlm: Installed. No version info available.
> oracle-ads: Installed. No version info available.
> orjson: 3.10.7
> packaging: 24.1
> pandas: 2.0.3
> pdfminer-six: Installed. No version info available.
> pgvector: Installed. No version info available.
> praw: Installed. No version info available.
> premai: Installed. No version info available.
> psychicapi: Installed. No version info available.
> py-trello: Installed. No version info available.
> pydantic: 2.9.2
> pyjwt: Installed. No version info available.
> pymupdf: Installed. No version info available.
> pypdf: Installed. No version info available.
> pypdfium2: Installed. No version info available.
> pyspark: Installed. No version info available.
> PyYAML: 6.0.2
> qdrant-client: Installed. No version info available.
> rank-bm25: Installed. No version info available.
> rapidfuzz: Installed. No version info available.
> rapidocr-onnxruntime: Installed. No version info available.
> rdflib: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: Installed. No version info available.
> rspace_client: Installed. No version info available.
> scikit-learn: Installed. No version info available.
> sentence-transformers: Installed. No version info available.
> SQLAlchemy: 2.0.35
> sqlite-vss: Installed. No version info available.
> streamlit: Installed. No version info available.
> sympy: 1.13.3
> telethon: Installed. No version info available.
> tenacity: 8.5.0
> tidb-vector: Installed. No version info available.
> tiktoken: 0.5.2
> timescale-vector: Installed. No version info available.
> torch: Installed. No version info available.
> tqdm: 4.66.5
> transformers: Installed. No version info available.
> tree-sitter: Installed. No version info available.
> tree-sitter-languages: Installed. No version info available.
> typer: Installed. No version info available.
> typing-extensions: 4.12.2
> upstash-redis: Installed. No version info available.
> vdms: 0.0.20
> xata: Installed. No version info available.
> xmltodict: Installed. No version info available.
| 🤖:bug | low | Critical |
2,601,738,790 | ollama | embedding generation failed. wsarecv: An existing connection was forcibly closed by the remote host. | ### What is the issue?
When I submit a single fragment to the embedding model, it responds normally, but when I submit multiple fragments, an exception occurs.
I encountered this error on different Windows systems as well.
This issue occurs in both versions 0.3.14 and 0.4.0-rc3. However, I also tested versions 0.3.13 and 0.3.10, and they work perfectly.
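For reference, a short script against the HTTP API reproduces the difference between one fragment and several. This is a hedged sketch: the model name is a placeholder, the default port is assumed, and the actual request is left commented out since it needs a running server.

```python
import json
from urllib import request

def build_embed_payload(fragments, model="my-embed-model"):
    # /api/embed accepts a single string or a list of strings under "input"
    return {"model": model, "input": fragments}

def embed(fragments, host="http://127.0.0.1:11434"):
    data = json.dumps(build_embed_payload(fragments)).encode("utf-8")
    req = request.Request(host + "/api/embed", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# embed(["a single fragment"])             # succeeds
# embed(["fragment one", "fragment two"])  # fails on 0.3.14 / 0.4.0-rc3
```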
```
[GIN] 2024/10/21 - 16:00:29 | 200 | 722.8624ms | 192.168.7.100 | POST "/api/embed"
ggml.c:13343: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
ggml.c:13343: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
time=2024-10-21T16:00:36.434+08:00 level=ERROR source=routes.go:434 msg="embedding generation failed" error="do embedding request: Post \"http://127.0.0.1:64075/embedding\": read tcp 127.0.0.1:64078->127.0.0.1:64075: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2024/10/21 - 16:00:36 | 500 | 6.5660285s | 192.168.7.100 | POST "/api/embed"
time=2024-10-21T16:01:00.723+08:00 level=INFO source=llama-server.go:72 msg="system memory" total="15.9 GiB" free="10.3 GiB" free_swap="8.8 GiB"
time=2024-10-21T16:01:00.726+08:00 level=INFO source=memory.go:346 msg="offload to cpu" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[10.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="687.0 MiB" memory.required.partial="0 B" memory.required.kv="12.0 MiB" memory.required.allocations="[687.0 MiB]" memory.weights.total="589.2 MiB" memory.weights.repeating="548.0 MiB" memory.weights.nonrepeating="41.3 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
time=2024-10-21T16:01:00.730+08:00 level=INFO source=llama-server.go:355 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\Administrator\\.ollama\\models\\blobs\\sha256-9e8e196fa3f73c32fb1b37503d5c28b166f4a96db54addd89927c47e4e40cf68 --ctx-size 2048 --batch-size 512 --embedding --threads 4 --no-mmap --parallel 1 --port 64090"
time=2024-10-21T16:01:00.782+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-10-21T16:01:00.791+08:00 level=INFO source=llama-server.go:534 msg="waiting for llama runner to start responding"
time=2024-10-21T16:01:00.792+08:00 level=INFO source=llama-server.go:568 msg="waiting for server to become available" status="llm server error"
time=2024-10-21T16:01:00.812+08:00 level=INFO source=runner.go:856 msg="starting go runner"
time=2024-10-21T16:01:00.829+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:64090"
llama_model_loader: loaded meta data with 23 key-value pairs and 389 tensors from C:\Users\Administrator\.ollama\models\blobs\sha256-9e8e196fa3f73c32fb1b37503d5c28b166f4a96db54addd89927c47e4e40cf68
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: bert.block_count u32 = 24
llama_model_loader: - kv 3: bert.context_length u32 = 512
llama_model_loader: - kv 4: bert.embedding_length u32 = 1024
llama_model_loader: - kv 5: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 6: bert.attention.head_count u32 = 16
llama_model_loader: - kv 7: bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 8: general.file_type u32 = 1
llama_model_loader: - kv 9: bert.attention.causal bool = false
llama_model_loader: - kv 10: bert.pooling_type u32 = 1
llama_model_loader: - kv 11: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv 14: tokenizer.ggml.model str = bert
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,21128] = ["[PAD]", "[unused1]", "[unused2]", ...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,21128] = [-1000.000000, -1000.000000, -1000.0...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,21128] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 19: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 22: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 243 tensors
llama_model_loader: - type f16: 146 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 5
llm_load_vocab: token to piece cache size = 0.0769 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 21128
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 512
llm_load_print_meta: n_embd = 1024
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 1.0e-012
llm_load_print_meta: f_norm_rms_eps = 0.0e+000
llm_load_print_meta: f_clamp_kqv = 0.0e+000
llm_load_print_meta: f_max_alibi_bias = 0.0e+000
llm_load_print_meta: f_logit_scale = 0.0e+000
llm_load_print_meta: n_ff = 4096
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 512
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 335M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 324.47 M
llm_load_print_meta: model size = 619.50 MiB (16.02 BPW)
llm_load_print_meta: general.name = model
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
llm_load_print_meta: EOG token = 102 '[SEP]'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.16 MiB
llm_load_tensors: CPU buffer size = 619.50 MiB
time=2024-10-21T16:01:01.048+08:00 level=INFO source=llama-server.go:568 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 192.00 MiB
llama_new_context_with_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: CPU compute buffer size = 26.00 MiB
llama_new_context_with_model: graph nodes = 851
llama_new_context_with_model: graph splits = 1
time=2024-10-21T16:01:01.299+08:00 level=INFO source=llama-server.go:573 msg="llama runner started in 0.51 seconds"
llama_model_loader: loaded meta data with 23 key-value pairs and 389 tensors from C:\Users\Administrator\.ollama\models\blobs\sha256-9e8e196fa3f73c32fb1b37503d5c28b166f4a96db54addd89927c47e4e40cf68
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: bert.block_count u32 = 24
llama_model_loader: - kv 3: bert.context_length u32 = 512
llama_model_loader: - kv 4: bert.embedding_length u32 = 1024
llama_model_loader: - kv 5: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 6: bert.attention.head_count u32 = 16
llama_model_loader: - kv 7: bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 8: general.file_type u32 = 1
llama_model_loader: - kv 9: bert.attention.causal bool = false
llama_model_loader: - kv 10: bert.pooling_type u32 = 1
llama_model_loader: - kv 11: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv 14: tokenizer.ggml.model str = bert
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,21128] = ["[PAD]", "[unused1]", "[unused2]", ...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,21128] = [-1000.000000, -1000.000000, -1000.0...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,21128] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 19: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 22: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 243 tensors
llama_model_loader: - type f16: 146 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 5
llm_load_vocab: token to piece cache size = 0.0769 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 21128
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 324.47 M
llm_load_print_meta: model size = 619.50 MiB (16.02 BPW)
llm_load_print_meta: general.name = model
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
llm_load_print_meta: EOG token = 102 '[SEP]'
llm_load_print_meta: max token length = 48
llama_model_load: vocab only - skipping tensors
[GIN] 2024/10/21 - 16:01:01 | 200 | 701.8355ms | 192.168.7.100 | POST "/api/embed"
ggml.c:13343: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
ggml.c:13343: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
time=2024-10-21T16:01:08.177+08:00 level=ERROR source=routes.go:434 msg="embedding generation failed" error="do embedding request: Post \"http://127.0.0.1:64090/embedding\": read tcp 127.0.0.1:64093->127.0.0.1:64090: wsarecv: An existing connection was forcibly closed by the remote host."
```
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.14~0.4.6 | bug | medium | Critical |
2,601,744,712 | ollama | VPTQ Model Quantization Support in Ollama | Hi all,
We recently developed a fully open-source quantization method called VPTQ (Vector Post-Training Quantization) [https://github.com/microsoft/VPTQ](https://github.com/microsoft/VPTQ) which enables fast quantization of large language models (LLMs) down to 1-4 bits. The community has also helped release several models using this method [https://huggingface.co/VPTQ-community](https://huggingface.co/VPTQ-community). I am personally very interested in integrating VPTQ into ollama/llama.cpp.
One of the key advantages of VPTQ is that the dequantization method is very straightforward, relying only on a simple lookup table.
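To make that concrete, here is a minimal sketch of what lookup-table dequantization looks like. The codebook values and layout below are invented purely for illustration and do not reflect VPTQ's actual on-disk format:

```python
# Invented 3-entry codebook of 2-dim centroid vectors (illustrative only).
codebook = [
    [-1.0, -0.5],
    [0.0, 0.25],
    [0.75, 1.0],
]

def dequantize(indices, codebook):
    """Reconstruct weights by a plain table lookup -- no arithmetic needed."""
    out = []
    for idx in indices:
        out.extend(codebook[idx])
    return out

# Each stored index expands to its centroid vector.
weights = dequantize([2, 0, 1], codebook)
```

Because dequantization is a single indexed read per weight group, it maps naturally onto the kind of kernels llama.cpp already uses for its existing quantization types.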
I would like to ask for guidance on how best to support this quantization method within Ollama, even if it's on my own fork. Specifically, which approach should I take?
1. Define a series of new models (e.g., vptq-llama3.1) using existing data types (int32, fp16), and hide the model dequantization within a separate dequant op.
2. Define a new quantization data type (e.g., a custom lookup table data structure)?
I’d love to hear your thoughts or any suggestions on how to proceed!
Thank you!
Yang | feature request | low | Major |
2,601,767,250 | rust | Tracking Issue for enforcing test doc comments | Please use the [t-compiler zulip thread](https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Better.20document.20the.20intent.20of.20a.20test.3F) for specific discussions or open new issues, but please feel free to post consensus updates or progress updates on this tracking issue.
Zulip thread: https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Better.20document.20the.20intent.20of.20a.20test.3F
### Context
Way too often, tests get added without any included context in the form of comments. Important information that is often missing includes:
- What the test is trying to check.
- Test context:
- Related issues (issue number for regression tests),
- Previous implementation PRs
- RFC / Rust Reference / external docs links (syscalls, platform APIs, DWARF standards, ISA docs, you name it)
- When suitable, how/why the regression occurred / was reachable, how it was fixed, and how does the test exercise the implementation such that it will catch the same regression.
A good litmus test for test doc comments is: **5 years later, will I be able to determine what the test is trying to check without having to jump through a bunch of github issues/PRs via `git blame`?**
So, it might be valuable to add an automated check to enforce that every new test is checked in with a suitable test doc comment.
### Implementation steps
- [ ] 1. Briefly survey existing test suites and see what kind of doc comment style are suitable.
- Notably, which test suites should / should not have doc comments enforced, e.g. `rustdoc-gui` and other special test suites may need special handling or exclusions.
- Figure out a brief mechanism such that *new* tests receive the enforcement but old tests are permitted in an allowlist, and how to bless that allowlist.
- [ ] 2. File an MCP describing the previous rationale for enforcing test doc comments, as well as the concrete proposal for how it will be implemented and the proposed UX. This *may* need to be encoded as T-compiler PR/review policy so may need a full team FCP.
- [ ] 3. Implement the test doc comment enforcement mechanism, including sufficient self-test coverage.
- [ ] 4. Update rustc-dev-guide ui test walkthrough + basic test description + best practices.
- [ ] 5. Brief T-compiler/T-rustdoc teams about the new test doc comment enforcement.
### Discussions
- https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Better.20document.20the.20intent.20of.20a.20test.3F
### Unresolved questions / concerns
None yet, please update issue description as suitable. | A-testsuite,E-hard,T-compiler,T-bootstrap,C-tracking-issue,A-compiletest | low | Minor |
2,601,772,557 | ant-design | Button component with loading enabled fails to correctly block onClick events on low-end machines | ### Reproduction link
[](https://codesandbox.io/s/jia-zai-zhong-zhuang-tai-antd-5-21-4-forked-fxf6yy?file=/demo.tsx)
### Steps to reproduce
1. Open the Performance panel in the browser DevTools
2. Set CPU throttling to 4x slowdown or higher
3. Click the button rapidly
### What is expected?
The onClick event should fire only once
### What is actually happening?
The onClick event fires multiple times

| Environment | Info |
| --- | --- |
| antd | 5.21.4 |
| React | 18 |
| System | MacOS |
| Browser | Edge 130 |
---
Also, manually setting the disabled attribute avoids this problem
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,improvement | low | Minor |
2,601,793,485 | ui | [bug]: Dialog is taking over sweetalert element | ### Describe the bug
I have a problem getting Dialog and SweetAlert to work together. When you open a dialog and then open a SweetAlert inside it, the SweetAlert ends up behind the dialog, even though it should appear in front.
I already changed the z-index and it didn't work.
### Affected component/components
Dialog
### How to reproduce
1. open dialog
2. Open Sweetalert inside dialog
### Codesandbox/StackBlitz link
https://codesandbox.io/p/sandbox/7mds8v
### Logs
_No response_
### System Info
```bash
Fedora 40
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,601,794,132 | Python | Add set matrix zeroes problem in array data structure. | ### Feature description
Problem: Given an m x n integer matrix `matrix`, if an element is 0, set its entire row and column to 0's.
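For reference, one straightforward O(m*n) approach (a sketch for illustration, not necessarily the implementation to be contributed):

```python
def set_matrix_zeroes(matrix):
    """Set the entire row and column to 0 for every 0 element, in place."""
    zero_rows = set()
    zero_cols = set()
    # First pass: record which rows and columns contain a zero.
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value == 0:
                zero_rows.add(i)
                zero_cols.add(j)
    # Second pass: zero out the recorded rows and columns.
    for i, row in enumerate(matrix):
        for j in range(len(row)):
            if i in zero_rows or j in zero_cols:
                row[j] = 0
    return matrix

m = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
set_matrix_zeroes(m)
```

The two-pass structure avoids the classic pitfall of zeroing while scanning, which would cascade zeros across the whole matrix.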
I want to add this under Hacktoberfest; please assign it to me | enhancement | medium | Minor |
2,601,815,025 | rust | function `in_external_macro` judgment is incorrect | I use the derive macro provided by the clap crate in my code, and then call the `in_external_macro` function from a custom lint's diagnostic code. The function incorrectly determines that code generated by the derive macro is local code.
I tried this code:
```rust
fn main() {
let _a = 1;
}
use clap::Args;
#[derive(Args)]
pub struct ToFrom {
#[arg(long = "workspace")]
a: i32,
}
```
dependency is clap = { version = "=4.2.0", features = ["derive"] }
My lint code is:
```rust
fn check_block(&mut self, cx: &LateContext<'tcx>, block: &'tcx Block<'_>) {
    let res = in_external_macro(cx.sess(), block.span);
    info!("check block in_external_macro res {:?}", res);
    if !res {
        info!("check block {:?}", block);
    }
}
```
I expected to see this happen: all code generated by derive macros from external crates should be treated as belonging to external macros.
Instead, this happened: some of the code generated by the derive macro is considered to be local code.
The erroneous macro expansion is shown below.
The blocks inside the `.arg` call are considered local code.
```rust
.arg({
#[allow(deprecated)]
let arg = clap::Arg::new("a")
.value_name("A")
.required(true && clap::ArgAction::Set.takes_values())
.value_parser({
use ::clap_builder::builder::via_prelude::*;
let auto = ::clap_builder::builder::_AutoValueParser::<
i32,
>::new();
(&&&&&&auto).value_parser()
})
.action(clap::ArgAction::Set);
let arg = arg.long("workspace");
let arg = arg.required(false);
arg
});
```
`rustc --version --verbose`:
```
+nightly-2024-03-07
```
| T-compiler,C-bug,A-proc-macros,E-needs-investigation | low | Critical |
2,601,849,665 | kubernetes | Pods support requesting resources for each container at runtime. | ### What happened?
In a pod containing three containers A, B, and C, with the execution order of operators being sequential (the second one starts only after the first one completes), can we request CPU and GPU information during the operator runtime?
### What did you expect to happen?
In a pod containing three containers A, B, and C, with the execution order of operators being sequential (the second one starts only after the first one completes), can we request CPU and GPU information during the operator runtime?
### How can we reproduce it (as minimally and precisely as possible)?
In a pod containing three containers A, B, and C, with the execution order of operators being sequential (the second one starts only after the first one completes), can we request CPU and GPU information during the operator runtime?
### Anything else we need to know?
_No response_
### Kubernetes version
1.20
### Cloud provider
not cloud
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/support,sig/node,needs-triage | low | Major |
2,601,898,443 | rust | crash: lazy type alias: stack overflow | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
#![feature(lazy_type_alias)]
type A = [[Y; {}]; {
type A = [A; {}];
}];
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (662180b34 2024-10-20)
binary: rustc
commit-hash: 662180b34d95f72d05b7c467b0baf4d23d36b1e1
commit-date: 2024-10-20
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
### Error output
```
<output>
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error[E0412]: cannot find type `Y` in this scope
--> /tmp/crash.rs:2:12
|
2 | type A = [[Y; {}]; {
| ^ not found in this scope
warning: the feature `lazy_type_alias` is incomplete and may not be safe to use and/or cause compiler crashes
--> /tmp/crash.rs:1:12
|
1 | #![feature(lazy_type_alias)]
| ^^^^^^^^^^^^^^^
|
= note: see issue #112792 <https://github.com/rust-lang/rust/issues/112792> for more information
= note: `#[warn(incomplete_features)]` on by default
error[E0601]: `main` function not found in crate `crash`
--> /tmp/crash.rs:4:4
|
4 | }];
| ^ consider adding a `main` function to `/tmp/crash.rs`
error[E0308]: mismatched types
--> /tmp/crash.rs:2:15
|
2 | type A = [[Y; {}]; {
| ^^ expected `usize`, found `()`
error: rustc interrupted by SIGSEGV, printing backtrace
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x37207d3) [0x732bf89207d3]
/usr/lib/libc.so.6(+0x3d1d0) [0x732bfc5bb1d0]
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x5011e81) [0x732bfa211e81]
### cycle encountered after 3 frames with period 4
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x50124ce) [0x732bfa2124ce]
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x5011e86) [0x732bfa211e86]
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x50124ce) [0x732bfa2124ce]
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x5011e86) [0x732bfa211e86]
### recursed 63 times
/home/matthias/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/librustc_driver-5396912e8af1f65d.so(+0x50124ce) [0x732bfa2124ce]
note: rustc unexpectedly overflowed its stack! this is a bug
note: maximum backtrace depth reached, frames may have been lost
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
```
</p>
</details>
| I-crash,T-compiler,C-bug,F-lazy_type_alias | low | Critical |
2,602,027,368 | storybook | [Bug]: Module "node:process" has been externalized - Svelte | Addon-Test | ### Describe the bug
When setting up `@storybook/experimental-addon-test` in a Svelte 5 project and running tests, the following warning gets printed to the terminal for each story file:
```
Module "node:process" has been externalized for browser compatibility. Cannot access "node:process.cwd" in client code. See https://vite.dev/guide/troubleshooting.html#module-externalized-for-browser-compatibility for more details.
```
### Reproduction steps
1. Create a Svelte 5 sandbox
2. Run `yarn vitest` | bug,help wanted,svelte,sev:S3,addon: test | low | Critical |
2,602,027,460 | vscode | VS Code installation corrupts when update comes during running WSL session |
Type: <b>Bug</b>
Issue:
When VS Code is running and connected to WSL and an update arrives, the installation gets corrupted.
Repro steps:
- Install VS Code using system installer (not tested for user installer)
- Open directory in WSL (tested against WSL2)
- Wait for update to come out (issue appears even with disabled auto-update)
- Try to open another instance of VS Code - it should be broken for both Windows and WSL environments
Workarounds:
- Use system installer to make new installation "on top" of existing one
- Try to catch update without opened WSL environment - when only Windows directories are opened update goes smoothly
- You can use old system installer and then update VS Code from Help menu with only Windows directories opened - it also works
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13980HX (32 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.62GB (47.63GB free)|
|Process Argv|--crash-reporter-id 1e948e9a-f2a3-48d2-8f25-b094576126d2|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (9)</summary>
Extension|Author (truncated)|Version
---|---|---
solargraph|cas|0.24.1
erb|Cra|0.0.1
vscode-html-css|ecm|2.0.10
endwise|kai|1.5.1
vscode-docker|ms-|1.29.3
vscode-kubernetes-tools|ms-|1.3.18
remote-wsl|ms-|0.88.4
vscode-yaml|red|1.15.0
ruby-lsp|Sho|0.8.7
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
9c06g630:31013171
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
autoexpandse:31146404
iacca2:31156134
notype1:31157159
5fd0e150:31155592
dwcopilot:31162478
iconenabled:31158251
```
</details>
<!-- generated by issue reporter --> | bug,install-update,WSL,confirmation-pending | low | Critical |
2,602,039,356 | PowerToys | Mouse without border deactivates the '|' and the '@' on my keyboard | ### Microsoft PowerToys version
0.81.1 (not checked on v0.85.1 yet)
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
I am using a 'BEFR' keyboard.
When I activate 'Mouse Without Borders', I lose the use of the keys
'AltGr+&', which should produce a '|', and
'AltGr+é', which should produce a '@'.
When 'Mouse Without Borders' is activated,
'AltGr+&' produces '&' and
'AltGr+é' produces 'é' - in both cases after a certain time / number of strikes on the key.
Note that the other Alt Gr keys remain working correctly : 'Alt Gr+"' = '#', 'Alt Gr+§' = '^', 'Alt Gr+ç' = '{'...
### ✔️ Expected Behavior
I do not see any keyboard remapping in the 'Mouse Without Borders' toy,
so I was expecting my keyboard to work normally and produce '|' or '@'.
### ❌ Actual Behavior
Producing respectively '&' or 'é' at the 2nd stroke (the 1st stroke on 'AltGr+&' produces nothing, same for 'é').
### Other Software
All software was affected the same way | Issue-Bug,Needs-Triage | low | Minor |
2,602,171,951 | pytorch | Cholesky factorization of sparse tensors | ### 🚀 The feature, motivation and pitch
For many applications involving sparse matrices, the sparse Cholesky factorization plays an important role in enabling efficient computations.
I am working within the field of spatial statistics, and here the Cholesky factorization is actively used for sampling from distributions, computing likelihoods and solving linear systems. As the size of the matrices in my statistical model grows, it becomes increasingly time-consuming to use the existing dense Cholesky factorization for sparse tensors. Since an efficient Cholesky factorization of sparse tensors is not yet implemented, this severely limits the models I can use within PyTorch.
I believe the sparse Cholesky factorization and its related operations can be of great relevance for expanding the functionality of the `torch.sparse` package!
### Alternatives
The [theseus](https://github.com/facebookresearch/theseus) project seems to have some support for solving sparse linear systems. Similarly, [torchsparsegradutils](https://github.com/cai4cai/torchsparsegradutils) contains a wrapper for a Jax sparse solver. However, other computations involving the Cholesky factorization (like computing a log-determinant) do not seem to be supported.
Ideally, similar functionality as in the `sksparse.cholmod` library ([link](https://scikit-sparse.readthedocs.io/en/latest/cholmod.html)) would be preferable. Here one can compute the Cholesky factorization, and then easily use this to perform relevant operations like solving systems, computing determinants etc.
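As a concrete illustration of that factor-once, reuse-many-times workflow (factorize, then solve systems and compute log-determinants from the same factor), here is a minimal pure-Python sketch on a small dense matrix. This is purely illustrative of the desired API shape, not the proposed sparse implementation:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T (dense, illustrative only)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def logdet(L):
    """log|A| = 2 * sum(log(diag(L))) -- reuses the factor, no refactorization."""
    return 2.0 * sum(math.log(L[i][i]) for i in range(len(L)))

A = [[4.0, 2.0], [2.0, 3.0]]  # symmetric positive definite, det(A) = 8
L = cholesky(A)
```

For sparse matrices, the key extra ingredient (which `sksparse.cholmod` provides and this sketch omits) is a fill-reducing permutation so that the factor stays sparse.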
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jianyuh @mruberry @walterddr @xwang233 @Lezcano | module: sparse,feature,triaged,module: linear algebra | low | Minor |
2,602,176,636 | opencv | The new dnn engine doesn't support forward to a specified layer | ### Describe the feature and motivation
The old dnn engine has a `forward(layerName)` method that runs the model graph up to the specified layer. The new dnn engine doesn't support this feature.
Several tests use this feature, so they are currently disabled with the new engine:
- Test_TFLite.max_unpooling
Reference: https://github.com/opencv/opencv/pull/26330
### Additional context
_No response_ | feature,category: dnn | low | Minor |
2,602,194,514 | electron | App freezes with custom spell-checker on Electron 31.7.1 | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.7.1
### What operating system(s) are you using?
Other Linux
### Operating System Version
Linux speedforce 6.11.4-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 17 Oct 2024 20:53:41 +0000 x86_64 GNU/Linux
### What arch are you using?
x64
### Last Known Working Electron version
31.6
### Expected Behavior
The app should not hit 100% CPU usage when using a custom spell checker and the UI should not freeze.
### Actual Behavior
When using a custom spell checker, the app hits 100% CPU usage and the UI freezes when typing a lot of text or pasting in text.
Everything was working on older versions of electron.
### Testcase Gist URL
_No response_
### Additional Information
Use https://github.com/MicroPad/MicroPad-Electron with v31.7.1 in the `package.json` and the problem occurs. I'll see if I can make a better repro when I have time.
After updating electron:
1. `yarn start`
2. Exit the whats new modal if it pops up
3. hit the 'N' key
4. Hit the 'N' key again
5. Start typing
I haven't tried v32 or v33 because MicroPad Electron does not work at all with them. | platform/linux,bug :beetle:,has-repro-gist,31-x-y | low | Critical |
2,602,197,398 | neovim | `<NL>` and `<C-J>` are sometimes equal and sometimes not equal | ### Problem
`<NL>` and `<C-J>` behave like the same key in some cases and like different keys in other cases.
### Steps to reproduce
* Run `nvim --clean` (tested on current HEAD and 0.10.2), and then run the following command.
```
nnoremap <NL> <Cmd>echom 'NL'<CR>
```
If you type `<C-J>`, `NL` is printed. I believe this is expected behavior in most terminals that don't distinguish `<C-J>` and `<NL>`.
In this sense, `<C-J>` and `<NL>` are equal.
* `keytrans()` agrees on that.
The following command prints `<NL> <NL>`
```
echo keytrans(nvim_replace_termcodes('<C-J>', v:true, v:true, v:true)) keytrans(nvim_replace_termcodes('<NL>', v:true, v:true, v:true))
```
* But the problem is that you can map both `<C-J>` and `<NL>`, and somehow `<C-J>` is prioritized.
Run the following commands in a fresh nvim instance in that order, and type `<C-J>`.
```
nnoremap <C-J> <Cmd>echom 'C-J'<CR>
nnoremap <NL> <Cmd>echom 'NL'<CR>
```
It prints `C-J`. This means that `<C-J>` and `<NL>` are actually different.
(NOTE: In vim 9.1.771, this prints `NL`.)
Related: https://github.com/hrsh7th/nvim-cmp/pull/1935#issuecomment-2426250159
### Expected behavior
`<NL>` and `<C-J>` should be either different xor the same; not both.
### Nvim version (nvim -v)
current master and 0.10.2
### Vim (not Nvim) behaves the same?
no
### Operating system/version
ubuntu 22.04
### Terminal name/version
GNOME Terminal 3.44.0
### $TERM environment variable
tmux-256color
### Installation
manually built / appimage | bug,documentation,api,input,has:workaround,mappings | low | Major |
2,602,200,005 | next.js | setAssetPrefix not work in custom server | ### Link to the code that reproduces this issue
https://github.com/yutingzhao1991/nextSetAssetPrefixBug
### To Reproduce
1. Clone https://github.com/yutingzhao1991/nextSetAssetPrefixBug
2. npm install
3. node index.js
### Current vs. Expected behavior
https://github.com/yutingzhao1991/nextSetAssetPrefixBug/blob/main/index.js#L14 This line of code doesn't work. I want to set the address of static resources through `setAssetPrefix`, since different server environments need to request different addresses.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 22.3.0: Mon Jan 30 20:39:46 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.12.2
npm: 10.5.0
Yarn: N/A
pnpm: 8.14.0
Relevant Packages:
next: 14.2.15 // Latest available version is detected (14.2.15).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
Similar to issue https://github.com/vercel/next.js/issues/59940, but it seems like this problem has been fixed by PR https://github.com/vercel/next.js/pull/61676. However, I tried the latest version of next and the issue still persists. | bug | low | Critical |
2,602,201,285 | tensorflow | Overflow in `tf.raw_ops.Fill` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17.0 tf2.16.1
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Overflow in `tf.raw_ops.Fill` when `dims` contains values that are too large.
### Standalone code to reproduce the issue
```shell
https://colab.research.google.com/drive/1GDBN4lheNUIW704hsXVt3lW55UJue97S?usp=sharing
```
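The likely failure mode (an assumption here, since the linked colab is not reproduced inline) is that the product of the requested `dims` overflows or exhausts memory before any validation happens. A sketch of the kind of element-count guard that avoids this, in plain Python:

```python
INT64_MAX = 2**63 - 1

def check_fill_dims(dims):
    """Reject shapes whose element count overflows int64 before allocating."""
    n = 1
    for d in dims:
        if d < 0:
            raise ValueError("negative dimension")
        n *= d
        if n > INT64_MAX:
            raise OverflowError("element count overflows int64")
    return n

# A huge shape is rejected up front instead of being handed to the allocator.
try:
    check_fill_dims([2**40, 2**40])
    overflowed = False
except OverflowError:
    overflowed = True
```

Checking the running product inside the loop (rather than once at the end) matters in a C++ implementation, where the intermediate product itself can wrap around.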
### Relevant log output
```shell
Kill
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,602,225,490 | excalidraw | Webview embedding error | In an H5 project, the drawing board plugin is used and then embedded via webview in a WeChat mini program. Drawing rectangles, circles, or diamonds, or inserting images, produces the following error:
TypeError: Failed to execute 'roundRect' on 'CanvasRenderingContext2D': The provided value cannot be converted to a sequence.

| More information needed / cannot reproduce | low | Critical |
2,602,237,494 | pytorch | `torch.__config__.show()` silently initialises CUDA (?); forked processes fail with uninitialised CUDA | ### 🐛 Describe the bug
Hey,
We recently upgraded to `torch==2.3.0+cu121` and we had some code that used forked processes to do inference. Previously, we took good care to initialise CUDA inside the forked processes, and not before. And this worked fine.
But one of our dependencies (`torch_geometric`) calls `torch.__config__.show()` upon import and this causes forked processes to fail (`RuntimeError: CUDA error: initialization error`). I am not sure why, since CUDA does not seem to actually get initialised by it (see below), and I was not able to find any good discussion on this.
```python
import torch
import torch.multiprocessing as torch_mp
def f(x, device):
y = x.to(device) # <- CUDA initialisation fails here
return y*y
if __name__ == "__main__":
torch.__config__.show() # <- comment this to make the script work
# this does _not_ initialise CUDA
assert not torch.cuda.is_initialized()
processes = []
x = torch.rand(1000)
for i in range(10):
p = (
torch_mp
# .get_context("spawn") # <- uncomment this to make the script work
.Process(target=f, args=(x, torch.device(f"cuda:{i % 2}"),))
)
p.start()
processes.append(p)
for p in processes:
p.join()
```
This produces:
```
Process Process-10:
Traceback (most recent call last):
File "/opt/pyenv/versions/3.10.14/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/opt/pyenv/versions/3.10.14/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/jovyan/ml-rosalind-endpoints/some.py", line 6, in f
y = x.to(device) # <- CUDA initialisation fails here
File "/opt/pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
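To illustrate the underlying mechanism with a plain-Python analogy (not CUDA itself): `fork` copies the parent's process-wide state into the child, so anything initialised before the fork, like CUDA's driver context, is inherited in a form the child cannot safely use. The hypothetical `_state` dict below stands in for that driver context:

```python
import os

# Stand-in for process-wide state such as a CUDA driver context.
_state = {"initialized": False}

def init():
    _state["initialized"] = True

init()  # parent touches the state before forking (like CUDA init pre-fork)

pid = os.fork()
if pid == 0:
    # The child inherits the already-initialized state. With CUDA, this
    # inherited context is invalid and later calls fail with
    # "CUDA error: initialization error".
    os._exit(0 if _state["initialized"] else 1)

_, status = os.waitpid(pid, 0)
child_saw_initialized = os.WEXITSTATUS(status) == 0
```

This is consistent with why `get_context("spawn")` makes the original script work: spawned children start from a fresh interpreter with no inherited CUDA state.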
### Versions
Here is the output from `python collect_env.py` for reference:
```
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.14 (main, Oct 8 2024, 16:07:53) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7006.6401
CPU min MHz: 1800.0000
BogoMIPS: 7187.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] discrete-key-value-bottleneck-pytorch==0.1.1
[pip3] enformer-pytorch==0.8.8
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.2.4
[pip3] pytorch_optimizer==3.1.2
[pip3] torch==2.3.0+cu121
[pip3] torch-geometric==2.6.0
[pip3] torchmetrics==1.4.0
[pip3] torchvision==0.18.0+cu121
[pip3] triton==2.3.0
[pip3] vector-quantize-pytorch==1.17.4
[conda] Could not collect
```
Help would be greatly appreciated!
cc @VitalyFedyunin @albanD @ptrblck @msaroufim | module: multiprocessing,module: cuda,triaged | low | Critical |
2,602,278,970 | opencv | OpenCV keeps failing on Windows | ### System Information
OpenCV 4.10.0
Windows 11
Visual Studio 2022
Python 3.11
### Detailed description
The application keeps crashing for an unknown reason. All older OpenCV versions work fine; only 4.10.0 has this issue. I have checked that 4.9.0 works fine.

### Steps to reproduce
```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

using namespace cv;

int main()
{
    // The crash happens inside cv::imread while loading the image
    cv::Mat img = cv::imread("O:/operadown/oldceleberity/2081.jpg");
    return 0;
}
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | category: build/install,incomplete,platform: win32 | low | Critical |
2,602,344,281 | godot | [Android] Modifying the Android system's dark mode causes the Godot example application to crash. | ### Tested versions
- Reproducible in: 4.3.0.stable
### System information
Windows 10, Godot v4.3.0.stable
### Issue description
After running the Godot Android example on an Android device, modifying the Android system's dark mode causes a crash.
### Steps to reproduce
1. Clone the repository: https://github.com/m4gr3d/Godot-Android-Samples.git (commit id: cc561b57ceaf53aea3ba037589457d25bc1c988a)
2. Open the project in Android Studio
3. Run it on an Android device
4. Use adb to change uiMode, e.g. `adb shell cmd uimode night yes` or `adb shell cmd uimode night no`
5. The app crashes
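A possible mitigation worth testing (an assumption on my side, not a fix for the underlying native crash): declare `uiMode` in the activity's `configChanges` in `AndroidManifest.xml`, so the activity handles dark-mode toggles in-process instead of being destroyed and recreated on the configuration change:

```xml
<!-- Hypothetical manifest fragment; attribute values other than uiMode are illustrative. -->
<activity
    android:name=".MainActivity"
    android:configChanges="uiMode|orientation|screenSize" />
```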
### Minimal reproduction project (MRP)
```
10-21 19:58:49.369 2263 2382 D LdAnalyticsSDK: responseCode = 200
10-21 19:58:49.841 1433 1433 I ldinit : type=1400 audit(0.0:1167): avc: denied { read } for name="partitions" dev="proc" ino=4026532050 scontext=u:r:ldinit:s0 tcontext=u:object_r:proc:s0 tclass
=file permissive=1
10-21 19:58:49.841 1433 1433 I ldinit : type=1400 audit(0.0:1167): avc: denied { open } for path="/proc/partitions" dev="proc" ino=4026532050 scontext=u:r:ldinit:s0 tcontext=u:object_r:proc:s0
tclass=file permissive=1
10-21 19:58:49.841 1433 1433 W ldinit : type=1300 audit(0.0:1167): arch=c000003e syscall=257 success=yes exit=6 a0=ffffff9c a1=7ffff7a18ae5 a2=0 a3=0 items=0 ppid=1 auid=4294967295 uid=0 gid=0
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 exe="/system/bin/ldinit" subj=u:r:ldinit:s0 key=(null)
10-21 19:58:49.841 1357 1357 W auditd : type=1327 audit(0.0:1167): proctitle="/system/bin/ldinit"
10-21 19:58:49.841 1357 1357 W auditd : type=1320 audit(0.0:1167):
10-21 19:58:49.910 2457 2465 W System : A resource failed to call close.
10-21 19:58:51.778 3216 3232 W Bootstrap: Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x00c93f61]'
10-21 19:58:44.757 2109 2109 W ContextImpl: Calling a method in the system process without a qualified user: android.app.ContextImpl.startService:1531 android.content.ContextWrapper.startService
:664 android.content.ContextWrapper.startService:664 com.android.coreservice.CoreBroadcastReceiver.onReceive:69 android.app.ActivityThread.handleReceiver:3424
10-21 19:58:52.169 1453 1529 E storaged: getDiskStats failed with result NOT_SUPPORTED and size 0
10-21 19:58:52.168 1453 1453 I storaged: type=1400 audit(0.0:1169): avc: denied { call } for scontext=u:r:storaged:s0 tcontext=u:r:init:s0 tclass=binder permissive=1
10-21 19:59:00.002 1566 1579 E memtrack: Couldn't load memtrack module
10-21 19:59:00.002 1566 1579 W android.os.Debug: failed to get memory consumption info: -1
10-21 19:59:10.390 1566 4462 I ActivityManager: Config changes=200 {1.0 460mcc65535mnc [zh_CN] ldltr sw1080dp w1920dp h1056dp 160dpi xlrg long land finger qwerty/v/v -nav/h winConfig={ mBounds=R
ect(0, 0 - 0, 0) mAppBounds=Rect(0, 0 - 1920, 1080) mWindowingMode=fullscreen mActivityType=undefined} s.18}
10-21 19:59:10.392 1566 4462 I ActivityManager: Override config changes=200 {1.0 460mcc65535mnc [zh_CN] ldltr sw1080dp w1920dp h1056dp 160dpi xlrg long land finger qwerty/v/v -nav/h winConfig={
mBounds=Rect(0, 0 - 1920, 1080) mAppBounds=Rect(0, 0 - 1920, 1080) mWindowingMode=fullscreen mActivityType=undefined} s.18} for displayId=0
10-21 19:59:10.406 1566 1566 V SettingsProvider: Notifying for 0: content://settings/secure/ui_night_mode
10-21 19:59:10.410 4702 4702 V Godot : OnPause: MyGodotFragment{f5ea767} (af15eeb4-a92e-419f-bec7-9e8b7a4b78a8 id=0x7f08005e)
10-21 19:59:10.410 1429 1626 D sensor_hal: cb_setDelay handle=0 delay-ns=200000000
10-21 19:59:10.405 2197 2197 I utmethod.pinyin: type=1400 audit(0.0:1170): avc: denied { open } for path="/data/local/cfg-ainaz/input" dev="sdb2" ino=131776 scontext=u:r:platform_app:s0:c512,c76
8 tcontext=u:object_r:system_data_file:s0 tclass=file permissive=1
10-21 19:59:10.411 1429 1626 D sensor_hal: cb_activate handle=2 enabled=0
10-21 19:59:10.411 1429 1626 D sensor_hal: cb_activate handle=4 enabled=0
10-21 19:59:10.411 1429 1626 D sensor_hal: cb_activate handle=5 enabled=0
10-21 19:59:10.413 4702 4734 D EGL_adreno: eglMakeCurrent: 0x7fff6c050940: ver 3 1 (tinfo 0x7fff6c00dfe0)
10-21 19:59:10.414 4702 4702 I HostConnection: HostConnection::HostConnection: pid=4702, tid=4702, this=0x7fff68ac64e0
10-21 19:59:10.414 4702 4702 I : fastpipe: Connect success
10-21 19:59:10.414 4702 4702 D HostConnection: HostRPC::connect sucess: app=fhuyakou.godot.app.android.gltfviewer:godotAndroidSamples, pid=4702, tid=4702, this=0x7fff5a134800
10-21 19:59:10.415 4702 4702 D HostConnection: queryAndSetGLESMaxVersion select gles-version: 3.1 hostGLVersion:46 process:fhuyakou.godot.app.android.gltfviewer:godotAndroidSamples
10-21 19:59:10.416 4702 4702 V Godot : OnStop: MyGodotFragment{f5ea767} (af15eeb4-a92e-419f-bec7-9e8b7a4b78a8 id=0x7f08005e)
10-21 19:59:10.417 2197 2205 W System : A resource failed to call close.
10-21 19:59:10.417 2197 2205 I chatty : uid=10047(com.android.inputmethod.pinyin) FinalizerDaemon identical 2 lines
10-21 19:59:10.417 2197 2205 W System : A resource failed to call close.
10-21 19:59:10.418 1436 1509 E BufferQueueProducer: [SurfaceView - fhuyakou.godot.app.android.gltfviewer/fhuyakou.godot.app.android.gltfviewer.MainActivity#0] requestBuffer: BufferQueue has no c
onnected producer
10-21 19:59:10.418 4702 4733 E Surface : dequeueBuffer: IGraphicBufferProducer::requestBuffer failed: -19
10-21 19:59:10.418 1436 2092 E BufferQueueProducer: [SurfaceView - fhuyakou.godot.app.android.gltfviewer/fhuyakou.godot.app.android.gltfviewer.MainActivity#0] cancelBuffer: BufferQueue has no co
nnected producer
10-21 19:59:10.418 4702 4733 E EGL_adreno: tid 4733: swapBuffers(581): error 0x300d (EGL_BAD_SURFACE)
10-21 19:59:10.405 2197 2197 W utmethod.pinyin: type=1300 audit(0.0:1170): arch=c000003e syscall=257 success=yes exit=35 a0=ffffff9c a1=7fff68afa600 a2=0 a3=0 items=0 ppid=1399 auid=4294967295 u
id=10047 gid=10047 euid=10047 suid=10047 fsuid=10047 egid=10047 sgid=10047 fsgid=10047 tty=(none) ses=4294967295 exe="/system/bin/app_process64" subj=u:r:platform_app:s0:c512,c768 key=(null)
10-21 19:59:10.418 4702 4702 V Godot : OnDestroy: MyGodotFragment{f5ea767} (af15eeb4-a92e-419f-bec7-9e8b7a4b78a8 id=0x7f08005e)
10-21 19:59:10.420 4702 4733 W GLThread: eglSwapBuffers failed: EGL_BAD_SURFACE
10-21 19:59:10.420 4702 4733 D GLThread: Exiting render thread
10-21 19:59:10.420 4702 4733 D GodotRenderer: Destroying Godot Engine
10-21 19:59:10.405 1357 1357 W auditd : type=1320 audit(0.0:1170):
10-21 19:59:10.421 4702 4733 I godot : XR: Clearing primary interface
10-21 19:59:10.421 4702 4733 I godot : XR: Removed interface "Native mobile"
10-21 19:59:10.421 4702 4733 I godot : XR: Removed interface "OpenXR"
10-21 19:59:10.423 4702 4733 D : PlayerBase::stop() from IPlayer
10-21 19:59:10.423 4702 4733 D AudioTrack: stop() called with 1131788 frames delivered
10-21 19:59:10.424 1706 2010 E bt_btif : register_notification_rsp: Avrcp device is not connected, handle: 0x0
10-21 19:59:10.424 1706 2010 I chatty : uid=1002(bluetooth) BluetoothAvrcpH identical 4 lines
10-21 19:59:10.424 1706 2010 E bt_btif : register_notification_rsp: Avrcp device is not connected, handle: 0x0
10-21 19:59:10.430 4702 4733 E eglCodecCommon: removeVertexArrayObject: ERROR: 6 not found in VAO state!
10-21 19:59:10.445 4702 4733 V Godot : OnGodotTerminating
10-21 19:59:10.446 1436 1509 E BufferQueueProducer: [SurfaceView - fhuyakou.godot.app.android.gltfviewer/fhuyakou.godot.app.android.gltfviewer.MainActivity#0] disconnect: not connected (req=1)
10-21 19:59:10.447 4702 4733 W libEGL : EGLNativeWindowType 0x7fff68b17010 disconnect failed
10-21 19:59:10.448 4702 4733 I HostConnection: HostConnection::~HostConnection, pid=4702, tid=4733, this=0x7fff68ad2dc0, m_stream=0x7fff68b9e140
10-21 19:59:10.449 4702 4733 I : fastpipe: close connect
10-21 19:59:10.453 4702 4702 V Godot : OnDestroy: MyGodotFragment{f5ea767} (af15eeb4-a92e-419f-bec7-9e8b7a4b78a8 id=0x7f08005e)
10-21 19:59:10.455 2109 2109 W ContextImpl: Calling a method in the system process without a qualified user: android.app.ContextImpl.startService:1531 android.content.ContextWrapper.startService
:664 android.content.ContextWrapper.startService:664 com.android.coreservice.CoreBroadcastReceiver.onReceive:53 android.app.ActivityThread.handleReceiver:3424
10-21 19:59:10.455 1566 1579 E memtrack: Couldn't load memtrack module
10-21 19:59:10.455 1566 1579 W android.os.Debug: failed to get memory consumption info: -1
10-21 19:59:10.461 4702 4702 I tAndroidSamples: type=1400 audit(0.0:1171): avc: denied { read } for name="Pictures" dev="sdcardfs" ino=1572867 scontext=u:r:untrusted_app:s0:c93,c256,c512,c768 tc
ontext=u:object_r:sdcardfs:s0 tclass=lnk_file permissive=1
10-21 19:59:10.476 1436 1436 W SurfaceFlinger: couldn't log to binary event log: overflow.
10-21 19:59:10.476 1436 1436 W SurfaceFlinger: couldn't log to binary event log: overflow.
10-21 19:59:10.485 1566 1573 W System : A resource failed to call close.
10-21 19:59:10.486 1566 1573 I chatty : uid=1000(system) FinalizerDaemon identical 48 lines
10-21 19:59:10.486 1566 1573 W System : A resource failed to call close.
10-21 19:59:10.489 1436 2092 W SurfaceFlinger: Attempting to destroy on removed layer: 91b3ec AssistPreviewPanel#0
10-21 19:59:10.493 4702 4702 V Godot : OnCreate: MyGodotFragment{ffba1d} (af15eeb4-a92e-419f-bec7-9e8b7a4b78a8 id=0x7f08005e)
10-21 19:59:10.493 4702 4702 V Godot : Initializing Godot plugin registry
10-21 19:59:10.494 4702 4702 V Godot : OnInitNativeLayer: MyGodotFragment{ffba1d} (af15eeb4-a92e-419f-bec7-9e8b7a4b78a8 id=0x7f08005e)
10-21 19:59:10.494 4702 4702 V Godot : Godot native layer initialization completed: true
10-21 19:59:10.496 4702 4702 E godot : USER ERROR: Parameter "t" is null.
10-21 19:59:10.496 4702 4702 E godot : at: register_class (./core/object/class_db.h:201)
[... the same "USER ERROR: Parameter "t" is null." / "at: register_class" pair (also the register_abstract_class and register_custom_instance_class variants) repeats ~80 more times ...]
10-21 19:59:10.496 4702 4702 E godot : USER ERROR: Condition "singleton != nullptr" is true.
10-21 19:59:10.496 4702 4702 E godot : at: ResourceUID (core/io/resource_uid.cpp:263)
10-21 19:59:10.496 4702 4702 E godot : USER ERROR: IP singleton already exist.
10-21 19:59:10.496 4702 4702 E godot : at: create (core/io/ip.cpp:335)
10-21 19:59:10.498 4702 4702 E godot : USER ERROR: Parameter "t" is null.
10-21 19:59:10.498 4702 4702 E godot : at: register_class (./core/object/class_db.h:201)
10-21 19:59:10.498 4702 4702 E godot : USER ERROR: Couldn't load file 'res://project.binary', error code 19.
10-21 19:59:10.498 4702 4702 E godot : at: _load_settings_text_or_binary (core/config/project_settings.cpp:803)
10-21 19:59:10.498 4702 4702 E godot : USER ERROR: Parameter "t" is null.
10-21 19:59:10.498 4702 4702 E godot : at: register_class (./core/object/class_db.h:201)
10-21 19:59:10.498 4702 4702 E godot : USER ERROR: Parameter "t" is null.
10-21 19:59:10.498 4702 4702 E godot : at: register_class (./core/object/class_db.h:201)
10-21 19:59:10.499 4702 4702 E godot : USER ERROR: Parameter "t" is null.
10-21 19:59:10.499 4702 4702 E godot : at: register_class (./core/object/class_db.h:201)
10-21 19:59:10.499 4702 4702 E godot : USER ERROR: Parameter "t" is null.
10-21 19:59:10.499 4702 4702 E godot : at: register_custom_instance_class (./core/object/class_db.h:269)
--------- beginning of crash
10-21 19:59:10.499 4702 4702 F libc : FORTIFY: pthread_mutex_lock called on a destroyed mutex (0x7fff5a50aef8)
10-21 19:59:10.499 4702 4702 F libc : Fatal signal 6 (SIGABRT), code -6 (SI_TKILL) in tid 4702 (tAndroidSamples), pid 4702 (tAndroidSamples)
10-21 19:59:10.509 1436 2092 W SurfaceFlinger: Attempting to destroy on removed layer: 7dc22d8 DockedStackDivider#0
10-21 19:59:10.512 4759 4759 E cutils-trace: Error opening trace file: No such file or directory (2)
10-21 19:59:10.518 4760 4760 I crash_dump64: obtaining output fd from tombstoned, type: kDebuggerdTombstone
10-21 19:59:10.518 1460 1460 I /system/bin/tombstoned: received crash request for pid 4702
10-21 19:59:10.519 4760 4760 I crash_dump64: performing dump of process 4702 (target tid = 4702)
10-21 19:59:10.521 4760 4760 F DEBUG : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
10-21 19:59:10.521 4760 4760 F DEBUG : Build fingerprint: 'samsung/star2qltezh/star2qltechn:9/PQ3B.190801.06131105/G9650ZHU2ARC6:user/release-keys'
10-21 19:59:10.521 4760 4760 F DEBUG : Revision: '0'
10-21 19:59:10.521 4760 4760 F DEBUG : ABI: 'x86_64'
10-21 19:59:10.521 4760 4760 F DEBUG : pid: 4702, tid: 4702, name: tAndroidSamples >>> fhuyakou.godot.app.android.gltfviewer:godotAndroidSamples <<<
10-21 19:59:10.521 4760 4760 F DEBUG : signal 6 (SIGABRT), code -6 (SI_TKILL), fault addr --------
10-21 19:59:10.521 4760 4760 F DEBUG : Abort message: 'FORTIFY: pthread_mutex_lock called on a destroyed mutex (0x7fff5a50aef8)'
10-21 19:59:10.521 4760 4760 F DEBUG : rax 0000000000000000 rbx 000000000000125e rcx 00007ffff45ecbf8 rdx 0000000000000006
10-21 19:59:10.521 4760 4760 F DEBUG : r8 00007fff5a50f228 r9 00007fff5a50f228 r10 00007fff5a50f228 r11 0000000000000246
10-21 19:59:10.521 4760 4760 F DEBUG : r12 0000000000000000 r13 0000007fff6f0810 r14 000000000000125e r15 00007fffffff61d8
10-21 19:59:10.521 4760 4760 F DEBUG : rdi 000000000000125e rsi 000000000000125e
10-21 19:59:10.521 4760 4760 F DEBUG : rbp 00007fffffff6370 rsp 00007fffffff61c8 rip 00007ffff45ecbf8
10-21 19:59:10.522 4760 4760 F DEBUG :
10-21 19:59:10.522 4760 4760 F DEBUG : backtrace:
10-21 19:59:10.522 4760 4760 F DEBUG : #00 pc 0000000000026bf8 /system/lib64/libc.so (syscall+24)
10-21 19:59:10.522 4760 4760 F DEBUG : #01 pc 000000000002a795 /system/lib64/libc.so (abort+101)
10-21 19:59:10.522 4760 4760 F DEBUG : #02 pc 0000000000091a5a /system/lib64/libc.so (__fortify_fatal(char const*, ...)+154)
10-21 19:59:10.522 4760 4760 F DEBUG : #03 pc 00000000000912c8 /system/lib64/libc.so (HandleUsingDestroyedMutex(pthread_mutex_t*, char const*)+40)
10-21 19:59:10.522 4760 4760 F DEBUG : #04 pc 00000000000911c2 /system/lib64/libc.so (pthread_mutex_lock+130)
10-21 19:59:10.522 4760 4760 F DEBUG : #05 pc 00000000000cb8c5 /data/app/fhuyakou.godot.app.android.gltfviewer-8MImF5AJfBPxe__d62MBpA==/lib/x86_64/libc++_shared.so (std::__ndk1::recursive
_mutex::lock()+5)
10-21 19:59:10.522 4760 4760 F DEBUG : #06 pc 0000000003736bbb /data/app/fhuyakou.godot.app.android.gltfviewer-8MImF5AJfBPxe__d62MBpA==/lib/x86_64/libgodot_android.so (offset 0xfb5000)
10-21 19:59:10.522 4760 4760 F DEBUG : #07 pc 0000007fff6f0810 <unknown>
10-21 19:59:10.653 1460 1460 E /system/bin/tombstoned: Tombstone written to: /data/tombstones/tombstone_35
10-21 19:59:10.654 1566 4763 I WindowManager: Screen frozen for +260ms due to AppWindowToken{3746217 token=Token{40c7696 ActivityRecord{ca421b1 u0 fhuyakou.godot.app.android.gltfviewer/.MainActi
vity t16}}}
10-21 19:59:10.655 1566 4763 W ActivityManager: Force finishing activity fhuyakou.godot.app.android.gltfviewer/.MainActivity
10-21 19:59:10.658 1566 1586 I BootReceiver: Copying /data/tombstones/tombstone_35 to DropBox (SYSTEM_TOMBSTONE)
10-21 19:59:10.690 1399 1399 I Zygote : Process 4702 exited due to signal (6)
```
| bug,platform:android,topic:porting,needs testing,crash | low | Critical |
2,602,373,601 | tensorflow | Overflow and Check fail in `tf.raw_ops.Conv2DBackpropInput` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17.0 tf2.16.1
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Overflow: when `input_sizes` is a one-dimensional tensor containing a value that exceeds the int32 range, the op aborts.
Check fail: when `input_sizes` has shape [2] and `filter` has fewer than 2 dimensions, the op hits a CHECK failure and aborts.
### Standalone code to reproduce the issue
Overflow:

```python
import tensorflow as tf

input_sizes = tf.constant([1, 5, 5, 999999999999], shape=[4], dtype=tf.int32)
filter_tensor = tf.constant(3, shape=[3, 3, 3, 2], dtype=tf.float32)
out_backprop = tf.constant(5, shape=[1, 5, 5, 2], dtype=tf.float32)
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.Conv2DBackpropInput(
    input_sizes=input_sizes,
    filter=filter_tensor,
    out_backprop=out_backprop,
    strides=strides,
    padding=padding
)
```

Check fail:

```python
import tensorflow as tf

input_sizes = tf.constant(1, shape=[2], dtype=tf.int32)
filter_tensor = tf.constant(2, shape=[1], dtype=tf.float32)
out_backprop = tf.constant(5, shape=[1, 5, 5, 2], dtype=tf.float32)
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.Conv2DBackpropInput(
    input_sizes=input_sizes,
    filter=filter_tensor,
    out_backprop=out_backprop,
    strides=strides,
    padding=padding
)
```
### Relevant log output
Overflow:
```shell
2024-10-21 11:40:49.417611: F tensorflow/core/kernels/mkl/mkl_conv_grad_input_ops.cc:578] Non-OK-status: tensor::MakeShape(input_tensor, &input_tf_shape)
Status: INVALID_ARGUMENT: Dimension -727379969 must be >= 0
Aborted (core dumped)
```
Check fail:
```shell
2024-10-21 11:16:29.937265: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (2 vs. 1)
Aborted (core dumped)
```
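The overflow case can be reproduced without TensorFlow: 999999999999 truncated to int32 wraps to -727379969, which is exactly the dimension in the crash log. A minimal sketch of the up-front validation the kernel could perform before building the shape (pure Python; `to_int32` and `check_input_sizes` are hypothetical helpers, not TensorFlow API):

```python
def to_int32(value):
    """Wrap a Python int to a signed 32-bit value, mimicking the C++ cast
    that happens when the dimension reaches the kernel."""
    value &= 0xFFFFFFFF
    return value - (1 << 32) if value >= (1 << 31) else value

def check_input_sizes(input_sizes):
    """Reject any dimension that is negative or does not round-trip
    through int32 -- the check the kernel could perform up front."""
    for i, dim in enumerate(input_sizes):
        seen = to_int32(dim)
        if seen != dim or seen < 0:
            raise ValueError(
                f"input_sizes[{i}]={dim} is not a non-negative int32 "
                f"(kernel would see {seen})")
    return list(input_sizes)

# 999999999999 truncated to int32 is exactly the dimension in the log.
assert to_int32(999999999999) == -727379969
```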
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,602,397,396 | pytorch | Tried to instantiate dummy base class | ### 🐛 Describe the bug
/Users/cizon/miniconda3/envs/s2s/lib/python3.10/site-packages/torch/_utils.py", line 912, in err_fn
raise RuntimeError(f"Tried to instantiate dummy base class {class_name}")
### Error logs
/Users/cizon/miniconda3/envs/s2s/lib/python3.10/site-packages/torch/_utils.py", line 912, in err_fn
raise RuntimeError(f"Tried to instantiate dummy base class {class_name}")
### Minified repro
_No response_
### Versions
2.4.0
cc @ezyang @chauhang @penguinwu | needs reproduction,triaged,module: testing | low | Critical |
2,602,414,922 | tensorflow | No kernels registered for op `Conv2DBackpropInputV2` | ### Issue type
Documentation Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf 2.16.1
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The existence of the operator `Conv2DBackpropInputV2` is described in the official API documentation: https://www.tensorflow.org/api_docs/python/tf/raw_ops/Conv2DBackpropInputV2.
However, during actual execution, the following error message appears:
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_sizes = tf.constant(1, shape=[4], dtype=tf.int32)
filter_tensor = tf.constant(2, shape=[4], dtype=tf.float32)
out_backprop = tf.constant(5, shape=[1, 5, 5, 2], dtype=tf.float32)
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.Conv2DBackpropInputV2(
input=input_sizes,
filter=filter_tensor,
out_backprop=out_backprop,
strides=strides,
padding=padding
)
```
### Relevant log output
```shell
2024-10-21 12:30:18.492624: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
File "/mnt/tests/Conv2DBackpropInput.py", line 10, in <module>
tf.raw_ops.Conv2DBackpropInputV2(
File "/mnt/origin/venv/tensorflow-nightly/lib/python3.11/site-packages/tensorflow/python/util/tf_export.py", line 377, in wrapper
return f(**kwargs)
^^^^^^^^^^^
File "/mnt/origin/venv/tensorflow-nightly/lib/python3.11/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2030, in conv2d_backprop_input_v2
_ops.raise_from_not_ok_status(e, name)
File "/mnt/origin/venv/tensorflow-nightly/lib/python3.11/site-packages/tensorflow/python/framework/ops.py", line 5983, in raise_from_not_ok_status
raise core._status_to_exception(e) from None # pylint: disable=protected-access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node Conv2DBackpropInputV2}} = Conv2DBackpropInputV2[T=DT_INT32, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true]
All kernels registered for op Conv2DBackpropInputV2:
<no registered kernels>
[Op:Conv2DBackpropInputV2] name:
```
| type:docs-bug,stat:awaiting tensorflower,type:bug,comp:ops,TF 2.16 | medium | Critical |
2,602,425,187 | godot | Reflection probes greenish reflected light | ### Tested versions
Reproducible in 4.3 stable
### System information
Windows 10, Godot 4.3, gl compatibility mode with opengl 3
### Issue description
On Linux (Ubuntu 24.04.1) in Compatibility mode, reflection probes work correctly. On Windows 10 in Compatibility mode, however, reflection probes apply a green light to objects inside the probe. If the rendering driver is set to OpenGL ES, the bug does not appear.

Screenshot taken from windows 10
### Steps to reproduce
Create a 3D scene, add a ReflectionProbe and a MeshInstance3D with a sphere (or any other mesh) selected, and you will see a green light on the mesh.
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:rendering,needs testing,topic:3d | low | Critical |
2,602,454,334 | flutter | [video_player] Expand controls cross-platform for smoother video scrubbing | ### Use case
Package: video_player
Scrubbing a video slider that controls the position of the video_player (by calling `seekTo`) works fine on iOS, but is very sluggish on Android. It is so slow that at some point it looks like ExoPlayer is stacking the `seekTo` calls, which never actually finish until the user stops dragging the slider.
It turns out that the underlying Android implementation uses ExoPlayer, which has a property called `seekParameters` that defines how `seekTo` should behave. The configuration is a tradeoff between speed and precision of the seek. The default configuration is EXACT, which means full precision but the slowest possible seek.
### Proposal
I propose to expose this seekParameters so it is possible to pick a faster strategy that makes the video scrubbing on Android acceptable.
https://developer.android.com/reference/androidx/media3/exoplayer/SeekParameters
https://developer.android.com/reference/androidx/media3/exoplayer/ExoPlayer#setSeekParameters(androidx.media3.exoplayer.SeekParameters)
https://developer.android.com/reference/androidx/media3/exoplayer/ExoPlayer#getSeekParameters() | platform-android,p: video_player,package,c: proposal,P3,team-android,triaged-android | low | Major |
2,602,537,796 | next.js | Tree shaking not working for pages with transpilePackages | ### Link to the code that reproduces this issue
https://github.com/capJavert/nextjs-tree-shaking-test
### To Reproduce
1. Build application in production
2. Check `_app` bundle and it will contain `bla` const and other exports/imports from `shared/src/consts/common.ts` even though that const is only imported inside `/test` page
### Current vs. Expected behavior
We would expect `bla` to be tree shaken out of the main bundle (`_app`) and to only be included inside that page's bundle.
The above is just a small test, but on a big application this has major bundle-size implications.
The shared package is also added to `transpilePackages` and `optimizePackageImports`, and `sideEffects` is set to `false`. While the above helps in some other cases, we noticed that this setup does not allow tree shaking to work with `transpilePackages`.
We also tried moving the shared code to `web/src` and importing it directly, and that correctly tree-shakes the bundle: `bla` is not included inside `_app`. So there must be some issue with `transpilePackages` (from our understanding).
Looking forward to getting more eyes on this.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.12.2
npm: 10.5.0
Yarn: 1.22.18
pnpm: 9.0.4
Relevant Packages:
next: 14.2.15 // Latest available version is detected (14.2.15).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Pages Router, Performance
### Which stage(s) are affected? (Select all that apply)
next build (local), Vercel (Deployed)
### Additional context
I did not test this on 15.x since it is next canary but don't expect it to change since 15.x is focused on new react features. | bug,Performance,Pages Router,linear: next | low | Major |
2,602,558,250 | kubernetes | controllers: Check if informers are synced on `/healthz`/`/readyz`? | ### What happened?
Right now the kube-apiserver has a readyz check if the informers are synced:
```
% kubectl get --raw='/readyz/informer-sync'
ok
```
The corresponding source code:
https://github.com/kubernetes/kubernetes/blob/948afe5ca072329a73c8e79ed5938717a5cb3d21/staging/src/k8s.io/apiserver/pkg/server/healthz/healthz.go#L95-L122
https://github.com/kubernetes/kubernetes/blob/948afe5ca072329a73c8e79ed5938717a5cb3d21/staging/src/k8s.io/apiserver/pkg/server/config.go#L897-L901
However, there is no such healthz/readyz check for controllers (kube-controller-manager, kube-scheduler, etc.).
I was wondering whether it would make sense to add similar checks for whether informers are synced. Informers can get out of sync. If we bind this condition to the `/healthz` endpoint, and the `/healthz` endpoint to the liveness probe, then the kubelet can restart the Pod when the informers are not synced.
Also, while working with informers, should `HasSynced` be checked only on startup, or periodically?
I am not an expert on the topic. Feel free to comment on whether the proposal makes sense.
kube-controller-manager's `/healthz` endpoint returns status for the various controllers:
```
$ curl -k https://localhost:10257/healthz?verbose
[+]leaderElection ok
[+]serviceaccount-token-controller ok
[+]cronjob-controller ok
[+]certificatesigningrequest-signing-controller ok
[+]node-lifecycle-controller ok
[+]endpointslice-controller ok
[+]daemonset-controller ok
[+]statefulset-controller ok
[+]bootstrap-signer-controller ok
[+]persistentvolumeclaim-protection-controller ok
[+]persistentvolume-protection-controller ok
[+]garbage-collector-controller ok
[+]job-controller ok
[+]deployment-controller ok
[+]node-ipam-controller ok
[+]persistentvolume-attach-detach-controller ok
[+]validatingadmissionpolicy-status-controller ok
[+]endpoints-controller ok
[+]certificatesigningrequest-cleaner-controller ok
[+]persistentvolume-binder-controller ok
[+]ttl-after-finished-controller ok
[+]ephemeral-volume-controller ok
[+]resourcequota-controller ok
[+]namespace-controller ok
[+]replicaset-controller ok
[+]ttl-controller ok
[+]legacy-serviceaccount-token-cleaner-controller ok
[+]endpointslice-mirroring-controller ok
[+]pod-garbage-collector-controller ok
[+]disruption-controller ok
[+]token-cleaner-controller ok
[+]persistentvolume-expander-controller ok
[+]replicationcontroller-controller ok
[+]serviceaccount-controller ok
[+]horizontal-pod-autoscaler-controller ok
[+]certificatesigningrequest-approving-controller ok
[+]clusterrole-aggregation-controller ok
[+]root-ca-certificate-publisher-controller ok
[+]taint-eviction-controller ok
healthz check passed
```
However, I am not sure there is a meaningful check behind it.
According to https://github.com/kubernetes/kubernetes/blob/948afe5ca072329a73c8e79ed5938717a5cb3d21/cmd/kube-controller-manager/app/controllermanager.go#L779-L795, `controllerhealthz.NamedPingChecker` is used, and I don't see an implementation of `controller.HealthCheckable` in the source code. It always returns `nil`, i.e. it reports no issues and does not perform a meaningful check.
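The gap between a ping check and an informer-aware check can be sketched with a minimal healthz aggregator (a language-agnostic Python sketch; the real implementation would live in Go next to `controllerhealthz`, and all names below are illustrative):

```python
def healthz(checks):
    """Aggregate named checks into a /healthz?verbose style report.

    Each check is a zero-argument callable returning None on success or
    an error string on failure."""
    lines, healthy = [], True
    for name, check in checks.items():
        error = check()
        if error is None:
            lines.append(f"[+]{name} ok")
        else:
            healthy = False
            lines.append(f"[-]{name} failed: {error}")
    lines.append("healthz check passed" if healthy else "healthz check failed")
    return healthy, "\n".join(lines)

# What NamedPingChecker effectively does today: always report healthy.
ping_check = lambda: None

# What an informer-aware check could do instead: consult HasSynced.
def informer_sync_check(has_synced):
    return lambda: None if has_synced() else "informer not synced"

healthy, report = healthz({
    "cronjob-controller": ping_check,
    "informer-sync": informer_sync_check(lambda: False),
})
```

With a check like `informer_sync_check` wired in, the out-of-sync condition from the repro below would surface in the report instead of only in the logs.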
### What did you expect to happen?
Issues with the underlying informers to be reflected in `/healthz`/`/readyz` endpoints for controller components like kube-controller-manager, kube-proxy, etc.
### How can we reproduce it (as minimally and precisely as possible)?
1. Start kube-controller-manager with low qps and burst settings to reproduce informers out of sync issues.
```
--kube-api-qps=1
--kube-api-burst=1
```
2. Create a Deployment and scale it to 100 replicas
3. Make sure the kube-controller-manager logs are full with client-side throttling errors
```
I1021 13:24:04.441128 1 request.go:700] Waited for 25.992938012s due to client-side throttling, not priority and fairness, request: POST:https://kube-apiserver/api/v1/namespaces/default/pods
I1021 13:24:05.441182 1 request.go:700] Waited for 26.992997013s due to client-side throttling, not priority and fairness, request: POST:https://kube-apiserver/api/v1/namespaces/default/pods
```
and the issue is not reflected in the `/healthz`/`/readyz` endpoint.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Server Version: v1.31.1
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,sig/apps,lifecycle/stale,needs-triage | low | Critical |
2,602,567,813 | PowerToys | [Peek] Add user-configurable support for previewing plaintext files | ### Description of the new feature / enhancement
Note: this is based on a prior discussion on #34824 .
Peek discriminates supported files based on their file extension, and currently supports previewing a variety of plain text files, including source code files, .txt and so on.
However, this is currently a hard-coded list and cannot be edited or overridden by the user. Attempting to preview an unsupported file results in the summary information being displayed, not its contents.
This new feature would add support for previewing plaintext files, either by auto-detecting them at the time of preview (if we can find a reliable method), and/or by giving the user the ability to edit a list of text file extensions via Settings.
This would not replace the current method of editing the Monaco JSON file to add new languages or adding extensions to existing languages. It is proposed that if the same file extension were present in both the Monaco supported list and the user's setting, that the Monaco setting would take precedence; this is because the Monaco could also include support for syntax highlighting and other formatting improvements over a simple text-only preview.
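The proposed precedence between Monaco's built-in list and the user's plain-text list can be sketched as follows (a Python sketch; the function and return values are illustrative, not PowerToys code):

```python
def resolve_previewer(extension, monaco_extensions, user_text_extensions):
    """Pick how Peek should preview a file, by extension.

    Monaco's built-in language list wins over the user's plain-text list,
    because Monaco may also bring syntax highlighting; unknown extensions
    fall back to the current summary view."""
    ext = extension.lower().lstrip(".")
    if ext in monaco_extensions:
        return "monaco"
    if ext in user_text_extensions:
        return "plaintext"
    return "summary"

# .py appears in both lists -> Monaco takes precedence; .ahk is only in
# the user's list -> plain-text preview; .bin is in neither -> summary.
```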
### Scenario when this would be used?
This would be useful because power users often want to quickly preview plaintext files from a wide variety of both popular and niche applications, and also their own filetypes with custom extensions. It is infeasible for us to support all these by adding them manually for each PowerToys release.
### Supporting information
Requests for supporting new plaintext files are relatively common in the issues forum, e.g.:
#35515 - support .ion files
#34483 - support .ahk files
#33811 - support .csv files without the Office extension
etc.
The previous attempt at integrating this functionality also received supportive comments, so I believe there is demand.
| Needs-Triage | low | Minor |
2,602,574,686 | godot | Minor issue: Debugger gives a poor error message for wrongly indented _: on match statements | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - AMD Radeon RX 6600 (Advanced Micro Devices, Inc.; 31.0.24033.1003) - Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (8 Threads)
### Issue description
On a match statement, if the catch all ```_:``` is mis-aligned, eg
```
match variable:
"Something":
pass
_:
pass
```
The debugger gives the error
```Expected statement, found "_" instead.```
It took me a while to work out what I'd done wrong because the original match statement was off screen.
I think the error message would be clearer if the error were something like
```Expected statement, found "_:" instead. Check indent.```
(note this says ```"_:"``` instead of just ```"_"```)
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | enhancement,discussion,topic:gdscript,topic:editor,usability | low | Critical |
2,602,586,658 | angular | `readonly` input signals are not highlighted by the language service | ### Which @angular/* package(s) are the source of the bug?
language-service
### Is this a regression?
No
### Description
The Angular language service doesn't highlight `readonly` signal inputs as inputs. It also doesn't do this for decorator `@Inputs` but making those `readonly` doesn't make a lot of sense anyway.
Example code:
```ts
import { Component, Input, input } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';
@Component({
selector: 'app-signal-test',
standalone: true,
template: '',
})
export class SignalTestComponent {
public readonly readonlySignalInput = input<number>();
public regularSignalInput = input<number>();
@Input()
public readonly readonlyDecoratorInput: number | undefined;
@Input()
public regularDecoratorInput: number | undefined;
}
@Component({
selector: 'app-root',
standalone: true,
imports: [SignalTestComponent],
template: `<app-signal-test
[readonlySignalInput]="123"
[regularSignalInput]="123"
[readonlyDecoratorInput]="123"
[regularDecoratorInput]="123"
/>`,
})
export class PlaygroundComponent {}
bootstrapApplication(PlaygroundComponent);
```
Result when holding ctrl and hovering the readonly signal input in vscode:

Compared to the regular signal input:

`readonly` signal inputs also do not find any usage of them when clicking "Find All References".
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular CLI: 18.2.9
Node: 20.14.0
Package Manager: npm 10.7.0
OS: linux x64
Angular: 18.2.8
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.9
@angular-devkit/build-angular 18.2.9
@angular-devkit/core 18.2.9
@angular-devkit/schematics 18.2.9
@angular/cdk 18.2.9
@angular/cli 18.2.9
@schematics/angular 18.2.9
rxjs 7.8.1
typescript 5.4.5
zone.js 0.14.10
```
### Anything else?
_No response_ | area: language-service | low | Critical |
2,602,587,073 | pytorch | CUDA Binary dependency chain is wrong, leading to bad binary packaging | tl;dr: the version of FindCUDAToolkit that we use [here](https://github.com/pytorch/pytorch/blob/8f3efb8797b7a2dbd958bf625374985793ed5035/cmake/Modules/FindCUDAToolkit.cmake) is old enough that we are missing quite a few cuda lib from it and the dependency between cuda libs is not accurate.
Loosely related, our script that updates the RPATH (the relative path used to find .so this depends on) within our .so to always look for the cuda installed within the pip package being shipped is broken and does not contain the appropriate entries for the newly added nvjitlink library.
This is the root cause of several user issues like https://github.com/pytorch/pytorch/issues/134929 and https://github.com/pytorch/pytorch/issues/131312 as far as I can tell.
And I can also observe it locally: running `ldd` on the libtorch_cuda.so that is shipped with the PyTorch 2.5 binary on PyPI, I get entries like:
```
libcurand.so.10 => /usr/local/cuda/lib64/libcurand.so.10
libcublas.so.12 => /home/albandes/local/pytorch/3.10_release_binary_env/lib/python3.10/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.12
```
These mismatched libraries being picked up lead to arbitrary issues. The most common case is when the binary built for CUDA 12.4 is installed on a machine with a global CUDA install older than 12.4. nvjitlink has been added as a dependency of libcusparse, but not to our RPATH, leading to the newer libcusparse being loaded with the global (old) nvjitlink.
How to fix this?
The most important fix and checks are:
- Fix the binary wheel script we use to generate binaries to properly add libnvjitlink to the appropriate RPATH. Making sure we load it from the installed python package and not from somewhere else.
- Add appropriate CI/Smoke test that ensures: each nvidia-* dependency we have has an appropriate RPATH entry in the appropriate .so (as checked with `readelf`).
- (BE) Add a global CUDA install to our smoke test machines. Use `ldd` on the generated .so to ensure each library is loaded from the right place. We could even make the global cuda install a fixed old version to reflect a lot of user setups.
- (BE) Clean up our RPATH script to only populate it on the relevant binaries. As of today all our .so that I tested have the full rpath, even things like _C.so which is the base cpython module with nothing else in it.
- (might be needed for 1?) Upgrade FindCUDATookit.cmake now that we use cmake>3.18, we should be able to use the latest from cmake.
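The RPATH check from the second bullet can be sketched as a small parser over `readelf -d` output (the helper names are hypothetical; the sample line mirrors GNU readelf's RUNPATH formatting):

```python
import re

def parse_runpath(readelf_dynamic_output):
    """Extract the RPATH/RUNPATH search directories from the output of
    `readelf -d <lib.so>`, in order."""
    match = re.search(
        r"\((?:RUNPATH|RPATH)\)\s+Library r(?:un)?path:\s*\[([^\]]*)\]",
        readelf_dynamic_output)
    return match.group(1).split(":") if match else []

def missing_rpath_entries(readelf_dynamic_output, required_entries):
    """Smoke-test helper: return the required entries (e.g. the relative
    nvidia-* wheel lib dirs) absent from the library's RPATH/RUNPATH."""
    present = set(parse_runpath(readelf_dynamic_output))
    return [entry for entry in required_entries if entry not in present]

sample = (" 0x000000000000001d (RUNPATH)            Library runpath: "
          "[$ORIGIN:$ORIGIN/../../nvidia/cublas/lib]")
# The check we would want in CI: nvjitlink must be reachable from the wheel.
missing = missing_rpath_entries(sample, ["$ORIGIN/../../nvidia/nvjitlink/lib"])
```

In CI this would run over the real `readelf -d` output for each shipped .so, failing the build whenever a required nvidia-* entry is missing.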
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim | module: binaries,module: cuda,triaged | low | Critical |
2,602,604,876 | godot | Editor crash when updating NVIDIA Drivers: GPU process exited unexpectedly | ### Tested versions
- v4.4.dev.gh [44fa55234] (taken from [GitHub checks artifact](https://github.com/godotengine/godot/pull/83863/checks))
- v4.4.dev.custom_build [44fa55234]
- v4.4.dev.custom_build of af77100e394dcaca609b15bef815ed17475e51ed
- v4.3.stable.official [77dcf97d8]
- v4.0.stable.official [92bee43ad]
NVIDIA Driver versions:
- 32.0.15.6094
- 31.0.15.3598
### System information
Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 980 Ti (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i7-13700K (24 threads)
### Issue description
I was working in VS Code on GDScript changes and suddenly both of my monitors went black. It seems that my NVIDIA graphics drivers automatically updated in the background.
Here is the crash log from Godot:
[graphics_device_crash_godot_log-44fa55234.txt](https://github.com/user-attachments/files/17462773/graphics_device_crash_godot_log-44fa55234.txt)
And here's a highlight of the last messages:
```
...
at: RenderingDevice::draw_list_bind_uniform_set (servers\rendering\rendering_device.cpp:4176)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_bind_render_pipeline (servers\rendering\rendering_device.cpp:4081)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_set_push_constant (servers\rendering\rendering_device.cpp:4291)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_bind_vertex_array (servers\rendering\rendering_device.cpp:4214)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_bind_index_array (servers\rendering\rendering_device.cpp:4247)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_draw (servers\rendering\rendering_device.cpp:4313)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_bind_uniform_set (servers\rendering\rendering_device.cpp:4176)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_bind_render_pipeline (servers\rendering\rendering_device.cpp:4081)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_set_push_constant (servers\rendering\rendering_device.cpp:4291)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_bind_index_array (servers\rendering\rendering_device.cpp:4247)
ERROR: Parameter "dl" is null.
at: RenderingDevice::draw_list_draw (servers\rendering\rendering_device.cpp:4313)
ERROR: Immediate draw list is already inactive.
at: (servers\rendering\rendering_device.cpp:4526)
ERROR: Last known breadcrumb: BLIT_PASS
at: RenderingDeviceDriverVulkan::print_lost_device_info (drivers\vulkan\rendering_device_driver_vulkan.cpp:5121)
ERROR: VK_EXT_device_fault not available.
at: RenderingDeviceDriverVulkan::on_device_lost (drivers\vulkan\rendering_device_driver_vulkan.cpp:4987)
ERROR: Vulkan device was lost.
at: (drivers\vulkan\rendering_device_driver_vulkan.cpp:2457)
================================================================
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (44fa552343722bb048e2d7c6d3661174a95a8a3c)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] RenderingDeviceDriverVulkan::command_queue_execute_and_present (C:\Personal\Godot\godot_src\drivers\vulkan\rendering_device_driver_vulkan.cpp:2457)
[1] RenderingDevice::_submit_transfer_worker (C:\Personal\Godot\godot_src\servers\rendering\rendering_device.cpp:5133)
[2] RenderingDevice::_submit_transfer_workers (C:\Personal\Godot\godot_src\servers\rendering\rendering_device.cpp:5231)
[3] RenderingDevice::_end_frame (C:\Personal\Godot\godot_src\servers\rendering\rendering_device.cpp:5838)
[4] RenderingDevice::_flush_and_stall_for_all_frames (C:\Personal\Godot\godot_src\servers\rendering\rendering_device.cpp:5925)
[5] RenderingDevice::screen_prepare_for_drawing (C:\Personal\Godot\godot_src\servers\rendering\rendering_device.cpp:3722)
[6] RendererCompositorRD::blit_render_targets_to_screen (C:\Personal\Godot\godot_src\servers\rendering\renderer_rd\renderer_compositor_rd.cpp:37)
[7] RendererViewport::draw_viewports (C:\Personal\Godot\godot_src\servers\rendering\renderer_viewport.cpp:880)
[8] RenderingServerDefault::_draw (C:\Personal\Godot\godot_src\servers\rendering\rendering_server_default.cpp:88)
[9] RenderingServerDefault::draw (C:\Personal\Godot\godot_src\servers\rendering\rendering_server_default.cpp:417)
[10] Main::iteration (C:\Personal\Godot\godot_src\main\main.cpp:4410)
[11] OS_Windows::run (C:\Personal\Godot\godot_src\platform\windows\os_windows.cpp:1772)
[12] widechar_main (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:181)
[13] _main (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:206)
[14] main (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:220)
[15] WinMain (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:234)
[16] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[17] <couldn't map PC to fn name>
-- END OF BACKTRACE --
================================================================
```
Only the very first time that I had this crash, I also got this message after the backtrace log ([full log](https://github.com/user-attachments/files/17461882/graphics_device_crash_godot_log.txt)):
```
[12992:1021/091820.075:ERROR:gpu_process_host.cc(1001)] GPU process exited unexpectedly: exit_code=34
```
Looking at my Event Viewer, I see two of these messages:
```
NVIDIA OpenGL Driver:
The NVIDIA OpenGL driver has encountered
an out of memory error. This application might
behave inconsistently and fail.
(pid=11312 godot.windows.editor.dev.x86_64.exe 64bit)
```
...followed by this:
```
Application Error:
Faulting application name: godot.windows.editor.dev.x86_64.exe, version: 4.4.0.0, time stamp: 0x67125cbd
Faulting module name: godot.windows.editor.dev.x86_64.exe, version: 4.4.0.0, time stamp: 0x67125cbd
Exception code: 0x80000003
Fault offset: 0x000000000182e1e5
Faulting process id: 0x0x2C30
Faulting application start time: 0x0x1DB23BA05E6F76F
Faulting application path: C:\Personal\Godot\godot_src\bin\godot.windows.editor.dev.x86_64.exe
Faulting module path: C:\Personal\Godot\godot_src\bin\godot.windows.editor.dev.x86_64.exe
Report Id: 863ff4d3-d526-40b6-a3a5-8eb18f3d1ed1
Faulting package full name:
Faulting package-relative application ID:
```
...and this:
```
OVRServiceLauncher:
The operation completed successfully.
```
Before and after these events there are also a number of these messages that are likely(?) unrelated:
```
DeviceSetupManager:
Metadata staging failed, result=0x80070490 for container '{0E519C86-B75E-5F27-940D-77196A1CDB86}'
```
The OVRServiceLauncher is my Oculus Rift/Meta Quest related app that I have not used in many months, so I expect this is unrelated, but it shows that other services were affected by this NVIDIA driver crash.
I see these notices in the Event Viewer for Device Manager - NVIDIA GeForce GTX 980 Ti, likely related to the auto-update:
```
Information:
Driver Management has concluded the process to add Service nvlddmkm for Device Instance ID PCI\VEN_10DE&DEV_17C8&SUBSYS_32331462&REV_A1\4&256A0AA8&0&0008 with the following status: 0.
```
```
Information:
Driver Management has concluded the process to add Service NVDisplay.ContainerLocalSystem for Device Instance ID PCI\VEN_10DE&DEV_17C8&SUBSYS_32331462&REV_A1\4&256A0AA8&0&0008 with the following status: 0.
```
When using older versions of the editor, it doesn't crash, but instead hangs while spamming these messages:
```
ERROR: Condition "!dl" is true.
at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7118)
ERROR: Condition "!dl" is true.
at: draw_list_bind_uniform_set (drivers/vulkan/rendering_device_vulkan.cpp:7192)
ERROR: Condition "!dl" is true.
at: draw_list_set_push_constant (drivers/vulkan/rendering_device_vulkan.cpp:7299)
ERROR: Condition "!dl" is true.
at: draw_list_bind_index_array (drivers/vulkan/rendering_device_vulkan.cpp:7265)
ERROR: Condition "!dl" is true.
at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7317)
ERROR: Condition "!E" is true. Returning: TEXTURE_SAMPLES_1
at: framebuffer_format_get_texture_samples (drivers/vulkan/rendering_device_vulkan.cpp:4116)
ERROR: Mismatch fragment shader output mask (1) and framebuffer color output mask (0) when binding both in render pipeline.
at: (drivers/vulkan/rendering_device_vulkan.cpp:6014)
ERROR: Condition "pipeline.is_null()" is true. Returning: RID()
at: _generate_version (servers/rendering/renderer_rd/pipeline_cache_rd.cpp:61)
ERROR: Condition "!dl" is true.
at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7118)
ERROR: Condition "!dl" is true.
at: draw_list_bind_uniform_set (drivers/vulkan/rendering_device_vulkan.cpp:7192)
ERROR: Condition "!dl" is true.
at: draw_list_set_push_constant (drivers/vulkan/rendering_device_vulkan.cpp:7299)
ERROR: Condition "!dl" is true.
at: draw_list_bind_index_array (drivers/vulkan/rendering_device_vulkan.cpp:7265)
ERROR: Condition "!dl" is true.
at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7317)
ERROR: Condition "!E" is true. Returning: TEXTURE_SAMPLES_1
at: framebuffer_format_get_texture_samples (drivers/vulkan/rendering_device_vulkan.cpp:4116)
ERROR: Mismatch fragment shader output mask (1) and framebuffer color output mask (0) when binding both in render pipeline.
at: (drivers/vulkan/rendering_device_vulkan.cpp:6014)
ERROR: Condition "pipeline.is_null()" is true. Returning: RID()
at: _generate_version (servers/rendering/renderer_rd/pipeline_cache_rd.cpp:61)
ERROR: Condition "!dl" is true.
at: draw_list_bind_render_pipeline (drivers/vulkan/rendering_device_vulkan.cpp:7118)
ERROR: Condition "!dl" is true.
at: draw_list_bind_uniform_set (drivers/vulkan/rendering_device_vulkan.cpp:7192)
ERROR: Condition "!dl" is true.
at: draw_list_set_push_constant (drivers/vulkan/rendering_device_vulkan.cpp:7299)
ERROR: Condition "!dl" is true.
at: draw_list_bind_index_array (drivers/vulkan/rendering_device_vulkan.cpp:7265)
ERROR: Condition "!dl" is true.
at: draw_list_draw (drivers/vulkan/rendering_device_vulkan.cpp:7317)
ERROR: Condition "!E" is true. Returning: TEXTURE_SAMPLES_1
at: framebuffer_format_get_texture_samples (drivers/vulkan/rendering_device_vulkan.cpp:4116)
```
### Steps to reproduce
- Create a new empty project with the Forward+ renderer
- Roll back or Update NVIDIA graphics drivers through the Windows Device Manager via Windows Update (or have Windows Update do this automatically in the background)
### Minimal reproduction project (MRP)
- Any new empty project with the Forward+ renderer
- The Project Manager, before opening a project (Does not give stack trace for some reason...)
- Untested: other renderers | bug,platform:windows,topic:rendering,needs testing | low | Critical |
2,602,698,295 | vscode | gif/video support in chat | It will be great when we have image support in core. It would be nice to also have gif/video support. For example, a dev could attach a gif to the release notes and generate alt text for it, or a user could attach a screen recording and ask for debugging help. | feature-request,panel-chat | low | Critical |
2,602,728,560 | pytorch | `torch.special.zeta` ignores `nan` input when `other=-inf`. | ### 🐛 Describe the bug
When input is `nan`, the output should also be `nan` regardless of the value of `other`.
```python
import numpy as np
import torch
input = torch.tensor(np.nan, dtype=torch.float64)
other = torch.tensor(1, dtype=torch.float64)
out = torch.special.zeta(input, other)
print(out) # tensor(nan, dtype=torch.float64)
input = torch.tensor(np.nan, dtype=torch.float64)
other = torch.tensor(-np.inf, dtype=torch.float64)
out = torch.special.zeta(input, other)
print(out) # tensor(inf, dtype=torch.float64) actual, nan expected
```
BTW, I found that the output will be `nan` when `input = torch.tensor(np.inf, dtype=torch.float64)` i.e. positive inf.
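Independent of torch, the nan-propagation rule being requested here can be sketched as a plain-Python wrapper around any zeta implementation (`broken_zeta` below is a hypothetical stand-in mimicking the reported behavior, not the real kernel):

```python
import math

def zeta_with_nan_propagation(zeta_impl, s, q):
    """Wrap a zeta implementation so nan inputs always propagate to the output."""
    if math.isnan(s) or math.isnan(q):
        return math.nan
    return zeta_impl(s, q)

# Hypothetical stand-in that, like the report, returns inf for other=-inf:
broken_zeta = lambda s, q: math.inf

print(zeta_with_nan_propagation(broken_zeta, math.nan, -math.inf))  # nan
```

This mirrors the usual IEEE-754 convention that the actual kernel would need to implement before evaluating the `other=-inf` branch.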
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 16.0.1 (https://github.com/llvm/llvm-project.git cd89023f797900e4492da58b7bed36f702120011)
CMake version: version 3.23.2
Libc version: glibc-2.34
Python version: 3.9.18 (main, Aug 23 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 11.2.67
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA TITAN RTX
GPU 2: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU(s) scaling MHz: 92%
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @mruberry @kshitij12345 @albanD | triaged,module: NaNs and Infs,module: special,module: python frontend,module: edge cases | low | Critical |
2,602,736,778 | deno | ci: use setup-deno@v2 | Replace all instances of [`setup-deno@v1`](https://github.com/search?q=repo%3Adenoland%2Fdeno%20setup-deno%40v1&type=code) with `setup-deno@v2` and make sure the scripts work with Deno 2 (they mostly should).
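As a sketch, the swap in each workflow file would look roughly like this (the `deno-version` values are assumptions — adjust to whatever each script actually needs):

```yaml
# before
- uses: denoland/setup-deno@v1
  with:
    deno-version: v1.x

# after
- uses: denoland/setup-deno@v2
  with:
    deno-version: v2.x
```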
| good first issue,build | low | Minor |
2,602,789,882 | pytorch | get_model_state_dict failed after FSDP_model.to(dtype) | ### 🐛 Describe the bug
```python
# fsdp_model with mixed precision (fp32 parameters)
fsdp_model.to(torch.bfloat16)
save_policy = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(fsdp_model, StateDictType.FULL_STATE_DICT, save_policy):
return fsdp_model.state_dict()
```
The above code fails with error:
```
[rank0]: Traceback (most recent call last):
......
[rank0]: File "xxx", line 118, in xxx
[rank0]: return model.state_dict()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2199, in state_dict
[rank0]: hook(self, prefix, keep_vars)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 782, in _pre_state_dict_hook
[rank0]: _pre_state_dict_hook_fn[fsdp_state._state_dict_type](
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 303, in _full_pre_state_dict_hook
[rank0]: _common_unshard_pre_state_dict_hook(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 170, in _common_unshard_pre_state_dict_hook
[rank0]: _enter_unshard_params_ctx(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 134, in _enter_unshard_params_ctx
[rank0]: fsdp_state._unshard_params_ctx[module].__enter__()
[rank0]: File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
[rank0]: return next(self.gen)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_unshard_param_utils.py", line 198, in _unshard_fsdp_state_params
[rank0]: _unshard(state, handle, computation_stream, computation_stream)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_runtime_utils.py", line 301, in _unshard
[rank0]: handle.unshard()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_flat_param.py", line 1312, in unshard
[rank0]: padded_unsharded_flat_param = self._all_gather_flat_param(unsharded_flat_param)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_flat_param.py", line 1403, in _all_gather_flat_param
[rank0]: dist.all_gather_into_tensor(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 83, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 3410, in all_gather_into_tensor
[rank0]: work = group._allgather_base(output_tensor, input_tensor, opts)
[rank0]: TypeError: output tensor must have the same type as input tensor
```
### Versions
torch 2.4+
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang | oncall: distributed,triaged,module: fsdp | medium | Critical |
2,602,827,367 | langchain | AzureSearch. Error when program terminates | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
#Failing code:
import os
#loading environment variables from .env file
from dotenv import load_dotenv
load_dotenv()
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings
from azure.search.documents.indexes.models import (
ScoringProfile,
SearchableField,
SearchField,
SearchFieldDataType,
SimpleField,
TextWeights,
)
# This module is responsible for integration with Azure Search and uses Langchain framework for this
# It contains following functions:
# search - search for similar documents in Azure Search. return top 5 results
# ingest - gets as parameters a list of documents(chunks) and metadata per document and ingests them into Azure Search
# Azure Search configuration
AZURE_SEARCH_SERVICE_ENDPOINT = os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT")
AZURE_SEARCH_API_KEY = os.getenv("AZURE_SEARCH_API_KEY")
AZURE_SEARCH_INDEX_NAME = os.getenv("AZURE_SEARCH_INDEX_NAME")
# Azure OpenAI configuration
AZURE_OPENAI_KEY = os.getenv("AZURE_OPENAI_KEY")
AZURE_OPENAI_DEPLOYMENT = os.getenv("AZURE_OPENAI_DEPLOYMENT")
AZURE_OPENAI_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_API_VERSION = os.getenv("AZURE_OPENAI_API_VERSION")
# initialize AzureOpenAIEmbeddings
embeddings: AzureOpenAIEmbeddings = \
AzureOpenAIEmbeddings(azure_deployment=AZURE_OPENAI_DEPLOYMENT,
openai_api_version=AZURE_OPENAI_API_VERSION,
azure_endpoint=AZURE_OPENAI_ENDPOINT,
api_key=AZURE_OPENAI_KEY)
#define search index custom schema
fields = [
SimpleField(
name="chunk_id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SimpleField(
name="parent_id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="chunk",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="text_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(embeddings.embed_query("Text")),
vector_search_profile_name="myHnswProfile",
),
# Additional field to store the title
SearchableField(
name="title",
type=SearchFieldDataType.String,
searchable=True,
),
]
#create Langchain AzureSearch object
vector_search: AzureSearch = \
AzureSearch(azure_search_endpoint=AZURE_SEARCH_SERVICE_ENDPOINT,
azure_search_key=AZURE_SEARCH_API_KEY,
index_name=AZURE_SEARCH_INDEX_NAME,
embedding_function=embeddings.embed_query,
# Configure max retries for the Azure client
additional_search_client_options={"retry_total": 3},
fields=fields,
)
# ingest - gets as parameters a list of documents(chunks) and metadata per document and ingests them into Azure Search
#TODO - implement async version of ingest
def ingest(documents: list, metadata):
#check the input is valid list and non empty if not return exception
if not isinstance(documents, list) or not documents:
raise ValueError("Input must be a non-empty list")
if not isinstance(metadata, list) or not metadata:
raise ValueError("Metadata must be a non-empty list")
if len(documents) != len(metadata):
raise ValueError("Documents and metadata must be of the same length")
# Ingest documents into Azure Search
vector_search.add_documents(documents, metadata)
def search(query: str, search_type='similarity', top_k=5):
#check the input is valid string and non empty if not raise exception
if not isinstance(query, str) or not query:
raise ValueError("Search query must be a non-empty string")
# Search for similar documents
docs = vector_search.similarity_search(query=query, k=top_k, search_type=search_type)
return docs[0].page_content
docs = search("What is Microsoft's Fabric?", search_type='hybrid', top_k=5)
### Error Message and Stack Trace (if applicable)
Exception ignored in: <function AzureSearch.__del__ at 0x123c86020>
Traceback (most recent call last):
File "/Users/vladfeigin/myprojects/dai-demos/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py", line 393, in __del__
File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/events.py", line 765, in get_event_loop_policy
File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/events.py", line 758, in _init_event_loop_policy
ImportError: sys.meta_path is None, Python is likely shutting down
### Description
Running an AzureSearch hybrid search. The program executes properly but fails on termination.
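This class of failure — cleanup in `__del__` running while the interpreter is shutting down, after `sys.meta_path` has been cleared — is independent of LangChain. A common mitigation sketch (hypothetical `ClientWrapper`, not LangChain API) is to register explicit cleanup with `atexit` so it runs before interpreter teardown:

```python
import atexit

class ClientWrapper:
    """Hypothetical wrapper: close resources via atexit instead of relying on __del__."""

    def __init__(self):
        self._closed = False
        # atexit callbacks run before the import machinery is torn down
        atexit.register(self.close)

    def close(self):
        if not self._closed:
            self._closed = True
            # real teardown (e.g. closing async search clients) would go here

    def __del__(self):
        # __del__ becomes a harmless fallback; atexit already did the work
        self.close()

w = ClientWrapper()
w.close()
```

Applied to this bug, the same idea would mean explicitly closing or deleting the `AzureSearch` object before the program exits, rather than leaving it to garbage collection at shutdown.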
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
> Python Version: 3.11.10 (main, Sep 7 2024, 01:03:31) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.16
> langsmith: 0.1.136
> langchain_openai: 0.1.23
> langchain_text_splitters: 0.2.4
> langchainhub: 0.1.21
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.44.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.34
> tenacity: 8.5.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20240907
> typing-extensions: 4.12.2 | Ɑ: vector store | low | Critical |
2,602,829,646 | pytorch | os.environ modification is not thread safe | ### 🐛 Describe the bug
Modifications to os.environ affect a process global data structure (the environment struct), and are therefore not thread safe.
Here is a quick and dirty grep for environ modification, excluding results from torch/_testing.py. Some of these are OK to ignore as they are top-level, I haven't carefully audited them yet.
```
$ git grep "os.environ.*\s=\s" torch
torch/__init__.py: os.environ["PATH"] = ";".join(dll_paths + [os.environ["PATH"]])
torch/__init__.py: os.environ["DISABLE_CUPTI_LAZY_REINIT"] = "1"
torch/__init__.py: os.environ["TEARDOWN_CUPTI"] = "0"
torch/_inductor/autotune_process.py: os.environ[CUDA_VISIBLE_DEVICES] = str(device)
torch/_inductor/autotune_process.py: os.environ[CUDA_VISIBLE_DEVICES] = current
torch/_inductor/codecache.py: os.environ["_TORCHINDUCTOR_PYOBJECT_TENSOR_DATA_PTR"] = str(
torch/_inductor/cpp_builder.py: os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
torch/_inductor/cpp_builder.py: os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
torch/_inductor/cpp_builder.py: os.environ["CUDA_HOME"] = build_paths.sdk_home
torch/_inductor/cpp_builder.py: os.environ["CUDA_HOME"] = build_paths.sdk_home
torch/_inductor/fx_passes/numeric_utils.py:os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
torch/_inductor/runtime/cache_dir_utils.py: os.environ["TORCH_COMPILE_CACHE_DIR"] = cache_dir
torch/_inductor/runtime/compile_tasks.py: os.environ["TRITON_PTXAS_PATH"] = ptxas_path
torch/_inductor/runtime/triton_heuristics.py: os.environ["TRITON_CACHE_DIR"] = os.path.join(
torch/backends/xeon/run_cpu.py: os.environ["LD_PRELOAD"] = os.pathsep.join(
torch/backends/xeon/run_cpu.py: os.environ[env_name] = env_value
torch/backends/xeon/run_cpu.py: os.environ["LD_PRELOAD"] = ":".join(lst_valid)
torch/backends/xeon/run_cpu.py: os.environ["LD_PRELOAD"] = ""
torch/cuda/__init__.py: os.environ["CUDA_MODULE_LOADING"] = "LAZY"
torch/cuda/__init__.py: >>> os.environ['CUDA_PROFILE'] = '1'
torch/distributed/elastic/multiprocessing/api.py: os.environ[k] = v
torch/distributed/run.py: os.environ["OMP_NUM_THREADS"] = str(omp_num_threads)
torch/profiler/profiler.py: os.environ["DISABLE_CUPTI_LAZY_REINIT"] = "1"
torch/profiler/profiler.py: os.environ["TEARDOWN_CUPTI"] = "0"
```
Also, using mock.patch to modify the environ in a scoped way:
```
$ git grep "mock.patch.*os.environ" torch
torch/_inductor/utils.py: with mock.patch.dict(os.environ, {"TRITON_CACHE_DIR": triton_cache_dir}):
torch/testing/_internal/logging_utils.py: settings_patch = unittest.mock.patch.dict(os.environ, {"TORCH_LOGS": settings})
torch/testing/_internal/logging_utils.py: unittest.mock.patch.dict(os.environ, {"___LOG_TESTING": ""})
```
also these two harder to find sites:
```
torch/_inductor/aoti_eager.py: os.environ,
torch/_inductor/utils.py: os.environ, {"TORCHINDUCTOR_CACHE_DIR": inductor_cache_dir}
```
Environment variables may be modified for several reasons:
* To modify the environment variable that will be passed on subprocess invocation. In this case, the environment setting can be eliminated in favor of explicitly plumbing the environment variable changes to the subprocess invocations (possibly using TLS, if manual plumbing is not feasible).
* To affect the behavior of preexisting code (potentially in a library PyTorch links against) which only consults the environment variable. If the envvar read happens in third party code (e.g., as in KMP) is no easy way to solve this, we can only patch environment on a global basis.
## Case study: `TORCHINDUCTOR_CACHE_DIR` patching
torch/_inductor/aoti_eager.py patches the environment to temporarily modify the cache directory location for AOTInductor lowering, because the inductor cache dir is currently configurable only via an environment variable. If we introduce TLS that can be patched to modify this value temporarily, we must also take care to ensure that AOTI isn't relying on this value being propagated to subprocess calls to the compiler. From this example, it is also clear that the local patching of the variable must *override* the environment variable.
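A minimal sketch of the thread/context-local override idea (names here are hypothetical, not an existing torch API): reads go through a helper that consults a `contextvars` overlay before falling back to the process environment, so scoped patches never mutate `os.environ`:

```python
import contextvars
import os
from contextlib import contextmanager

# Context-local overlay; tasks/threads that copy the context inherit it.
_env_overrides = contextvars.ContextVar("env_overrides", default={})

def get_config(name, default=None):
    """Read a config value, preferring scoped overrides over the real environment."""
    overrides = _env_overrides.get()
    if name in overrides:
        return overrides[name]
    return os.environ.get(name, default)

@contextmanager
def scoped_config(**overrides):
    """Temporarily override config values without touching os.environ."""
    merged = {**_env_overrides.get(), **overrides}
    token = _env_overrides.set(merged)
    try:
        yield
    finally:
        _env_overrides.reset(token)

with scoped_config(TORCHINDUCTOR_CACHE_DIR="/tmp/aoti-cache"):
    assert get_config("TORCHINDUCTOR_CACHE_DIR") == "/tmp/aoti-cache"
# outside the scope, reads fall back to the real environment
```

As the case study notes, this only covers in-process reads: any subprocess invocation (e.g. the compiler) would still need the merged values passed explicitly via its `env=` argument.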
### Versions
main
cc @albanD | triaged,module: multithreading | low | Critical |
2,602,850,446 | PowerToys | Mouse Without Borders: synchronize audio | ### Description of the new feature / enhancement
希望无界鼠标能支持同步声音选项
### Scenario when this would be used?
方便一个音频设备使用不同的pc
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,602,854,075 | flutter | Consider removing bot_update step in orchestrator when there is no work to do | In this build (https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20mac_clangd/2928/overview), it looks like there are no global tests or generators, so the bot_update step seems like it shouldn't be needed. | team-release | low | Minor |
2,602,860,027 | ollama | llama3.1 llama3.2 Chat Template Typo | ### What is the issue?
It seems there is a typo in the following sentence of the chat template:
"When you receive a tool call response, use the output to format an answer to the **orginal** user question."
llama3.1: [948af2743fc7](https://ollama.com/library/llama3.1/blobs/948af2743fc7)
llama3.2: [966de95ca8a6](https://ollama.com/library/llama3.2/blobs/966de95ca8a6)
The word "original" is misspelled as "orginal".
Although LLMs typically map the meaning of typos to the intended phrases, the effect on actual generation results is unknown.
Interestingly, the typo appears to have originated from the official llama3.1 model card example and has spread widely across web resources.
https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug | low | Minor |
2,602,868,871 | TypeScript | Can extends [any, any] but not generic which is exactly [any, any] (works till 5.3.3, breaks from 5.4.5) | ### 🔎 Search Terms
generic, tuple extends
### 🕗 Version & Regression Information
This code works in the playground with TypeScript version 5.3.3 but breaks since version 5.4.5 (versions in between are not available in the playground).
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.3#code/C4TwDgpgBAIghsOUC8UDacBcUCMAaKAI2wCYCA6SgY2wGY0BdBgWACg3RIoAVCAZ2DdwEHAB5uUCAA9gEAHYATPlDhyQjAHwoobKHp6SZ8pekrkAlnIBmEAE5QASodmLlGNQVUgGFSl8YMUAD8jlDYchAAbnYcwlCyAjjavAJCkGLwiBpsAPQ5+gB6QWyxXCmCwiTizsbK-j5QALJwfADWNa4qaoza7iCeagxaqLr6EtIuJmhmljb2ThO1TS2tvuT1gSFO4VExrJzQCcAkyfwVkFWZcNmseYVBQA
### 💻 Code
```ts
type Data = [a: 1, b: 2, ...c: 3[]]
type TestType1<T extends any[]> =
T extends [...infer R extends [any, any], ...any[]] ? R : never
type test1 = TestType1<Data>
// ^?
type TestType2<T extends any[], Mask extends any[] = [any, any]> =
T extends [...infer R extends Mask, ...any[]] ? R : never
type test2 = TestType2<Data>
// ^?
```
### 🙁 Actual behavior
When passing the `Mask` as a generic and trying to extend the tuple with it to extract the relevant part, the resulting tuple is well formed, but the item types are replaced with `any`.
### 🙂 Expected behavior
Expected the constraint on `infer` to work whether it is written explicitly or passed as a generic holding a type of the exact same shape; this used to be the case prior to TypeScript 5.4 but breaks since then.
### Additional information about the issue
_No response_ | Bug | low | Minor |
2,602,874,521 | go | net: TestDialerLocalAddr failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestDialerLocalAddr"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8733506969795613409)):
=== RUN TestDialerLocalAddr
dial_test.go:628: tcp [::]:0->127.0.0.1: got dial tcp [::]:0->127.0.0.1:65180: connect: connection timed out; want <nil>
--- FAIL: TestDialerLocalAddr (75.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,602,894,677 | neovim | image API |
# Problem
No API to load images as bytes and display them using e.g. the kitty image protocol.
# Expected behavior
- `vim.ui.img` (is `vim.ui` the right place for this? or `vim.os`, or ...?)
- Could shell out to `imagemagick` CLI. For reference, https://github.com/3rd/image.nvim uses FFI but that seems unnecessary.
- `show`
- `load`
# Related
- https://github.com/neovim/neovim/issues/24164
- https://github.com/neovim/neovim/issues/27119 | enhancement,api,provider,lua | high | Critical |
2,602,908,085 | react | Bug: multiple useTransition hooks in a page always activate the first one | <!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
React version: 18.0.0
## Steps To Reproduce
As below codesandbox link
1. Click StartTransition2 button
2. see console show 'isPending true; isPending false'
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
Link to code example: https://codesandbox.io/p/sandbox/usetransition-issue-68tfls?file=/src/App.js
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
## The current behavior
1. Click StartTransition2 button
2. see console show 'isPending true; isPending false'
## The expected behavior
1. Click StartTransition2 button
2. see console show 'isPending2 true; isPending2 false'
## Note
Actually, I'm not really sure whether it's a bug or my misunderstanding.
While hooks are executed in a function component in order (e.g. useState), and each state hook memorizes its own value, the useTransition hook does not seem to behave the same way. I found this in another project and simplified it into the codesandbox. The reason I use two is that I have two sections in the viewport relying on two sources (APIs) with different response speeds. I intend to let the first response's data show immediately while keeping the other in a loading state via useTransition.
| Status: Unconfirmed,Resolution: Stale | low | Critical |
2,602,915,490 | TypeScript | Formatting Intellisense For Better JS/TS Doc Integration | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
Does anyone know if it is possible to modify VSCode Intellisense for JSDoc and TSDoc tags? I have always been bothered by the way VSCode handles certain JS/TS doc tags and I just cannot handle it anymore.
As trivial as it might sound, when documenting custom `interface` and `type` definitions, I just want the `@property` tag to be formatted in the same manner as the `@param` tag. The following screen shots illustrate my issue:

_Notice the emdash `—` between the `@property` tags and the keys being described above?_

_See how it's placed after the `@param` tags and the respective keys?_
If this feature cannot be implemented, does anyone know of any extensions that already do this or which files need to be modified to make this change?
Any advice would be much appreciated. Cheers 🍻 | Suggestion,Experience Enhancement | low | Minor |
2,602,960,277 | go | x/net/http2: discrepancies in lost PING handling between Server and Transport | ### Go version
x/net v0.30.0
### Output of `go env` in your module/workspace:
```shell
Not important.
```
### What did you do?
I configured http2.Server with
* ReadIdleTimeout
* PingTimeout
* CountError
and caused PING to be lost.
### What did you see happen?
The newly added lost-PING handling in the server:
* only logs the lost ping in verbose mode while closing the underlying connection - IMO this should always be logged
* does not invoke CountError
```go
if sc.pingSent {
sc.vlogf("timeout waiting for PING response")
sc.conn.Close()
return
}
```
https://cs.opensource.google/go/x/net/+/refs/tags/v0.30.0:http2/server.go;l=1047
vs
```go
func (cc *ClientConn) closeForLostPing() {
err := errors.New("http2: client connection lost")
if f := cc.t.CountError; f != nil {
f("conn_close_lost_ping")
}
cc.closeForError(err)
}
```
https://cs.opensource.google/go/x/net/+/refs/tags/v0.30.0:http2/transport.go;l=1159
### What did you expect to see?
I expect to see
* `ErrorLog` called with lost ping message
* `CountError` called with `conn_close_lost_ping` | NeedsFix | low | Critical |
2,603,010,694 | deno | Deno LSP ignoring completeFunctionCalls setting | It appears that the Deno Language Server Protocol (LSP) is not respecting the typescript.suggest.completeFunctionCalls and javascript.suggest.completeFunctionCalls settings in the `.vscode/settings.json` file. Despite setting these options to false, the imports are always being completed with the full function call, which is not the desired behavior.
## Steps to Reproduce
1. Open a Deno project in Visual Studio Code.
2. Set the following options in the `.vscode/settings.json` file:
```json
{
"deno.enable": true,
"deno.lint": true,
"editor.formatOnSave": true,
"deno.codeLens.references": false,
"deno.codeLens.implementations": false,
"editor.defaultFormatter": "denoland.vscode-deno",
"javascript.suggest.completeFunctionCalls": false,
"typescript.suggest.completeFunctionCalls": false,
"editor.indentSize": 2,
"editor.insertSpaces": true,
"editor.detectIndentation": false,
"[typescript]": {
"editor.defaultFormatter": "denoland.vscode-deno"
},
"[html]": {
"editor.defaultFormatter": "denoland.vscode-deno"
}
}
```
3. Attempt to import a function from a module.




## Expected Behavior
The import should be added without the full function call, respecting the settings specified in the `.vscode/settings.json` file.
## Actual Behavior
The import is always completed with the full function call, ignoring the `typescript.suggest.completeFunctionCalls` and `javascript.suggest.completeFunctionCalls` settings.
## Environment
Deno: 2.0.2
| bug,needs info,lsp | low | Minor |
2,603,052,165 | go | cmd/cgo/internal/testcarchive: TestCompileWithoutShared failures | ```
#!watchflakes
default <- pkg == "cmd/cgo/internal/testcarchive" && test == "TestCompileWithoutShared"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8733462684540614305)):
=== RUN TestCompileWithoutShared
carchive_test.go:1094: [go build -buildmode=c-archive -gcflags=-shared=false -o libgo2.a ./libgo2]
# testcarchive/libgo2
/Users/swarming/.swarming/wocebm_48/ir/x/w/goroot/pkg/tool/darwin_amd64/link: running /Users/swarming/.swarming/wocebm_48/ir/cache/tools/15e204a/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ar failed: exit status 1
/Users/swarming/.swarming/wocebm_48/ir/cache/tools/15e204a/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: $WORK/b001/exe/a.out.a(000002.o) has no symbols
/Users/swarming/.swarming/wocebm_48/ir/cache/tools/15e204a/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: $WORK/b001/exe/a.out.a(000003.o) has no symbols
fatal error: /Users/swarming/.swarming/wocebm_48/ir/cache/tools/15e204a/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: can't write to output file (Device error)
/Users/swarming/.swarming/wocebm_48/ir/cache/tools/15e204a/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ar: internal ranlib command failed
carchive_test.go:1096: exit status 1
--- FAIL: TestCompileWithoutShared (4.55s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,603,078,308 | go | proposal: x/sys/windows: console input event union member types | ### Proposal Details
On Windows, reading console events can be achieved by either using [Console Virtual Terminal Sequences](https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences) similar to Unix terminals, or by using the [Windows Console API](https://learn.microsoft.com/en-us/windows/console/readconsoleinput) and [Buffer Events](https://learn.microsoft.com/en-us/windows/console/reading-input-buffer-events).
The former only supports basic key/mouse events (no release events or unambiguous keys). Because Windows consoles are not traditional TTYs and don't support `SIGWINCH` signals, we cannot listen for window resize events without polling the console.
Thus, it's more beneficial to work with the latter API when dealing with terminals and consoles on Windows. Using the Console API, we can listen for window resize events, and mouse/keyboard release and unambiguous events (such as <kbd>ctrl+i</kbd> vs <kbd>tab</kbd>).
The Windows Console API defines [input record](https://learn.microsoft.com/en-us/windows/console/input-record-str) events as a `union` type. And since Go doesn't support unions, we need a way to decode and access field members. The [proposed change](https://github.com/golang/sys/pull/228) ([CL 621496](https://go-review.googlesource.com/c/sys/+/621496)) is to use `encoding/binary` to decode the event into its respective type using member functions.
```c
typedef struct _INPUT_RECORD {
WORD EventType;
union {
KEY_EVENT_RECORD KeyEvent;
MOUSE_EVENT_RECORD MouseEvent;
WINDOW_BUFFER_SIZE_RECORD WindowBufferSizeEvent;
MENU_EVENT_RECORD MenuEvent;
FOCUS_EVENT_RECORD FocusEvent;
} Event;
} INPUT_RECORD;
```
Becomes
```go
type InputRecord struct {
EventType uint16
_ [2]byte
Event [16]byte
}
func (ir InputRecord) FocusEvent() FocusEventRecord {
return FocusEventRecord{SetFocus: ir.Event[0] > 0}
}
func (ir InputRecord) KeyEvent() KeyEventRecord {
return KeyEventRecord{
KeyDown: binary.LittleEndian.Uint32(ir.Event[0:4]) > 0,
RepeatCount: binary.LittleEndian.Uint16(ir.Event[4:6]),
VirtualKeyCode: binary.LittleEndian.Uint16(ir.Event[6:8]),
VirtualScanCode: binary.LittleEndian.Uint16(ir.Event[8:10]),
Char: rune(binary.LittleEndian.Uint16(ir.Event[10:12])),
ControlKeyState: binary.LittleEndian.Uint32(ir.Event[12:16]),
}
}
func (ir InputRecord) MouseEvent() MouseEventRecord {
return MouseEventRecord{
MousePosition: Coord{
X: int16(binary.LittleEndian.Uint16(ir.Event[0:2])),
Y: int16(binary.LittleEndian.Uint16(ir.Event[2:4])),
},
ButtonState: binary.LittleEndian.Uint32(ir.Event[4:8]),
ControlKeyState: binary.LittleEndian.Uint32(ir.Event[8:12]),
EventFlags: binary.LittleEndian.Uint32(ir.Event[12:16]),
}
}
func (ir InputRecord) WindowBufferSizeEvent() WindowBufferSizeRecord {
return WindowBufferSizeRecord{
Size: Coord{
X: int16(binary.LittleEndian.Uint16(ir.Event[0:2])),
Y: int16(binary.LittleEndian.Uint16(ir.Event[2:4])),
},
}
}
func (ir InputRecord) MenuEvent() MenuEventRecord {
return MenuEventRecord{
CommandID: binary.LittleEndian.Uint32(ir.Event[0:4]),
}
}
```
Discussed in https://github.com/golang/sys/pull/196
Related https://github.com/golang/sys/pull/227
Related https://github.com/golang/sys/pull/228 | OS-Windows,Proposal | low | Minor |
2,603,140,963 | react-native | `KeyboardAvoidingView` with translucent status bar leaves space at the bottom of the screen on Android | ### Description
When `android:windowTranslucentStatus` is set to `true`, `KeyboardAvoidingView` displays a white space at the bottom of the screen after the soft keyboard closes.
This is the same as https://github.com/facebook/react-native/issues/27526.
### Steps to reproduce
1. Build the reproducer for Android with New Arch enabled.
```Bash
cd ReproducerApp
yarn
yarn android
```
2. Start Metro.
```Bash
npx react-native start
```
3. Launch the app.
4. Tap the text to open the soft keyboard.
5. Close the soft keyboard.
### React Native Version
0.75.4
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: Linux 5.15 Linux Mint 21.3 (Virginia)
CPU: (6) x64 AMD Ryzen 5 3500X 6-Core Processor
Memory: 24.93 GB / 39.16 GB
Shell:
version: 5.1.16
path: /bin/bash
Binaries:
Node:
version: 20.15.1
path: ~/.nvm/versions/node/v20.15.1/bin/node
Yarn:
version: 3.6.4
path: ~/.nvm/versions/node/v20.15.1/bin/yarn
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v20.15.1/bin/npm
Watchman:
version: 20231008.002904.0
path: /usr/local/bin/watchman
SDKs:
Android SDK:
API Levels:
- "31"
- "33"
- "34"
- "35"
Build Tools:
- 30.0.3
- 33.0.1
- 34.0.0
- 35.0.0
System Images:
- android-34 | Intel x86_64 Atom
- android-34 | Google APIs Intel x86_64 Atom
Android NDK: Not Found
IDEs:
Android Studio: Not Found
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby: Not Found
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.4
wanted: 0.75.4
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://github.com/QichenZhu/reproducer-react-native-keyboard-avoiding-view-android
### Screenshots and Videos
https://github.com/user-attachments/assets/6eaf5656-9617-469c-bbf0-e461277a4612 | Platform: Android,Component: KeyboardAvoidingView,API: Keyboard | low | Major |
2,603,161,599 | kubernetes | Deletion of resources can fail due to etcd size limit | Steps:
- Create a resource that is just barely within the etcd size limit
- Delete the resource, which triggers an update to etcd to record the intent to delete the resource
- Because the update adds fields like `deletionTimestamp`, the size of the resource increases
- When the size exceeds the etcd size limit, the update-to-record-intent-to-delete fails to be recorded to etcd, causing the deletion operation to fail | sig/api-machinery,triage/accepted | low | Minor |
2,603,176,970 | flutter | [Android] : IntlBackslash is not recognized using ISO (102/105) layout keyboard | ### Steps to reproduce
https://en.wikipedia.org/wiki/Keyboard_layout

1. Prepare a 102/105-key ISO keyboard
2. Run the sample code on Android
3. Press `IntlBackslash`, the key between the left Shift and the letter key (the left orange key in the picture above)
### Expected results
The pressed key event should be `static const PhysicalKeyboardKey intlBackslash = PhysicalKeyboardKey(0x00070064);`
https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/services/keyboard_key.g.dart#L4350
### Actual results

I'm not sure if this is a bug, but on a 102/105-key ISO keyboard, there are keys (orange) that generate the same `Backslash` key event.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Keyboard Event Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const KeyboardEventPage(),
);
}
}
class KeyboardEventPage extends StatefulWidget {
const KeyboardEventPage({super.key});
@override
_KeyboardEventPageState createState() => _KeyboardEventPageState();
}
class _KeyboardEventPageState extends State<KeyboardEventPage> {
final List<String> _lastKeyEvents = [];
final FocusNode _focusNode = FocusNode();
@override
Widget build(BuildContext context) {
final child = Scaffold(
appBar: AppBar(
title: const Text('Keyboard Event Demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'Press any key:',
style: TextStyle(fontSize: 20),
),
const SizedBox(height: 20),
for (final keyEvent in _lastKeyEvents)
Text(
keyEvent,
style: const TextStyle(fontSize: 16),
),
],
),
),
);
return FocusScope(
autofocus: true,
child: Focus(
autofocus: true,
canRequestFocus: true,
focusNode: _focusNode,
onKeyEvent: (node, event) {
setState(() {
_lastKeyEvents.add(event.toString());
if (_lastKeyEvents.length > 10) {
_lastKeyEvents.removeAt(0);
}
});
return KeyEventResult.handled;
},
child: child,
),
);
}
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on Ubuntu 22.04.5 LTS 6.8.0-47-generic, locale en_US.UTF-8)
• Flutter version 3.24.3 on channel stable at /home/username/workspace/devenv/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (6 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /home/username/Android/Sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /opt/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 14.0.0-1ubuntu1.1
• cmake version 3.30.3
• ninja version 1.10.1
• pkg-config version 0.29.2
[✓] Android Studio (version 2024.1)
• Android Studio at /opt/android-studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.94.2)
• VS Code at /usr/share/code
• Flutter extension version 3.98.0
[!] Proxy Configuration
• HTTP_PROXY is set
! NO_PROXY is not set
[✓] Connected device (2 available)
• Linux (desktop) • linux • linux-x64 • Ubuntu 22.04.5 LTS 6.8.0-47-generic
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.100
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| a: text input,platform-android,platform-linux,P2,team-text-input,triaged-text-input | low | Critical |
2,603,206,470 | rust | headings in struct field documentation has confusing spacing | example:
```rust
struct Foo {
/// blah blah blah
///
/// # Heading
///
/// blah blah blah
a: u8,
b: u16,
}
```
there will be a large upper margin on the heading, making it visually unclear what the second paragraph corresponds to. | T-rustdoc,A-rustdoc-ui,T-rustdoc-frontend | low | Minor |
2,603,221,713 | ui | [bug]: Sidebar collapsible icon with size "lg" is not configured correctly | ### Describe the bug
When the sidebar is in icon mode with `lg`-sized sidebar buttons, something is not right with the width and padding: the labels show through a bit.
I also don't like that the icon mode has a fixed width; it should be more dynamic.
### Affected component/components
Sidebar
### How to reproduce
add `size="lg"` prop to sidebar buttons
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Sidebar block 07
Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.