| id (int64) | repo (string) | title (string) | body (string) | labels (string) | priority (string) | severity (string) |
|---|---|---|---|---|---|---|
2,645,431,222 | PowerToys | Add setting to expand menus or display tools in non-menu format | ### Description of the new feature / enhancement
Please add a configuration option similar to the one found in the Azure console. A setting under Appearance & behavior to have the menus expanded on startup, or to display the tools in alphabetical order, would be helpful. Honestly, the menu system makes things more confusing than looking for them in a flat list would be. An option to display items smaller than the default would be more helpful than sub-menus.
### Scenario when this would be used?
This would be helpful every time I open the settings console. As a "power user" I'm very used to looking for things by name and don't need a menu to get in the way. If you're really entrenched in having a menu system, why not let us power users decide how we want to arrange our tools? To me, the items under "Advanced" are no more advanced than many of the other tools. Having them in a submenu just makes them harder to find and wastes time.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,645,446,316 | godot | Joint3D, connecting to Physical Bones doesn't respect Skeleton modifications/animations when initializing offsets | ### Tested versions
Tested in Godot 4.3.stable
### System information
Linux Mint 22, i9-9900KF, RTX 3080-ti Driver: 550.120
### Issue description
All Joint3D nodes fail to set the correct initial transforms for node_a and node_b if either of those nodes is a PhysicalBone whose position was modified by skeleton modifications (for example, SkeletonIK3D or skeletal animations) before the joint is created. If the animation is set directly in the inspector as the scene's starting animation, the joint will be created correctly; but if the animation is changed after the start of the scene (yet before the Joint3D's creation), the joint will operate as if the bones were still in the scene's initial state. I discovered this when trying to join an object to a PhysicalBone3D that was part of a SkeletonIK3D chain.
### Steps to reproduce
Add any scene that contains a Skeleton3D.
Create Physical Bones on Skeleton.
Create a physics body to connect the physical bone to.
In Code, play an animation, and afterwards create a Joint3D, and configure it to connect to the physics body and a Physical Bone.
Simulate PhysicalBones. The joint will not respect the animation state, unless the model was already in that state in the inspector before playing. (Bug also occurs with SkeletonIK3D)
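The repro steps above might look roughly like this in code (node names, the animation name, and the joint type are assumptions for illustration, not taken from the MRP):

```gdscript
func _ready() -> void:
	# 1. Change bone poses before the joint exists.
	$AnimationPlayer.play("lift_arm")
	await get_tree().create_timer(0.5).timeout

	# 2. Create the joint after the pose change.
	var joint := PinJoint3D.new()
	add_child(joint)
	joint.node_a = joint.get_path_to($Body)
	joint.node_b = joint.get_path_to($Skeleton3D/PhysicalBone3D)

	# 3. Start simulation: the joint behaves as if the bones
	#    were still in the scene's initial pose.
	$Skeleton3D.physical_bones_start_simulation()
```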
### Minimal reproduction project (MRP)
[joint3dwithanimation.zip](https://github.com/user-attachments/files/17685402/joint3dwithanimation.zip)
| bug,topic:physics,topic:animation | low | Critical |
2,645,462,620 | next.js | use cache doesn't work properly with dynamic routes | ### Link to the code that reproduces this issue
https://github.com/cantemizyurek/next-js-dynamic-io-bug-report
### To Reproduce
Create a dynamic route `[id]`, put this code inside, and try to build the application. It throws an error that I think it should not be throwing.
It works as expected in dev mode.
```tsx
'use cache'
import { DisplayId } from './components'
import { Suspense } from 'react'
export default async function Page({
  ...props
}: {
  params: Promise<{ id: string }>
}) {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <DisplayId params={props.params} />
    </Suspense>
  )
}
```
### Current vs. Expected behavior
Expected: the build should not throw errors. Current: `next build` throws, while dev mode works as expected.
### Provide environment information
```bash
Operating System:
  Platform: darwin
  Arch: arm64
  Version: Darwin Kernel Version 24.1.0: Thu Oct 10 22:05:53 PDT 2024; root:xnu-11215.41.3~5/RELEASE_ARM64_T6030
  Available memory (MB): 36864
  Available CPU cores: 12
Binaries:
  Node: 22.11.0
  npm: 10.9.0
  Yarn: N/A
  pnpm: 9.6.0
Relevant Packages:
  next: 15.0.4-canary.3 // Latest available version is detected (15.0.4-canary.3).
  eslint-config-next: 15.0.4-canary.3
  react: 19.0.0-rc-66855b96-20241106
  react-dom: 19.0.0-rc-66855b96-20241106
  typescript: 5.6.3
Next.js Config:
  output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO
### Which stage(s) are affected? (Select all that apply)
next build (local), Vercel (Deployed)
### Additional context
_No response_ | bug,linear: next,dynamicIO | low | Critical |
2,645,468,835 | PowerToys | Create a FancyZones utility command line that supports locating and resizing an app .exe into a selected zone number. | ### Description of the new feature / enhancement
It would be very nice to be able to start an app .exe under the control of a FancyZones command line that locates and resizes the app's window into a selected zone number.
### Scenario when this would be used?
This could be added to a Cmd or PowerShell script to open a folder using explorer.exe, or to start another .exe such as cmd.exe or powershell.exe, forcing its window into the location and size of a selected zone number.
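To make the request concrete, a hypothetical invocation could look like the following. None of these flags exist in FancyZones today; `fancyzones-cli.exe` and every option here are invented purely to sketch the requested interface:

```bat
REM Hypothetical interface only -- fancyzones-cli.exe and all flags are invented:
fancyzones-cli.exe --launch "C:\Windows\explorer.exe" --args "D:\Projects" --zone 2
fancyzones-cli.exe --move --process powershell.exe --zone 1
```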
### Supporting information
It doesn't look like anyone else offers a fully functional solution. NirCmd fails to resize Explorer windows. | Needs-Triage | low | Minor |
2,645,496,603 | pytorch | Get `aot_autograd`'ed graph without `torch.compile` and freeze constants without Inductor context | ### 🚀 The feature, motivation and pitch
I am trying to implement eager mode of PT2E quantization on CPU. Currently, PT2E quantization on CPU is lowered to Inductor by `torch.compile`. The current workflow is:
- Quantize the model with `Quantizer`, `prepare_pt2e` and `convert_pt2e`. Now the model graph contains `quantize` and `dequantize`.
- Call `torch.compile` so that patterns like `dequantize - float32_op` and others are fused and lowered in Inductor.
- Constants are also folded in Inductor
For eager mode, the workflow is designed as:
- Quantize the model with `Quantizer`, `prepare_pt2e` and `convert_pt2e`. Now the model graph contains `quantize` and `dequantize`. (same as the Inductor mode)
- Get `aot_autograd`'ed graph without `torch.compile`
- Apply fusion passes in Inductor on the graph manually
- Apply constant folding (freezing) passes in Inductor manually
The problems I met are:
1. I cannot run `aot_autograd` on an ordinary graph module directly. The error is `RuntimeError: Graph output must be a (). This is so that we can avoid pytree processing of the outputs.`. See example code 1.
2. Even though we get the aot_autograd graph somehow, we are not able to do constant folding because all constant params become arguments of forward and the freezing path does not know which are constants. In Inductor, there is a context to keep track of the constants. But I don't have that context if I don't call `torch.compile`. See example code 2.
Is there a way to solve the problems without big changes in the PyTorch code? Thanks.
Example code 1:
```python
import torch
import torchvision
import copy
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch.ao.quantization.quantizer.x86_inductor_quantizer import X86InductorQuantizer
from torch._dynamo.backends.common import aot_autograd
def my_compiler(gm, example_inputs):
    print(gm)
    return gm

def pt2e_ptq(m, example_inputs):
    m = m.eval()
    exported_model = torch.export.export_for_training(m, example_inputs).module()
    quantizer = X86InductorQuantizer()
    quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())
    prepared_model = prepare_pt2e(exported_model, quantizer)
    _ = prepared_model(*example_inputs)
    converted_model = convert_pt2e(prepared_model)
    torch.ao.quantization.move_exported_model_to_eval(converted_model)
    with torch.no_grad():
        my_backend = aot_autograd(fw_compiler=my_compiler)
        optimized_model = my_backend(converted_model, example_inputs)  # !!! error here !!!
        optimized_model(*example_inputs)

if __name__ == "__main__":
    data = torch.randn(16, 3, 224, 224)
    model_fp = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
    pt2e_ptq(copy.deepcopy(model_fp), (data,))
```
Example code 2:
```python
import torch
import torchvision
import copy
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch.ao.quantization.quantizer.x86_inductor_quantizer import X86InductorQuantizer
from torch._dynamo.backends.common import aot_autograd
from functorch.compile import make_boxed_func
from torch._inductor.fx_passes.freezing_patterns import freezing_passes
def my_compiler(gm, example_inputs):
    freezing_passes(gm, example_inputs)  # !!! freezing passes are applied but constant_fold passes are not !!!
    print(gm)
    return make_boxed_func(gm.forward)

my_backend = aot_autograd(fw_compiler=my_compiler)

def pt2e_ptq(m, example_inputs):
    m = m.eval()
    exported_model = torch.export.export_for_training(m, example_inputs).module()
    quantizer = X86InductorQuantizer()
    quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())
    prepared_model = prepare_pt2e(exported_model, quantizer)
    _ = prepared_model(*example_inputs)
    converted_model = convert_pt2e(prepared_model)
    torch.ao.quantization.move_exported_model_to_eval(converted_model)
    with torch.no_grad():
        optimized_model = torch.compile(converted_model, backend=my_backend)
        _ = optimized_model(*example_inputs)

if __name__ == "__main__":
    data = torch.randn(16, 3, 224, 224)
    model_fp = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
    pt2e_ptq(copy.deepcopy(model_fp), (data,))
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | triaged,oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,645,506,727 | rust | Hang: type_alias_impl_trait | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```shell
rustc file.rs
```
```rust
#![feature(type_alias_impl_trait)]

trait Id {
    type Assoc;
}

impl<T> *const u8 {
    type Assoc = T;
}

type Ty
where
    Ty: Id<Assoc = Ty>,
= impl Sized;

fn define() -> Ty {}

fn main() {}
```
This file causes rustc to hang and is derived by mutation from #109387; #109387 itself terminates successfully.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (b91a3a056 2024-11-07)
binary: rustc
commit-hash: b91a3a05609a46f73d23e0995ae7ebb4a4f429a5
commit-date: 2024-11-07
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
<details><summary>Compiler Output (before the hang)</summary>
```
error[E0658]: inherent associated types are unstable
--> file.rs:6:5
|
6 | type Assoc = T;
| ^^^^^^^^^^^^^^^
|
= note: see issue #8995 <https://github.com/rust-lang/rust/issues/8995> for more information
= help: add `#![feature(inherent_associated_types)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-04; consider upgrading it if it is out of date
error[E0207]: the type parameter `T` is not constrained by the impl trait, self type, or predicates
--> file.rs:5:6
|
5 | impl<T> *const u8 {
| ^ unconstrained type parameter
error[E0277]: the trait bound `Ty: Id` is not satisfied
--> file.rs:8:1
|
8 | type Ty
| ^^^^^^^ the trait `Id` is not implemented for `Ty`
|
help: this trait has no implementations, consider adding one
--> file.rs:2:1
|
2 | trait Id {
| ^^^^^^^^
note: required by a bound on the type alias `Ty`
--> file.rs:10:9
|
10 | Ty: Id<Assoc = Ty>,
| ^^^^^^^^^^^^^^ required by this bound
error[E0277]: the trait bound `Ty: Id` is not satisfied
--> file.rs:10:5
|
10 | Ty: Id<Assoc = Ty>,
| ^^^^^^^^^^^^^^^^^^ the trait `Id` is not implemented for `Ty`
|
help: this trait has no implementations, consider adding one
--> file.rs:2:1
|
2 | trait Id {
| ^^^^^^^^
= help: see issue #48214
help: add `#![feature(trivial_bounds)]` to the crate attributes to enable
|
2 + #![feature(trivial_bounds)]
|
error[E0277]: the trait bound `Ty: Id` is not satisfied
--> file.rs:11:3
|
11 | = impl Sized;
| ^^^^^^^^^^ the trait `Id` is not implemented for `Ty`
|
help: this trait has no implementations, consider adding one
--> file.rs:2:1
|
2 | trait Id {
| ^^^^^^^^
note: required by a bound on the type alias `Ty`
--> file.rs:10:9
|
10 | Ty: Id<Assoc = Ty>,
| ^^^^^^^^^^^^^^ required by this bound
error[E0277]: the trait bound `Ty: Id` is not satisfied
--> file.rs:12:16
|
12 | fn define() -> Ty {}
| ^^ the trait `Id` is not implemented for `Ty`
|
help: this trait has no implementations, consider adding one
--> file.rs:2:1
|
2 | trait Id {
| ^^^^^^^^
note: required by a bound on the type alias `Ty`
--> file.rs:10:9
|
10 | Ty: Id<Assoc = Ty>,
| ^^^^^^^^^^^^^^ required by this bound
error[E0277]: the trait bound `Ty: Id` is not satisfied
--> file.rs:12:19
|
12 | fn define() -> Ty {}
| ^^ the trait `Id` is not implemented for `Ty`
|
help: this trait has no implementations, consider adding one
--> file.rs:2:1
|
2 | trait Id {
| ^^^^^^^^
note: required by a bound on the type alias `Ty`
--> file.rs:10:9
|
10 | Ty: Id<Assoc = Ty>,
| ^^^^^^^^^^^^^^ required by this bound
error[E0390]: cannot define inherent `impl` for primitive types
--> file.rs:5:1
|
5 | impl<T> *const u8 {
| ^^^^^^^^^^^^^^^^^
|
= help: consider using an extension trait instead
```
</details>
| T-compiler,I-compilemem,C-bug,I-hang,F-type_alias_impl_trait | low | Critical |
2,645,521,462 | PowerToys | in powertoys-run, windows search plugin, the order of the results is not consistent with system-wide search | ### Description of the new feature / enhancement
I'm trying to customize the order of the results returned by the Windows Search plugin. Everything (the file search engine), for instance, has an option for that; in the PowerToys Run Windows Search plugin I can't find one. After scratching my head, I thought there might be a global Windows Search result-order configuration, quite probably backed by the Windows registry. The `WinSetView` tool makes this apparent: I can set my preferred order there for search-result views and it works. For instance, when I search using the standard Windows 10 Explorer search bar, the configured order is honored and respected. But in PowerToys Run it is not; maybe the order is hard-coded. It would be very desirable for both orders to be coherent (Explorer search bar and PowerToys Run), or at least to have a configuration option, if hard-coded sorting really is what is used now. Thanks for your time!

### Scenario when this would be used?
I have a huge library of documents and need to take control of the order of windows search results.
### Supporting information
See the attached screenshot in the description. It shows that a file I wanted to open had something to do with the word "twisted" in its name; Windows Search, configured to sort by latest-access date, leads me there with laser-like accuracy, but PowerToys Run makes me scroll to the last place, out of view. | Needs-Triage | low | Minor |
2,645,536,519 | ollama | How to configure Ollama in Termux to use the local GPU and CPU acceleration model calculation? | How to configure Ollama in Termux to use the local GPU and CPU acceleration model calculation? | feature request | low | Minor |
2,645,572,240 | yt-dlp | Support more audio formats in EmbedThumbnail postprocessor | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Currently the EmbedThumbnail postprocessor only supports `mp3, mkv/mka, ogg/opus/flac, m4a/mp4/m4v/mov` formats. Since mutagen is already a dependency, and mutagen can add cover art to WAVE and AIFF files in the form of ID3 tags (it's not part of the WAVE standard, but many audio players will recognize these tags without issue), it would be nice if the EmbedThumbnail postprocessor also worked on these formats.
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement,triage,core:post-processor | low | Critical |
2,645,588,581 | PowerToys | Chinese translation error - "密码" in the Welcome page should be "隐私设置" | ### Microsoft PowerToys version
0.86.0
### Utility with translation issue
Welcome / PowerToys Tour window
### 🌐 Language affected
Chinese(zh-cn)
### ❌ Actual phrase(s)


## Error text:
“感谢你帮助改进 PowerToys! 你可以随时从 更改此项 密码。”
### ✔️ Expected phrase(s)
## Suggested text:
“感谢你帮助改进 PowerToys! 你可以随时从 隐私设置 更改此项。”
### ℹ️ Why is the current translation wrong
The current translation is incorrect because the word "密码" (password) does not match the intended meaning of "Privacy" in this context. The phrase "你可以随时从 更改此项 密码。" implies changing a password, which is misleading for users.
Correct translation suggestion:
Replace "ๅฏ็ " with "้็ง่ฎพ็ฝฎ" (Privacy settings) to accurately convey that this option relates to privacy settings, not a password. | Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation | low | Critical |
2,645,637,071 | deno | `Temporal.Duration.toLocaleString()` does not return a proper human-readable string | Version: Deno 2.0.5
Expected:
```
> Temporal.Duration.from('P1DT6H30M').toLocaleString()
"1 day 6 hours 30 minutes"
```
Actual:
```
> Temporal.Duration.from('P1DT6H30M').toLocaleString()
"P1DT6H30M"
```
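For reference, the expected human-readable form is what `Intl.DurationFormat` (the TC39 Stage 3 proposal that `Temporal.Duration.prototype.toLocaleString` is specified to delegate to) produces. This sketch avoids Temporal entirely and only assumes a runtime where `Intl.DurationFormat` is implemented:

```javascript
// Sketch: format a duration-like record with Intl.DurationFormat
// (guarded, since not every runtime ships the proposal yet).
if (typeof Intl.DurationFormat === "function") {
  const df = new Intl.DurationFormat("en", { style: "long" });
  console.log(df.format({ days: 1, hours: 6, minutes: 30 }));
}
```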
See also:
<https://tc39.es/proposal-temporal/docs/duration.html#toLocaleString> | bug,upstream,temporal | low | Minor |
2,645,667,781 | ui | [bug]: Incident bug validateDOMNesting(...): <button> cannot appear as a descendant of <button> | ### Describe the bug
I'm creating a tree component with checkboxes, like the image below.

That component references: https://github.com/shadcn-ui/ui/issues/355#issuecomment-1703767574
I noticed that if I put the **Checkbox** inside a **Tree item**, I get the error `validateDOMNesting(...): <button> cannot appear as a descendant of <button>`, because Checkbox is implemented on top of Button, which causes the error.
```
<div className='flex items-center gap-2'>
  <Checkbox />
  <span className='text-sm truncate'>{item.name}</span>
</div>
```
### Affected component/components
CheckBox
### How to reproduce
1. Implement component : https://github.com/shadcn-ui/ui/issues/355#issuecomment-1703767574
2. Add check box inside TreeItem
3. Render Tree component
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Warning: validateDOMNesting(...): <button> cannot appear as a descendant of
```
### System Info
```bash
- React 18
- Chrome Version 130.0.6723.117 (Official Build) (64-bit)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,645,766,128 | vscode | can't select text from markdown cells in preview | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Make text selectable in markdown cells in Jupyter notebooks.
Sometimes you can find code examples in markdown cells, and when you want to copy them, you need to start editing the cell and search for that part.
2,645,772,072 | godot | Dynamic `GDScript` cannot resolve `class_name` in script | ### Tested versions
- Reproducible in Godot v4.3.stable
### System information
Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6109) - Intel(R) Core(TM) i7-10700KF CPU @ 3.80GHz (16 Threads)
### Issue description
When using the `GDScript` class to dynamically create a script, the `class_name` fails to resolve within the same script and results in the following error:
> Parser Error: Identifier not found: Foo
Attempting to save the script to a file and loading it using `load()` at run-time also doesn't work. Loading the same script statically when created in the editor works.
This is an issue when developing a code generator for Godot as testing it is impossible without dynamically creating classes. It is also a problem when creating GDScript-based mods.
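As a stop-gap (not a fix), the self-reference can be avoided by treating the script resource itself as the class and dropping `class_name` and the `Foo` return type; this sketch assumes losing the named global type is acceptable:

```gdscript
var gdscript := GDScript.new()
# No class_name and no self-referential return type, so nothing
# needs to resolve "Foo" at parse time.
gdscript.source_code = """
extends Resource

func describe() -> String:
	return "instance of a dynamic script"
"""
gdscript.reload()
var foo: Resource = gdscript.new()  # the script resource stands in for "Foo"
print(foo.describe())
```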
### Steps to reproduce
```gdscript
func _ready() -> void:
	var gdscript := GDScript.new()
	gdscript.source_code = "class_name Foo extends Resource
static func get_foo() -> Foo:
	return Foo.new()
"
	gdscript.reload()
	var foo: Variant = gdscript.new()
	print(foo)
### Minimal reproduction project (MRP)
[project.zip](https://github.com/user-attachments/files/17686435/project.zip) | bug,discussion,topic:core,topic:gdscript | low | Critical |
2,645,773,488 | stable-diffusion-webui | [Bug]: Couldnt install Torch | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
Runtime error: couldn't install Torch
### Steps to reproduce the problem
1. Run webui.bat
### What should have happened?
Torch should have downloaded
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
I do not know how to use the --dump-sysinfo command-line argument to generate the file.
### Console logs
```Shell
'"D:\FC Diff\stable-diffusion-webui\venv\Scripts\activate.bat"' is not recognized as an internal or external command,
operable program or batch file.
venv "D:\FC Diff\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
D:\FC Diff\stable-diffusion-webui\venv\Scripts\python.exe: No module named pip
Traceback (most recent call last):
File "D:\FC Diff\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "D:\FC Diff\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "D:\FC Diff\stable-diffusion-webui\modules\launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "D:\FC Diff\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "D:\FC Diff\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
Press any key to continue . . .
```
### Additional information
I was trying with Python 3.10.6 but read that switching to 3.10.10 resolved the issue. I have tried both Python versions and get the same error.
Other solutions have been to delete the venv folder and rerun webui.bat. When I attempt this, I get an error saying webui cannot create a venv folder.
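For what it's worth, a manual recovery sketch (an assumption, not a verified fix): the log shows the venv exists but has no pip ("No module named pip"), so recreating the venv by hand with pip bootstrapped, using whichever Python 3.10 is on PATH, may let webui.bat proceed:

```bat
REM Sketch: rebuild the venv so pip actually exists inside it, then rerun webui.bat.
cd /d "D:\FC Diff\stable-diffusion-webui"
rmdir /s /q venv
python -m venv venv
venv\Scripts\python.exe -m ensurepip --upgrade
venv\Scripts\python.exe -m pip --version
```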
Note: I have an AMD Ryzen 7 5700G with Radeon Graphics | bug-report | low | Critical |
2,645,786,568 | yt-dlp | Tele5, not mediathek | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Germany
### Example URLs
https://tele5.de/the-cold-light-of-day
### Provide a description that is worded well enough to be understood
If the link is https://tele5.de/mediathek/star-trek-deep-space-nine/tosk-der-gejagte, the download is possible, but if "mediathek" is missing, as in the URL given above, the download is not possible.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-o', '%(title)s.%(ext)s', '--cookies', 'cookies.txt', '--no-playlist', '-N', '8', 'https://tele5.de/the-cold-light-of-day']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-48-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: brotli-1.1.0, certifi-2023.11.17, requests-2.31.0, sqlite3-3.45.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.11.04/SHA2-256SUMS
Current version: stable@2024.10.22 from yt-dlp/yt-dlp
Latest version: stable@2024.11.04 from yt-dlp/yt-dlp
Current Build Hash: a87033b5686c5512adcd26023c6ac4a56957d736f4653033e36901dbcc8bf7bb
Updating to stable@2024.11.04 from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp from https://github.com/yt-dlp/yt-dlp/releases/download/2024.11.04/yt-dlp
Updated yt-dlp to stable@2024.11.04 from yt-dlp/yt-dlp
[debug] Restarting: python3 /home/richard/.local/bin/yt-dlp -vU -o '%(title)s.%(ext)s' --cookies cookies.txt --no-playlist -N 8 https://tele5.de/the-cold-light-of-day
[debug] Command-line config: ['-vU', '-o', '%(title)s.%(ext)s', '--cookies', 'cookies.txt', '--no-playlist', '-N', '8', 'https://tele5.de/the-cold-light-of-day']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.11.04 from yt-dlp/yt-dlp [197d0b03b] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-48-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: brotli-1.1.0, certifi-2023.11.17, requests-2.31.0, sqlite3-3.45.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.11.04 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.11.04 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://tele5.de/the-cold-light-of-day
[generic] the-cold-light-of-day: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] the-cold-light-of-day: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://tele5.de/the-cold-light-of-day
Traceback (most recent call last):
File "/home/richard/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/richard/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1760, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/home/richard/.local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/richard/.local/bin/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://tele5.de/the-cold-light-of-day
```
| site-request,triage | low | Critical |
2,645,786,962 | opencv | build opencv from source fail in imgcodecs module | ### System Information
OpenCV version: branch 4.x, 4.10.0-372-g3fddea2ade
OS: macos 15.1
Compiler: Apple clang version 16.0.0 (clang-1600.0.26.4) Target: x86_64-apple-darwin24.1.0
### Detailed description
Compilation fails:
```
[ 82%] Building CXX object modules/imgcodecs/CMakeFiles/opencv_imgcodecs.dir/src/apple_conversions.mm.o
In file included from /Users/zhaozg/work/extra/opencv/modules/imgcodecs/src/apple_conversions.mm:5:
In file included from /Users/zhaozg/work/extra/opencv/modules/imgcodecs/src/apple_conversions.h:6:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:28:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vImage.framework/Headers/vImage.h:296:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vImage.framework/Headers/vImage_CVUtilities.h:48:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVPixelBuffer.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVImageBuffer.h:29:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/ApplicationServices.framework/Headers/ApplicationServices.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Headers/CoreServices.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/AE.framework/Headers/AE.h:20:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:208:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/HFSVolumes.h:25:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/hfs/hfs_format.h:807:2: error: unknown type name 'uuid_string_t'; did you mean 'io_string_t'?
807 | uuid_string_t ext_jnl_uuid;
| ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/device/device_types.h:89:33: note: 'io_string_t' declared here
89 | typedef char io_string_t[512];
| ^
In file included from /Users/zhaozg/work/extra/opencv/modules/imgcodecs/src/apple_conversions.mm:5:
In file included from /Users/zhaozg/work/extra/opencv/modules/imgcodecs/src/apple_conversions.h:6:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:28:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vImage.framework/Headers/vImage.h:296:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vImage.framework/Headers/vImage_CVUtilities.h:48:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVPixelBuffer.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVImageBuffer.h:29:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/ApplicationServices.framework/Headers/ApplicationServices.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Headers/CoreServices.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/AE.framework/Headers/AE.h:20:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:208:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/HFSVolumes.h:25:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/hfs/hfs_format.h:809:20: error: unknown type name 'uuid_string_t'; did you mean 'io_string_t'?
809 | char reserved[JIB_RESERVED_SIZE];
| ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/hfs/hfs_format.h:800:61: note: expanded from macro 'JIB_RESERVED_SIZE'
800 | #define JIB_RESERVED_SIZE ((32*sizeof(u_int32_t)) - sizeof(uuid_string_t) - 48)
| ^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/device/device_types.h:89:33: note: 'io_string_t' declared here
89 | typedef char io_string_t[512];
| ^
In file included from /Users/zhaozg/work/extra/opencv/modules/imgcodecs/src/apple_conversions.mm:5:
In file included from /Users/zhaozg/work/extra/opencv/modules/imgcodecs/src/apple_conversions.h:6:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:28:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vImage.framework/Headers/vImage.h:296:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Frameworks/vImage.framework/Headers/vImage_CVUtilities.h:48:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVPixelBuffer.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVImageBuffer.h:29:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/ApplicationServices.framework/Headers/ApplicationServices.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Headers/CoreServices.h:23:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/AE.framework/Headers/AE.h:20:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:208:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/HFSVolumes.h:25:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/hfs/hfs_format.h:809:20: error: array is too large (18446744073709551184 elements)
809 | char reserved[JIB_RESERVED_SIZE];
| ^~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/hfs/hfs_format.h:800:28: note: expanded from macro 'JIB_RESERVED_SIZE'
800 | #define JIB_RESERVED_SIZE ((32*sizeof(u_int32_t)) - sizeof(uuid_string_t) - 48)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 errors generated.
make[2]: *** [modules/imgcodecs/CMakeFiles/opencv_imgcodecs.dir/src/apple_conversions.mm.o] Error 1
make[1]: *** [modules/imgcodecs/CMakeFiles/opencv_imgcodecs.dir/all] Error 2
make: *** [all] Error 2
โฏ
```
### Steps to reproduce
```sh
# active python3 venv
#
source .local/venv/bin/activate
mkdir -p build && cd build
#OPENCV_IMGCODECS="-DBUILD_opencv_imgcodecs=OFF -DWITH_IMGCODEC_HDR=OFF -DWITH_IMGCODEC_SUNRASTER=OFF -DWITH_IMGCODEC_PXM=OFF -DWITH_IMGCODEC_PFM=OFF"
cmake .. -DCMAKE_BUILD_TYPE=Release \
-DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF \
-DCMAKE_INSTALL_PREFIX=/usr/local/opt/opencv \
-DBUILD_SHARED_LIBS=ON \
-DBUILD_DOCS=OFF -DBUILD_EXAMPLES=OFF -DBUILD_PACKAGE=OFF \
-DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF \
-DBUILD_JAVA=OFF -DBUILD_OBJC=OFF -DBUILD_FAT_JAVA_LIB=OFF -DBUILD_KOTLIN_EXTENSIONS=OFF \
-DWITH_IPP=OFF $OPENCV_IMGCODECS
make
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [x] There is reproducer code and related data files (videos, images, onnx, etc) | bug,platform: ios/osx | low | Critical |
2,645,887,225 | vscode | Mac link server terminal: The window crashed unexpectedly (reason: "crashed", code: "5") | The window crashed unexpectedly (reason: "crashed", code: "5") | info-needed | medium | Critical |
2,645,892,579 | rust | Official binaries for `wasm32-unknown-unknown` (and potentially other WASM platforms?) contain code for the wrong architecture | The `compiler-builtins` crate compiles C code for WASM platforms since https://github.com/rust-lang/compiler-builtins/pull/566. This works if the C compiler is Clang, as it passes the appropriate `-target`. However, in a GCC build environment this means that the `cc` crate will end up silently compiling code for the wrong architecture entirely. This means that, for example, the `compiler-builtins` shipping for `wasm32-unknown-unknown` via Rustup contains object files like the following:
```
45c91108d938afe8-cmpti2.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), with debug_info, not stripped
```
I suppose that the build for these needs to arrange for Clang to be present, or perhaps even specified explicitly in the `target.*.{cc,cxx,linker}` settings.
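As an illustration, a bootstrap `config.toml` override along these lines might pin the compilers for the target (the paths and the `rust-lld` choice are assumptions, not a verified fix):

```toml
# Hypothetical sketch: point the wasm target at Clang so the C parts of
# compiler-builtins are built with the right --target. Paths are examples.
[target.wasm32-unknown-unknown]
cc = "/usr/bin/clang"
cxx = "/usr/bin/clang++"
linker = "rust-lld"
```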
(Was redirected here from https://github.com/rust-lang/compiler-builtins/issues/732.)
| T-bootstrap,O-wasm,C-bug | low | Critical |
2,645,933,547 | rust | Incorrect expected type for associated type with `specialization` | The following code:
```rust
#![feature(specialization)]
#[derive(Clone, Copy, Debug)]
struct Cons<Item, Tail>(pub Item, pub Tail);
#[derive(Clone, Copy, Debug)]
struct Nil;
trait GetInternal<Item> {
type Item;
fn get(self) -> Self::Item;
}
impl<Item, Other, Tail> GetInternal<Item> for Cons<Other, Tail>
where
Tail: GetInternal<Item>,
{
default type Item = Tail::Item;
fn get(self) -> Self::Item {
self.1.get()
}
}
```
Results in the following error:
```
Compiling playground v0.0.1 (/playground)
warning: the feature `specialization` is incomplete and may not be safe to use and/or cause compiler crashes
--> src/lib.rs:1:12
|
1 | #![feature(specialization)]
| ^^^^^^^^^^^^^^
|
= note: see issue #31844 <https://github.com/rust-lang/rust/issues/31844> for more information
= help: consider using `min_specialization` instead, which is more stable and complete
= note: `#[warn(incomplete_features)]` on by default
error[E0308]: mismatched types
--> src/lib.rs:22:9
|
15 | impl<Item, Other, Tail> GetInternal<Item> for Cons<Other, Tail>
| ---- found this type parameter
...
21 | fn get(self) -> Self::Item {
| ---------- expected `<Cons<Other, Tail> as GetInternal<Item>>::Item` because of return type
22 | self.1.get()
| ^^^^^^^^^^^^ expected `Cons<Other, Tail>`, found type parameter `Tail`
|
= note: expected associated type `<Cons<Other, Tail> as GetInternal<Item>>::Item`
found associated type `<Tail as GetInternal<Item>>::Item`
= note: an associated type was expected, but a different one was found
For more information about this error, try `rustc --explain E0308`.
warning: `playground` (lib) generated 1 warning
error: could not compile `playground` (lib) due to 1 previous error; 1 warning emitted
```
However, the expected associated type should be `<Tail as GetInternal<Item>>::Item`, not `<Cons<Other, Tail> as GetInternal<Item>>::Item`. The error disappears if specialization is removed.
The error occurs with `min_specialization` as well, but `min_specialization` also gives an error about trying to specialize an associated type, so I assume `min_specialization` is not supposed to support specializing associated types.
P.S. This example was pared down from a larger example, so the reason for specialization is no longer present.
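For comparison, here is a minimal non-specialized sketch of the same recursion. It uses hypothetical `Here`/`There` index markers (as in common HList crates) instead of `default type`; those names are not from the original code. This version compiles and behaves as expected:

```rust
// Non-specialized sketch of the same recursion, using hypothetical index
// markers `Here`/`There` instead of `default type Item`.
#[derive(Clone, Copy, Debug)]
struct Cons<Item, Tail>(pub Item, pub Tail);
#[derive(Clone, Copy, Debug)]
struct Nil;

struct Here;        // "the head is the item we want"
struct There<T>(T); // "keep looking in the tail"

trait GetInternal<Index> {
    type Item;
    fn get(self) -> Self::Item;
}

// Base case: the head matches.
impl<Item, Tail> GetInternal<Here> for Cons<Item, Tail> {
    type Item = Item;
    fn get(self) -> Item {
        self.0
    }
}

// Recursive case: delegate to the tail, mirroring the specialized impl.
impl<Other, Tail, TailIndex> GetInternal<There<TailIndex>> for Cons<Other, Tail>
where
    Tail: GetInternal<TailIndex>,
{
    type Item = Tail::Item;
    fn get(self) -> Self::Item {
        self.1.get()
    }
}

fn main() {
    let list = Cons(1u32, Cons("two", Nil));
    let head: u32 = GetInternal::<Here>::get(list);
    let second: &str = GetInternal::<There<Here>>::get(list);
    assert_eq!(head, 1);
    assert_eq!(second, "two");
}
```

The index markers make each impl apply to a disjoint set of types, so no specialization (and no `default type`) is needed.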
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (59cec72a5 2024-11-08)
binary: rustc
commit-hash: 59cec72a57af178767a7b8e7f624b06cc50f1087
commit-date: 2024-11-08
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
``` | A-specialization,C-discussion | low | Critical |
2,645,936,885 | puppeteer | [Feature]: Map chrome extension frame into puppeteer frame | ### Feature description
I don't think there is an easy way to find a Puppeteer frame from a Chrome extension frame. Example:

```js
const currentChromeFrame = await chrome.webNavigation.getFrame({ tabId, frameId });
const puppeteerFrame = page.frames().find((x) => x.url() === currentChromeFrame.url);
```

This example finds the frame by URL, but what if we have multiple iframes with the same URL?
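One possible workaround, sketched below, is to disambiguate same-URL frames by comparing the whole chain of ancestor URLs rather than a single `url()`. `frameChain` and `sameChain` are hypothetical helpers; only `Frame.url()` and `Frame.parentFrame()` are real Puppeteer APIs:

```javascript
// Sketch: disambiguate same-URL frames by comparing the chain of ancestor
// URLs from the root frame down to the frame in question.
function frameChain(frame) {
  const urls = [];
  for (let f = frame; f; f = f.parentFrame()) urls.unshift(f.url());
  return urls;
}

function sameChain(a, b) {
  return a.length === b.length && a.every((url, i) => url === b[i]);
}

// Usage against a live page (untested sketch; `extensionChain` would be built
// on the extension side by walking parentFrameId with
// chrome.webNavigation.getFrame):
//   const target = page.frames().find((f) => sameChain(frameChain(f), extensionChain));
```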
Is there a way to map it by id? The frameId of a Chrome extension frame is a number (e.g. 1234), whereas Puppeteer's is a string. | feature,P3 | low | Minor |
2,645,938,628 | godot | tooltip changes position of control nodes in canvas layer | ### Tested versions
reproducible in:
- 4.4Dev-4,
- 4.4Dev-5,
- 4.4Dev-6,
- 4.4Dev-7
- 4.4Beta-1
not reproducible in
- 4.4.Dev-3
### System information
Linux 6.10.11-2-MANJARO #1 SMP PREEMPT_DYNAMIC Thu Sep 26 03:28:26 UTC 2024 x86_64 GNU/Linux
### Issue description
When the tooltip appears, it moves all Control nodes to the top left, but only if they are children of a CanvasLayer.
It should only show the tooltip and leave the positions of the Control nodes alone.
### Steps to reproduce
Just run the project and hover the mouse over the "tab window"; the window's position will change.
### Minimal reproduction project (MRP)
[mrp-tooltip.zip](https://github.com/user-attachments/files/17686932/mrp-tooltip.zip) | bug,topic:gui | low | Minor |
2,645,938,631 | PowerToys | Bug with arc run | ### Microsoft PowerToys version
0.86.0
### Installation method
WinGet
### Running as admin
None
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
I was just using PowerToys Run when I came across a "Something went wrong" error. Logs: [2024-11-09.txt](https://github.com/user-attachments/files/17686935/2024-11-09.txt)
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,645,989,294 | tensorflow | Bug: `Compiling upb/upb.c failed` due to `Clang` Version Mismatch in Tensorflow's Docker Build Image | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.18
### Custom code
No
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS
### Mobile device
_No response_
### Python version
3.9
### Bazel version
6.5.0
### GCC/compiler version
11.4.0
### CUDA/cuDNN version
CUDA 12.5.1 / cuDNN 9.3.0
### GPU model and memory
NVIDIA RTX 3090 24GB DDR6
### Current behavior?
I expected the `tensorflow-gpu` wheel to compile successfully. However, I received the error detailed below. I believe the cause is that the `Clang` version installed in the Docker image (`18.1.8`) is incorrect: [according to the documentation](https://www.tensorflow.org/install/source#gpu), the required `Clang` version for `tf2.18` is `17.0.6`. I pulled [`tensorflow/build:2.18-python3.9`](https://hub.docker.com/layers/tensorflow/build/2.18-python3.9/images/sha256-843f48fe24727cdef4d76ae2724edc8385b2b2b44e3e4da0752e84b9ca142a81?context=explore) from DockerHub.
Do I need to manually roll back to `Clang 17.0.6` within the container, or should I be pulling a different image?
```bash
ERROR: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/upb/BUILD:57:11: Compiling upb/upb.c failed: (Exit 1): clang failed: error executing command (from target @upb//:upb) /usr/lib/llvm-18/bin/clang -MD -MF bazel-out/k8-opt/bin/external/upb/_objs/upb/upb.pic.d '-frandom-seed=bazel-out/k8-opt/bin/external/upb/_objs/upb/upb.pic.o' '-DBAZEL_CURRENT_REPOSITORY="upb"' -iquote ... (remaining 44 arguments skipped)
external/upb/upb/upb.c:192:10: error: defining a type within 'offsetof' is a Clang extension [-Werror,-Wgnu-offsetof-extensions]
192 | n &= ~(upb_alignof(upb_arena) - 1);
| ^~~~~~~~~~~~~~~~~~~~~~
external/upb/upb/upb.c:183:37: note: expanded from macro 'upb_alignof'
183 | #define upb_alignof(type) offsetof (struct { char c; type member; }, member)
| ^~~~~~
/usr/lib/llvm-18/lib/clang/18/include/__stddef_offsetof.h:16:43: note: expanded from macro 'offsetof'
16 | #define offsetof(t, d) __builtin_offsetof(t, d)
| ^
1 error generated.
Target //tensorflow/tools/pip_package:wheel failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 524.930s, Critical Path: 5.67s
INFO: 1437 processes: 820 internal, 617 local.
FAILED: Build did NOT complete successfully
```
### Standalone code to reproduce the issue
```shell
# Pull the docker image to build tf.
docker pull tensorflow/build:2.18-python3.9
# Clone the tensorflow repo.
git clone https://github.com/tensorflow/tensorflow.git
# Change to the tensorflow directory.
cd tensorflow
# Checkout and switch to the r2.18 branch.
git fetch origin && git checkout -b r2.18 origin/r2.18
# Run a container to build tensorflow.
docker run \
-it \
--name tf-build \
-h tf-build \
-u root \
-e HOST_PERMS="$(id -u):$(id -g)" \
--runtime=nvidia \
--gpus=all \
--rm \
--shm-size=2g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
-v $PWD:/mnt \
-w /mnt \
tensorflow/build:2.18-python3.9 \
bash
# From within the container, configure the build.
./configure
# Compile the wheel.
bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda --config=cuda_wheel --config=opt
```
### Relevant log output
```shell
tf-docker /mnt > ./configure
You have bazel 6.5.0 installed.
Please specify the location of python. [Default is /usr/bin/python3]:
Found possible Python library paths:
/usr/lib/python3/dist-packages
/usr/local/lib/python3.9/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3/dist-packages]
Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the hermetic CUDA version you want to use or leave empty to use the default version. 12.5.1
Please specify the hermetic cuDNN version you want to use or leave empty to use the default version. 9.3.0
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as "x.y" or "compute_xy" to include both virtual and binary GPU code, or as "sm_xy" to only include the binary code.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]: 8.6
Please specify the local CUDA path you want to use or leave empty to use the default version.
Please specify the local CUDNN path you want to use or leave empty to use the default version.
Please specify the local NCCL path you want to use or leave empty to use the default version.
Do you want to use clang as CUDA compiler? [Y/n]: y
Clang will be used as CUDA compiler.
Please specify clang path that to be used as host compiler. [Default is /usr/lib/llvm-18/bin/clang]:
You have Clang 18.1.8 installed.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: -march=native
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=mkl_aarch64 # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
--config=monolithic # Config for mostly static monolithic build.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v1 # Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
--config=nogcp # Disable GCP support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
tf-docker /mnt > bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda --config=cuda_wheel --config=opt
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
WARNING: The following configs were expanded more than once: [cuda_clang, cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
INFO: Reading 'startup' options from /mnt/.bazelrc: --windows_enable_symlinks
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=173
INFO: Reading rc options for 'build' from /mnt/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /etc/bazel.bazelrc:
'build' options: --action_env=DOCKER_CACHEBUSTER=1726964088092166976 --host_action_env=DOCKER_HOST_CACHEBUSTER=1726964088140283903
INFO: Reading rc options for 'build' from /mnt/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for 'build' from /mnt/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --action_env LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 --config=cuda_clang --action_env CLANG_CUDA_COMPILER_PATH=/usr/lib/llvm-18/bin/clang --config=cuda_clang
INFO: Found applicable config definition build:short_logs in file /mnt/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /mnt/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda_clang in file /mnt/.bazelrc: --config=cuda --@local_config_cuda//:cuda_compiler=clang --copt=-Qunused-arguments --repo_env=HERMETIC_CUDA_COMPUTE_CAPABILITIES=sm_60,sm_70,sm_80,sm_89,compute_90 --host_linkopt=-fuse-ld=lld --host_linkopt=-lm --linkopt=-fuse-ld=lld --linkopt=-lm
INFO: Found applicable config definition build:cuda in file /mnt/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
INFO: Found applicable config definition build:cuda in file /mnt/.tf_configure.bazelrc: --repo_env HERMETIC_CUDA_VERSION=12.5.1 --repo_env HERMETIC_CUDNN_VERSION=9.3.0 --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES=8.6
INFO: Found applicable config definition build:cuda_clang in file /mnt/.bazelrc: --config=cuda --@local_config_cuda//:cuda_compiler=clang --copt=-Qunused-arguments --repo_env=HERMETIC_CUDA_COMPUTE_CAPABILITIES=sm_60,sm_70,sm_80,sm_89,compute_90 --host_linkopt=-fuse-ld=lld --host_linkopt=-lm --linkopt=-fuse-ld=lld --linkopt=-lm
INFO: Found applicable config definition build:cuda in file /mnt/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
INFO: Found applicable config definition build:cuda in file /mnt/.tf_configure.bazelrc: --repo_env HERMETIC_CUDA_VERSION=12.5.1 --repo_env HERMETIC_CUDNN_VERSION=9.3.0 --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES=8.6
INFO: Found applicable config definition build:cuda in file /mnt/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
INFO: Found applicable config definition build:cuda in file /mnt/.tf_configure.bazelrc: --repo_env HERMETIC_CUDA_VERSION=12.5.1 --repo_env HERMETIC_CUDNN_VERSION=9.3.0 --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES=8.6
INFO: Found applicable config definition build:cuda_wheel in file /mnt/.bazelrc: --@local_config_cuda//cuda:include_cuda_libs=false
INFO: Found applicable config definition build:opt in file /mnt/.tf_configure.bazelrc: --copt=-march=native --host_copt=-march=native
INFO: Found applicable config definition build:linux in file /mnt/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /mnt/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_xla/third_party/py/python_repo.bzl:96:14:
HERMETIC_PYTHON_VERSION variable was not set correctly, using default version.
Python 3.9 will be used.
To select Python version, either set HERMETIC_PYTHON_VERSION env variable in
your shell:
export HERMETIC_PYTHON_VERSION=3.12
OR pass it as an argument to bazel command directly or inside your .bazelrc
file:
--repo_env=HERMETIC_PYTHON_VERSION=3.12
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_xla/third_party/py/python_repo.bzl:107:10: Using hermetic Python 3.9
WARNING: The following configs were expanded more than once: [cuda_clang, cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_cupti/linux-x86_64/cuda_cupti-linux-x86_64-12.5.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_nvtx/linux-x86_64/cuda_nvtx-linux-x86_64-12.5.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/libcusolver/linux-x86_64/libcusolver-linux-x86_64-11.6.3.83-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/libnvjitlink/linux-x86_64/libnvjitlink-linux-x86_64-12.5.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/libcurand/linux-x86_64/libcurand-linux-x86_64-10.3.6.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_cudart/linux-x86_64/cuda_cudart-linux-x86_64-12.5.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-9.3.0.75_cuda12-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/libcusparse/linux-x86_64/libcusparse-linux-x86_64-12.5.1.3-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_nvml_dev/linux-x86_64/cuda_nvml_dev-linux-x86_64-12.5.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/libcublas/linux-x86_64/libcublas-linux-x86_64-12.5.3.2-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/libcufft/linux-x86_64/libcufft-linux-x86_64-11.2.3.61-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_nvcc/linux-x86_64/cuda_nvcc-linux-x86_64-12.5.82-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_cccl/linux-x86_64/cuda_cccl-linux-x86_64-12.5.39-archive.tar.xz
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/nccl/hermetic/nccl_redist_init_repository.bzl:73:10: Downloading and extracting https://files.pythonhosted.org/packages/df/99/12cd266d6233f47d00daf3a72739872bdc10267d0383508b0b9c84a18bb6/nvidia_nccl_cu12-2.21.5-py3-none-manylinux2014_x86_64.whl
DEBUG: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/local_tsl/third_party/gpus/cuda/hermetic/cuda_redist_init_repositories.bzl:269:10: Downloading and extracting https://developer.download.nvidia.com/compute/cuda/redist/cuda_nvprune/linux-x86_64/cuda_nvprune-linux-x86_64-12.5.82-archive.tar.xz
INFO: Analyzed target //tensorflow/tools/pip_package:wheel (759 packages loaded, 56246 targets configured).
INFO: Found 1 target...
ERROR: /root/.cache/bazel/_bazel_root/39de0dbcfb68c8735bd088c62fa061a4/external/upb/BUILD:57:11: Compiling upb/upb.c failed: (Exit 1): clang failed: error executing command (from target @upb//:upb) /usr/lib/llvm-18/bin/clang -MD -MF bazel-out/k8-opt/bin/external/upb/_objs/upb/upb.pic.d '-frandom-seed=bazel-out/k8-opt/bin/external/upb/_objs/upb/upb.pic.o' '-DBAZEL_CURRENT_REPOSITORY="upb"' -iquote ... (remaining 44 arguments skipped)
external/upb/upb/upb.c:192:10: error: defining a type within 'offsetof' is a Clang extension [-Werror,-Wgnu-offsetof-extensions]
192 | n &= ~(upb_alignof(upb_arena) - 1);
| ^~~~~~~~~~~~~~~~~~~~~~
external/upb/upb/upb.c:183:37: note: expanded from macro 'upb_alignof'
183 | #define upb_alignof(type) offsetof (struct { char c; type member; }, member)
| ^~~~~~
/usr/lib/llvm-18/lib/clang/18/include/__stddef_offsetof.h:16:43: note: expanded from macro 'offsetof'
16 | #define offsetof(t, d) __builtin_offsetof(t, d)
| ^
1 error generated.
Target //tensorflow/tools/pip_package:wheel failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 524.930s, Critical Path: 5.67s
INFO: 1437 processes: 820 internal, 617 local.
FAILED: Build did NOT complete successfully
```
| stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,TF 2.18 | low | Critical |
2,645,993,038 | bitcoin | importdescriptors always rescans | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behaviour
Running `importdescriptors` with `timestamp: now` rescans the last 2 hours' worth of blocks.
The docs say that '"now" can be specified to bypass scanning' but that doesn't appear to be true:
```
"timestamp": timestamp | "now", (integer / string, required) Time from which to start rescanning the blockchain for this descriptor, in UNIX epoch time
Use the string "now" to substitute the current synced blockchain time.
"now" can be specified to bypass scanning, for outputs which are known to never have been used, and
0 can be specified to scan the entire blockchain. Blocks up to 2 hours before the earliest timestamp
of all descriptors being imported will be scanned as well as the mempool.
```
I also tried using timestamps several days into the future, but still see about 2 hours' worth of blocks being rescanned. I want to be able to import a descriptor without triggering any block rescan.
### Expected behaviour
I expect the behavior to match the docs.
### Steps to reproduce
1. Run importdescriptors with the timestamp set to "now"
2. See in the log that the wallet rescanned some blocks
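For example (a sketch; the descriptor and its checksum are placeholders, not real values):

```shell
bitcoin-cli importdescriptors '[{"desc": "<descriptor>#<checksum>", "timestamp": "now"}]'
```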
### Relevant log output
_No response_
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
v28.0
### Operating system and version
Some kind of Debian
### Machine specifications
_No response_ | Feature,Wallet,RPC/REST/ZMQ | low | Major |
2,645,994,620 | excalidraw | Pan with one finger in pen mode | This is probably my biggest UX drawback when using Excalidraw on any tablet.
It would be perfect if the app could pan with one finger when pen mode is enabled, and keep the current two-finger touch gestures for pan and zoom.
Is there a design decision behind why this is not yet implemented? | enhancement | low | Minor |
2,645,997,342 | godot | Isometric tiles atlas creates ghost tiles by default. | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
- Non reproducible in: v4.2 (as TileMapLayer does not exist)
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3050 Ti Laptop GPU (NVIDIA; 32.0.15.6094) - 12th Gen Intel(R) Core(TM) i7-12700H (20 Threads)
### Issue description
I tried adding a simple isometric tile to the atlas, with the following configurations:

And also, the tileSet config is:

As you can see, a weird ghost tile is created beneath the actual one. This ghost tile cannot be selected, erased, or modified at all, but it is there. The proof of that is that, when you run this without any tile added to the scene, it tries to fetch the atlas index for that ghost tile and an error is raised:
> E 0:00:00:0638 create_tile: Cannot create tile. The tile is outside the texture or tiles are already present in the space the tile would cover.
<C++ Error> Condition "!room_for_tile" is true.
<C++ Source> scene/resources/2d/tile_set.cpp:4963 @ create_tile()
> E 0:00:00:0638 has_alternative_tile: The TileSetAtlasSource atlas has no tile at (0, 1).
<C++ Error> Condition "!tiles.has(p_atlas_coords)" is true. Returning: false
<C++ Source> scene/resources/2d/tile_set.cpp:5400 @ has_alternative_tile()
> E 0:00:00:0638 create_alternative_tile: TileSetAtlasSource has no tile at (0, 1).
<C++ Error> Condition "!tiles.has(p_atlas_coords)" is true. Returning: TileSetSource::INVALID_TILE_ALTERNATIVE
<C++ Source> scene/resources/2d/tile_set.cpp:5348 @ create_alternative_tile()
### Steps to reproduce
1.- Create a new project, add a TileMapLayer node as the main scene.
2.- Add the TileSet to the node, and configure the following params:

3.- Add a single tiled texture as the tile atlas of the TileSet.
4.- Configure the following params for the texture:

5.- Run
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability,topic:2d | low | Critical |
2,646,041,413 | neovim | Segfault when printing in cmdheight=0 and then switching to another tabpage with different cmdheight | ### Problem
If you print in a tabpage with `cmdheight=0` and then switch to another tabpage with a different `cmdheight` before a redraw happens, Neovim segfaults.
### Steps to reproduce
```vim
nvim --clean
:lua t=vim.api.nvim_get_current_tabpage() vim.cmd.tabnew() vim.o.cmdheight=0 print('t') vim.api.nvim_set_current_tabpage(t)
```
### Expected behavior
No segfault.
### Nvim version (nvim -v)
0.11.0-dev-dd4c828
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
linux 6.11.6-zen1-1-zen
### Terminal name/version
kitty 0.37.0
### $TERM environment variable
xterm-kitty
### Installation
build from repo | bug-crash,messages | low | Minor |
2,646,091,867 | go | crypto/x509: malformed signature algorithm identifier error while version and serialNumber swapped | ### Go version
go version go1.23.2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/liu/.cache/go-build'
GOENV='/home/liu/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/liu/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/liu/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/snap/go/10730'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/snap/go/10730/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/liu/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build578094757=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Use x509.ParseCertificate to parse the der certificate.
The version and serial number of the certificate are swapped.

[case.zip](https://github.com/user-attachments/files/17687673/case.zip)
### What did you see happen?
Parse error: malformed signature algorithm identifier
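The field-order sensitivity can be sketched with `encoding/asn1` alone (hand-built SEQUENCE bytes that mimic the version/serialNumber prefix — an illustration, not the actual `crypto/x509` code path). Because version is optional, a sequential parser silently skips it when the first element is a plain INTEGER, consumes that INTEGER as the next field, and the stray `[0]` element then lands where a later field is expected — which plausibly explains the confusing "malformed signature algorithm identifier" message:

```go
package main

import (
	"encoding/asn1"
	"fmt"
)

// tbsPrefix mirrors the first two fields of TBSCertificate:
// version is an OPTIONAL [0] EXPLICIT INTEGER, serialNumber a plain INTEGER.
type tbsPrefix struct {
	Version int `asn1:"optional,explicit,default:0,tag:0"`
	Serial  int
}

func parsePrefix(der []byte) (tbsPrefix, error) {
	var p tbsPrefix
	_, err := asn1.Unmarshal(der, &p)
	return p, err
}

// Hand-built SEQUENCE bytes (not a real certificate): the correct
// field order, and the same two elements swapped.
var ordered = []byte{
	0x30, 0x08, // SEQUENCE, 8 content bytes
	0xa0, 0x03, 0x02, 0x01, 0x02, // [0] { INTEGER 2 }  -> version
	0x02, 0x01, 0x05, //             INTEGER 5          -> serialNumber
}

var swapped = []byte{
	0x30, 0x08,
	0x02, 0x01, 0x05, //             INTEGER 5 first
	0xa0, 0x03, 0x02, 0x01, 0x02, // [0] { INTEGER 2 } last
}

func main() {
	// Correct order: version=2, serial=5.
	fmt.Println(parsePrefix(ordered))
	// Swapped order: the optional version is skipped (default 0) and the
	// INTEGER is consumed as the serial number; the real version is lost.
	fmt.Println(parsePrefix(swapped))
}
```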
### What did you expect to see?
The positions of serialNumber and version are swapped, and the value at the serialNumber position is the same as a version. Since the type of version is optional (`[0] EXPLICIT INTEGER`) and the type of serialNumber is a plain `INTEGER`, the invalid version should be detected and reported first instead of throwing a "malformed signature algorithm identifier" error. | NeedsInvestigation | low | Critical |
2,646,092,671 | neovim | Way to get tabpage local option | ### Problem
There is no way (to my knowledge) to get a tabpage local option without switching to that tabpage **directly**.
<details><summary>example code of trying to get tabpage local option another way</summary>
```lua
---SETUP
vim.o.cmdheight=1
vim.cmd.tabnew()
vim.o.cmdheight=0
local tabpage=vim.api.nvim_get_current_tabpage()
vim.cmd.tabprevious()
---SETUP END
assert(tabpage~=vim.api.nvim_get_current_tabpage(),'Current tabpage is different than wanted tabpage')
vim.api.nvim_win_call(vim.api.nvim_tabpage_get_win(tabpage),function()
assert(tabpage==vim.api.nvim_get_current_tabpage(),'Current tabpage is same as wanted tabpage')
-- This is probably a bug: as with window/buffer-local options, this returns the current window's/buffer's local option
assert(vim.opt_local.cmdheight:get()==0,'Local option is same as wanted tabpage local option')
end)
```
</details>
### Expected behavior
Make it so that `nvim_get_option_value` can take a `tabpage` parameter to return tabpage local options.
Possibly implement `vim.to`
**Worth the cost?**
To my knowledge, there's only one tabpage local option: `cmdheight`, so it wouldn't be that useful. | api,bug-vim,complexity:low,options | low | Critical |
2,646,100,818 | vscode | Accept Current Change / Accept Incoming Change: add tooltip | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
When doing a merge or rebase you may get merge conflicts.
In VS Code this is presented with a window showing "Accept Current Change" and "Accept Incoming Change".
It may be helpful if it told you, in parentheses or with a tooltip, which branch each change comes from so you can differentiate them. | feature-request,ux,merge-conflict | low | Minor |
2,646,199,142 | yt-dlp | Add a section with mostly used options to help new users | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
There are too many options in the list to navigate; we need a section with the most used/important options so new users can get started quickly.
Example: `-f 'bv*+ba' -o '%(title)s %(uploader)s %(upload_date)s.%(ext)s'`
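Something like the following could seed such a section (a sketch of common invocations; the flags are real yt-dlp options, `URL` is a placeholder):

```shell
# best video + best audio, merged into one file
yt-dlp -f "bv*+ba" "URL"

# custom output file name template
yt-dlp -o "%(title)s [%(id)s].%(ext)s" "URL"

# audio only, converted to mp3
yt-dlp -x --audio-format mp3 "URL"

# download into a specific folder
yt-dlp -P "~/Videos" "URL"
```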
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | docs/meta/cleanup,help-wanted,wiki | low | Critical |
2,646,199,822 | PowerToys | Key/shortcuts remappings randomly/specifically not working | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
The remapping sometimes stops working, but I can confirm that every time I open WeChat (whether the Microsoft Store version, the web-downloaded version, or its test build), the remapping stops working. After I restart PowerToys, it works again.
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,646,213,977 | godot | Inconsistent behavior for MeshInstance3D (Visibility Range) when OmniLight3D and DirectionalLight3D create Shadows | ### Tested versions
4.3 stable, 4.4dev4
### System information
Linux
### Issue description
Using Visibility Range Begin and End to hide an Object always creates a shadow for OmniLight3D, even if the camera is far away and the Object becomes hidden by the set Visibility Range. DirectionalLight3D only creates a shadow for the object if it was not hidden by Visibility Range.
Use case:
This is an issue for me because I would like to keep using Visibility Range to hide a 3D object's mesh so it only creates a shadow but is not visible to the user. There is no LightOccluder3D node in Godot, and setting the mesh transparency to 1.0 has been a performance drain (I use many light-blocker mesh instances).
### Steps to reproduce
Set a MeshInstance3D Visibility Range between 1.0 and 2.0 and zoom in and out with the camera and take note of the shadow on the floor (OmniLight3D = always shadow, DirectionalLight3D = shadow only if mesh is visible). The behavior is present in the editor and when running the project.
### Minimal reproduction project (MRP)
[lightingtest.zip](https://github.com/user-attachments/files/17688518/lightingtest.zip)
| bug,topic:rendering,confirmed,topic:3d | low | Major |
2,646,216,072 | kubernetes | [Flaking Test] `TestSelectableFields` is flaky | found in https://github.com/kubernetes/kubernetes/pull/128722#issuecomment-2466253700
also seen in https://storage.googleapis.com/k8s-triage/index.html?pr=1&text=TestSelectableFields | sig/api-machinery,triage/accepted | low | Major |
2,646,260,458 | TypeScript | `[...T[], T]` does not extend `[T, ...T[]]` | ### ๐ Search Terms
homogeneous tuple, non-empty array, assert not empty
### ๐ Version & Regression Information
This is the behavior in every version I tried, and I reviewed the FAQ for entries about array/tuple
### โฏ Playground Link
https://www.typescriptlang.org/play/?jsx=0&ts=5.7.0-beta#code/C4TwDgpgBAKhDOwoF4oG0B0WCMaC6ANFNnlBAB7AQB2AJvOtkVhrnqQPxTABOArtABcUAGYBDADbwIAbgBQAegVQVAPQ5zNAYwD21RFACW8AKIBbMKBRQAPDAB8ACnLxhMfAEphLow3wp7KABCFwwJGgBzYAALOV19JAkxA1Q7JxdhNBhmLHd2DwCoFzRQ8Ooo6KgAWmI8eTlQSCgAMR0da38AH3RqPjMAIwgeHIxegaH8OrlaCC0knmh4gwyWtvkZubEFqCWkEFcoMcGefE1DEUcg43NLEGd4DwKAbzkVFV2oAC8D1vbUTCw+yISUQ9w8pGSOz0iHkAF8gA
### ๐ป Code
```ts
type Test = [...1[], 1] extends [1, ...1[]] ? true : false;
// ^? type Test = false
```
In context:
```ts
const isEmpty = <T>(xs: T[]): xs is [] => !xs.length
const last = <T>(xs: [T, ...T[]]) => xs[xs.length - 1];
type Foo = [] | [number, ...number[]];
declare const xs: Foo;
declare const ys: number[]
if(!isEmpty(xs)) {
const zs: Foo = [...ys, last(xs)] as const;
// ~~
// Type '[...number[], number]' is not assignable to type 'Foo'.
// Type '[...number[], number]' is not assignable to type '[number, ...number[]]'.
// Source provides no match for required element at position 0 in target.(2322)
}
```
### ๐ Actual behavior
For a homogeneous tuple, `[...T[], T]` and `[T, ...T[]]` are the same thing.
TS fails to recognise that in the in-context example: it says that `[...ys, last(xs)]` could be missing a required `number`, but it can't be, because `last(xs)` is the required `number` that it is looking for. The only issue is that it is of type `[...number[], number]`, and since `[...T[], U]` is not treated the same as `[U, ...T[]]`, it chokes.
### ๐ Expected behavior
`Test` should be `true` and `zs: Foo` should not be a type error.
### Additional information about the issue
I wish `!isEmpty` could narrow `T[]` to `[T, ...T[]]`.
Using an `isNotEmpty` helper with `Foo` defined as `number[]` would solve this specific case, but in the wild it would force me to call it as `!isNotEmpty(xs)` in many places, which is terrible for readability.
It would be possible in the in-context example to define `Foo` as `[] | [number, ...number[]] | [...number[], number]` and `last` as `<T>(xs: [T, ...T[]] | [...T[], T]) => T`, but nobody in the world is doing that, and I would be splitting hairs alone in my basement, unable to use any library.
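For completeness, here is a sketch of the `isNotEmpty` workaround (helper names are mine; re-applying the guard to the widened spread result is what sidesteps the `[...T[], T]` shape):

```typescript
// Hypothetical helper names; this is the workaround sketched above.
const isNotEmpty = <T>(xs: T[]): xs is [T, ...T[]] => xs.length > 0;
const last = <T>(xs: [T, ...T[]]): T => xs[xs.length - 1] as T;

type Foo = [] | [number, ...number[]];

const ys: number[] = [10, 20];
const xs: number[] = [1, 2, 3];

let zs: Foo = [];
if (isNotEmpty(xs)) {
  const appended: number[] = [...ys, last(xs)];
  // Re-narrowing the widened array avoids the [...number[], number]
  // shape entirely; the guard always passes at runtime since last(xs)
  // contributed one element.
  if (isNotEmpty(appended)) zs = appended;
}
console.log(zs); // prints [ 10, 20, 3 ]
```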
I think I trip over this more often in recursive logic. | Help Wanted,Possible Improvement | low | Critical |
2,646,275,117 | transformers | `DataCollatorForMultipleChoice` exists in the docs but not in the package | ### Feature request
Move the `DataCollatorForMultipleChoice` implementation from the docs to `transformers/data/data_collator.py`.
### Motivation
The [`transformers` docs](https://huggingface.co/docs/transformers/tasks/multiple_choice) provide all collator code needed to run `ForMultipleChoice` fine-tuning for datasets like SWAG. The docs say that *"`transformers` doesn't have a data collator for multiple choice, so you'll need to adapt the `DataCollatorWithPadding` to create a batch of examples"*, but... why? Why not add the code given in the docs to the package? It's the same repo...
https://github.com/huggingface/transformers/blob/a06a0d12636756352494b99b5b264ac9955bc735/docs/source/en/tasks/multiple_choice.md?plain=1#L115-L160
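That documented collator boils down to flattening the (batch × choices) inputs, padding, and reshaping back. A dependency-free sketch of that move (illustrative only; the real snippet pads via `tokenizer.pad` and returns `torch` tensors):

```python
def collate_multiple_choice(features, pad_id=0):
    """Flatten (batch, choices) token lists, pad to max length, reshape back."""
    num_choices = len(features[0]["input_ids"])
    # One row per (example, choice) pair.
    flat = [ids for f in features for ids in f["input_ids"]]
    max_len = max(len(ids) for ids in flat)
    padded = [ids + [pad_id] * (max_len - len(ids)) for ids in flat]
    # Back to (batch, num_choices, seq_len).
    batch = [padded[i:i + num_choices] for i in range(0, len(padded), num_choices)]
    return {"input_ids": batch, "labels": [f["label"] for f in features]}

example = [
    {"input_ids": [[1, 2, 3], [1, 2]], "label": 0},
    {"input_ids": [[4], [4, 5, 6]], "label": 1},
]
out = collate_multiple_choice(example)
print(out["input_ids"][0])  # [[1, 2, 3], [1, 2, 0]]
```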
### Your contribution
None, the code is already there. | Documentation,Feature request | low | Minor |
2,646,327,488 | angular | Can I use "Angular" in my product name? | Hello everyone! Not sure where else I can ask this, but I would like to know if I am allowed to use "Angular" as part of the name of a commercial product? Do I need legal permission to do so?
I also wanted to ask if I am allowed to use Angular styles in my app, like colors, fonts, padding and margins and so on.
I've already seen this page: https://angular.dev/press-kit, but Angular is also a registered trademark (https://www.trademarkia.com/angular-90051986), so I wanted to consult the community before using it as a part of the product's name commercially.
Thank you for your help!
Best Regards
Viktoriia | hotlist: devrel,area: docs | low | Minor |
2,646,332,788 | tauri | [bug] failed to bundle project: failed to decode certificate | ### Describe the bug
https://github.com/mediar-ai/screenpipe
getting
`failed to bundle project: failed to decode certificate`
when running `bun tauri build`
`bun tauri dev` works
i tried to remove updater config, turn off signing, or such, without success
considering this is very likely a problem on my side but i did not find any similar issue neither on github or discord so if you have any idea how i could fix/investigate this?
our updater and apple signing works well in production so i doubt it is broken certificates?
maybe some dependency i need to update or anything?
other people with same computer than me don't have this issue for the same project
### Reproduction
I don't know
### Expected behavior
_No response_
### Full `tauri info` output
```text
(env) (base) louisbeaumont@mac:~/Documents/screen-pipe/screenpipe-app-tauri$ bun tauri info
$ echo 'Warning: did you run bun scripts/pre_build.js and ALL other instructions carefully listed in https://docs.screenpi.pe/docs/getting-started ?' && tauri info
Warning: did you run bun scripts/pre_build.js and ALL other instructions carefully listed in https://docs.screenpi.pe/docs/getting-started ?
WARNING: Only one package manager should be used, but found npm and pnpm and bun.
Please remove unused package manager lock files, will use npm for now!
[โ] Environment
- OS: Mac OS 15.1.0 arm64 (X64)
โ Xcode Command Line Tools: installed
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.9.0
- pnpm: 8.7.4
- yarn: 1.22.19
- npm: 10.1.0
- bun: 1.1.21
[-] Packages
- tauri ๐ฆ: 2.0.0
- tauri-build ๐ฆ: 2.0.0
- wry ๐ฆ: 0.44.1
- tao ๐ฆ: 0.30.2
- tauri-cli ๐ฆ: 1.6.2
- @tauri-apps/api ๎: 2.0.1 (outdated, latest: 2.1.0)
- @tauri-apps/cli ๎: 2.0.0 (outdated, latest: 2.1.0)
[-] Plugins
- tauri-plugin-store ๐ฆ: 2.0.0
- @tauri-apps/plugin-store ๎: 2.0.0 (outdated, latest: 2.1.0)
- tauri-plugin-single-instance ๐ฆ: 2.0.0
- @tauri-apps/plugin-single-instance ๎: not installed!
- tauri-plugin-autostart ๐ฆ: 2.0.0
- @tauri-apps/plugin-autostart ๎: not installed!
- tauri-plugin-fs ๐ฆ: 2.0.0
- @tauri-apps/plugin-fs ๎: 2.0.0 (outdated, latest: 2.0.2)
- tauri-plugin-dialog ๐ฆ: 2.0.0
- @tauri-apps/plugin-dialog ๎: 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-cli ๐ฆ: 2.0.0
- @tauri-apps/plugin-cli ๎: 2.0.0
- tauri-plugin-notification ๐ฆ: 2.0.0
- @tauri-apps/plugin-notification ๎: 2.0.0
- tauri-plugin-shell ๐ฆ: 2.0.0
- @tauri-apps/plugin-shell ๎: 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-process ๐ฆ: 2.0.0
- @tauri-apps/plugin-process ๎: 2.0.0
- tauri-plugin-global-shortcut ๐ฆ: 2.0.0
- @tauri-apps/plugin-global-shortcut ๎: 2.0.0
- tauri-plugin-updater ๐ฆ: 2.0.1
- @tauri-apps/plugin-updater ๎: 2.0.0
- tauri-plugin-os ๐ฆ: 2.0.0
- @tauri-apps/plugin-os ๎: 2.0.0
[-] App
- build-type: bundle
- CSP: frame-src https://www.youtube.com http://localhost:*; img-src 'self' asset: http://asset.localhost blob: data: https://*.githubusercontent.com https://*.github.com https://github.com https://*.s3.amazonaws.com; font-src https://fonts.gstatic.com tauri://localhost http://tauri.localhost; default-src 'self' customprotocol: asset:; connect-src ipc: http://ipc.localhost https://youtube.com https://api.openai.com http://localhost:3030 https://web.crabnebula.cloud https://api.github.com https://eu.i.posthog.com https://github.com https://*.githubusercontent.com https://*.github.com http://*:11434 http://*:9000 https://ai-proxy.i-f9f.workers.dev *; style-src 'unsafe-inline' 'self' https://fonts.googleapis.com tauri://localhost http://tauri.localhost http://localhost:* data: *; media-src 'self' mediadevices: asset: http://asset.localhost file: blob: tauri://localhost file: blob: https://youtube.com https://github.com https://youtu.be
- frontendDist: ../out
- devUrl: http://localhost:3000/
- framework: React (Next.js)
- bundler: Webpack
(env) (base) louisbeaumont@mac:~/Documents/screen-pipe/screenpipe-app-tauri$
```
### Stack trace
```text
(env) (base) louisbeaumont@mac:~/Documents/screen-pipe/screenpipe-app-tauri$ bun tauri build -v
$ echo 'Warning: did you run bun scripts/pre_build.js and ALL other instructions carefully listed in https://docs.screenpi.pe/docs/getting-started ?' && tauri build -v
Warning: did you run bun scripts/pre_build.js and ALL other instructions carefully listed in https://docs.screenpi.pe/docs/getting-started ?
Running [tauri_cli::helpers] beforeBuildCommand `bun run build`
Debug [tauri_cli::helpers] Setting environment for hook {"TAURI_ENV_PLATFORM_VERSION": "15.1.0", "TAURI_ENV_FAMILY": "unix", "TAURI_ENV_ARCH": "aarch64", "TAURI_ENV_PLATFORM": "darwin", "TAURI_ENV_TARGET_TRIPLE": "aarch64-apple-darwin"}
Running [tauri_cli] Command `sh -c bun run build`
$ bun scripts/pre_build.js
cwd /Users/louisbeaumont/Documents/screen-pipe/screenpipe-app-tauri/src-tauri
Setting up screenpipe bin for arm64...
Copied most recent arm64 screenpipe binary from ../../target/release/screenpipe
Updated dylib paths for arm64 in dev mode
screenpipe for arm64 set up successfully.
Setting up screenpipe bin for x86_64...
No suitable x86_64 screenpipe binary found
screenpipe for x86_64 set up successfully.
FFMPEG already exists
Moved and renamed ffmpeg binary for externalBin
Commands to build ๐จ:
bun install
bun tauri build
bun is already installed.
checking bun binary for tauri...
bun binary already exists for tauri.
ollama sidecar already exists. skipping installation.
$ next build
โฒ Next.js 14.2.4
- Environments: .env
Creating an optimized production build ...
โ Compiled successfully
./app/timeline/page.tsx
180:38 Warning: The ref value 'retryTimeoutRef.current' will likely have changed by the time this effect cleanup function runs. If this ref points to a node rendered by React, copy 'retryTimeoutRef.current' to a variable inside the effect, and use that variable in the cleanup function. react-hooks/exhaustive-deps
326:13 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/meeting-history.tsx
168:6 Warning: React Hook useEffect has a missing dependency: 'loadMeetings'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
./components/onboarding/api-setup.tsx
281:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/dev-configuration.tsx
88:6 Warning: React Hook useEffect has a missing dependency: 'devInstructionsData'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
93:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/dev-or-non-dev.tsx
109:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/explain-instructions.tsx
27:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/introduction.tsx
18:7 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/personalize.tsx
84:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/pipes.tsx
22:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/status.tsx
154:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/onboarding/usecases-selection.tsx
97:9 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/pipe-store.tsx
102:6 Warning: React Hook useEffect has a missing dependency: 'fetchInstalledPipes'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
563:21 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/recording-settings.tsx
201:6 Warning: React Hook useEffect has missing dependencies: 'localSettings' and 'setLocalSettings'. Either include them or remove the dependency array. If 'setLocalSettings' changes too often, find the parent component that defines it and wrap that definition in useCallback. react-hooks/exhaustive-deps
./components/search-chat.tsx
395:6 Warning: React Hook useEffect has a missing dependency: 'handleFilterDuplicates'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
859:31 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
866:31 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/timeline/timeline-dock-section.tsx
193:17 Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element @next/next/no-img-element
./components/video.tsx
81:6 Warning: React Hook useEffect has missing dependencies: 'getMimeType' and 'mediaSrc'. Either include them or remove the dependency array. react-hooks/exhaustive-deps
./lib/hooks/use-pipes.tsx
308:6 Warning: React Hook useEffect has missing dependencies: 'loading' and 'pipes'. Either include them or remove the dependency array. You can also do a functional update 'setPipes(p => ...)' if you only need 'pipes' in the 'setPipes' call. react-hooks/exhaustive-deps
info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
โ Linting and checking validity of types
โ Collecting page data
โ Generating static pages (6/6)
โ Collecting build traces
โ Finalizing page optimization
Route (app) Size First Load JS
โ โ / 158 kB 706 kB
โ โ /_not-found 142 B 87.5 kB
โ โ /timeline 9.4 kB 521 kB
+ First Load JS shared by all 87.4 kB
โ chunks/145-58c11cb1cc61485d.js 31.8 kB
โ chunks/5f51b337-170ce1bbdb264049.js 53.7 kB
โ other shared chunks (total) 1.97 kB
โ (Static) prerendered as static content
Running [tauri_cli] Command `cargo build --bins --features tauri/custom-protocol,tauri/native-tls --release`
Compiling screenpipe-app v0.10.0 (/Users/louisbeaumont/Documents/screen-pipe/screenpipe-app-tauri/src-tauri)
Finished `release` profile [optimized] target(s) in 28.26s
Built [tauri_cli::build] application at: /Users/louisbeaumont/Documents/screen-pipe/screenpipe-app-tauri/src-tauri/target/release/screenpipe-app
Bundling [tauri_bundler::bundle::macos::app] screenpipe.app (/Users/louisbeaumont/Documents/screen-pipe/screenpipe-app-tauri/src-tauri/target/release/bundle/macos/screenpipe.app)
Running [tauri_macos_sign] Command `base64 --decode -i /var/folders/hq/swhzs3js29z7mb5b7ycnjcnm0000gn/T/.tmpx2Oz9R/src -o /var/folders/hq/swhzs3js29z7mb5b7ycnjcnm0000gn/T/.tmpSuFtHm/cert.p12`
base64: /var/folders/hq/swhzs3js29z7mb5b7ycnjcnm0000gn/T/.tmpx2Oz9R/src: (null): error decoding base64 input stream
failed to bundle project: failed to decode certificate
Error [tauri_cli_node] failed to bundle project: failed to decode certificate
error: script "tauri" exited with code 1
(env) (base) louisbeaumont@mac:~/Documents/screen-pipe/screenpipe-app-tauri$
```
### Additional context
macos m3 max 15.1 36gb | type: bug,platform: macOS,status: needs triage | low | Critical |
2,646,358,692 | react | Bug: Change status variable type | I noticed that the `status` variable in the `ReactPromise` function of the `ReactFlightClient` file is of type `any`.
If the status is always of type String, it would be a good idea to replace `any` with `string`. This helps provide stronger type checking, reduces the risk of bugs, and improves code readability.
[ReactFlightClient.js](https://github.com/facebook/react/blob/main/packages/react-client/src/ReactFlightClient.js)
line: 206

| Status: Unconfirmed | low | Critical |
2,646,380,894 | vscode | ${env:path} not evaluated in "terminal.integrated.env.windows" if ${workspaceFolder} doesn't exist |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.95.2
- OS Version: Windows 10 22H2
### Steps to Reproduce:
Using these settings when opening VS Code with no folder open in it:
```json
"terminal.integrated.defaultProfile.windows": "PowerShell",
"terminal.integrated.env.windows": {
"PATH": "${workspaceFolder}\\node_modules\\.bin;${env:path}"
},
```
will result:

because `${workspaceFolder}` doesn't exist.
Using something like `"PATH": "c:\\any\\other\\path\\bin;${env:PATH}"` will work fine.
### Suggestion
If `${workspaceFolder}` cannot be evaluated, pass it through as a literal string and continue evaluating the other variables in `"PATH"`.
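The suggested behavior can be sketched as a lenient resolver that leaves unknown placeholders untouched and keeps substituting the rest. This is an illustrative Python sketch, not VS Code's actual implementation; the names `resolve`, `variables`, and `env` are hypothetical:

```python
import re

def resolve(template, variables, env):
    """Substitute ${name} and ${env:NAME} placeholders, leaving any
    placeholder that cannot be resolved in place instead of failing."""
    def sub(match):
        name = match.group(1)
        if name.lower().startswith("env:"):
            # Environment variable names are case-insensitive on Windows.
            return env.get(name[4:].upper(), match.group(0))
        return variables.get(name, match.group(0))
    return re.sub(r"\$\{([^}]+)\}", sub, template)

# No folder is open, so ${workspaceFolder} is unknown, but ${env:path}
# is still expanded rather than the whole value being dropped.
print(resolve(
    "${workspaceFolder}\\node_modules\\.bin;${env:path}",
    variables={},
    env={"PATH": "C:\\Windows\\System32"},
))  # -> ${workspaceFolder}\node_modules\.bin;C:\Windows\System32
```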
| bug,confirmation-pending,terminal-process | low | Critical |
2,646,422,331 | rust | Tracking Issue for `io_slice_as_bytes` |
Feature gate: `#![feature(io_slice_as_bytes)]`
This is a tracking issue for getting the underlying slice out of an `IoSlice`/`IoSliceMut`
### Public API
```rust
// std::io
impl<'a> IoSlice<'a> {
    pub const fn as_slice(self) -> &'a [u8];
}

impl<'a> IoSliceMut<'a> {
    pub const fn into_slice(self) -> &'a mut [u8];
}
```
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/93
- [x] Implementation: #132790
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,646,425,522 | PowerToys | Request advanced mouse utilities: Virtual Border for mouse during multi-screen enabled | ### Description of the new feature / enhancement
The utility I wish to have is one enhanced mouse tool:
- a virtual border for each connected screen (temp. name: Virtual Border).
- this is a boundary that prevents the mouse from unexpectedly moving to other enabled screens, for those who currently need to work on only one screen but keep another screen enabled (for example, a second screen showing a reference)
- this could be toggled by a keyboard shortcut or by the focused window (matched on application or window title, ideally with regex support)
### Scenario when this would be used?
*Using the temporary name to refer to the newly requested function.*
### Virtual Border
Sometimes I need to keep working and operating windows on exactly one screen, but the mouse occasionally ends up on another screen, whether by mistake, by carelessness, or for some other reason. I just hope that when I toggle the function on, the mouse will stay on the screen where it is currently located, as if I were using only that one screen.
Then why don't I just disconnect the other screen? Well, the other screen can be my place to show a dynamic reference; it needs to stay connected to the main machine and powered on. That is also a valid kind of usage.
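The core of the Virtual Border idea is clamping the pointer to the active screen's rectangle. A minimal Python sketch of that logic, purely illustrative (the name `clamp_to_screen` and the rectangle layout are assumptions, not PowerToys code):

```python
def clamp_to_screen(x, y, screen):
    """Clamp a cursor position to one screen's rectangle.

    `screen` is (left, top, width, height) in pixels.
    """
    left, top, width, height = screen
    clamped_x = min(max(x, left), left + width - 1)
    clamped_y = min(max(y, top), top + height - 1)
    return clamped_x, clamped_y

# A move past the right edge of a 1920x1080 primary monitor is pulled
# back to the border instead of crossing to the second monitor.
print(clamp_to_screen(2000, 500, (0, 0, 1920, 1080)))  # (1919, 500)
```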
### Supporting information
Well, many of us are using multiple screens, right?
but currently I can only represent myself :P | Needs-Triage | low | Minor |
2,646,458,661 | ui | [bug]: Sidebar horizontal scroll bar appear when expand | ### Describe the bug
When the sidebar transitions from collapsed to expanded state, a horizontal scrollbar briefly appears for about 1 second during the animation. This creates an undesirable visual effect and slight layout shift.
### Affected component/components
- Sidebar - SidebarContent
### How to reproduce
1. Implement the Sidebar component with collapsible functionality
2. Set the initial state to collapsed
3. Click the toggle button to expand the sidebar
4. Observe the brief appearance of a horizontal scrollbar during the expansion animation
### Codesandbox/StackBlitz link
```tsx
import { Sidebar, SidebarContent, SidebarProvider, SidebarTrigger } from "@/components/ui/sidebar"

export default function Demo() {
  return (
    <SidebarProvider>
      <Sidebar>
        <SidebarContent>
          {/* Content that might cause overflow */}
          <div className="w-full">
            <p>Some content here...</p>
          </div>
        </SidebarContent>
      </Sidebar>
      <SidebarTrigger />
    </SidebarProvider>
  )
}
```
### Logs
_No response_
### System Info
```bash
- Browser: Chrome, Firefox, Safari (issue appears in all major browsers)
- OS: Cross-platform issue
- React version: 18.x
- Next.js version: 15.x
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,646,465,070 | pytorch | torch.svd_lowrank - non reproducible | ### ๐ Describe the bug
Hey there!
When using `torch.svd_lowrank` I cannot get the same results for the same `torch.sparse_coo` tensor.
Note, this issue only arises with rather large tensors (that are still considered sparse).
When using `sparse_matrix.to_dense()` it is reproducible, but this is obviously undesirable.
```python
import os
import torch
import random
import numpy as np
def seeder():
# Ensure deterministic behavior for cuBLAS
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
# Set random seeds for reproducibility
# Set seeds again before each call
torch.manual_seed(42)
torch.cuda.manual_seed(42)
np.random.seed(42)
random.seed(42)
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Optional: Force single-threaded execution
torch.set_num_threads(1)
seeder()
# Set device to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
indices = torch.randint(0, 48362, (2, 781_000), dtype=torch.long).to(device)  # 781,000 non-zero element indices
values = torch.randn(781_000).to(device) # Random values for the non-zero elements
sparse_matrix = torch.sparse_coo_tensor(indices, values, (48362, 48362), device=device)
print(f"sparse_matrix = {sparse_matrix}")
# Parameters for low-rank SVD
q = 512 # Rank for approximation
# Try disabling power iterations
niter = 0
# Perform low-rank SVD on the dense matrix
U1, S1, V1 = torch.svd_lowrank(sparse_matrix, q=q, niter=niter)
seeder()
U2, S2, V2 = torch.svd_lowrank(sparse_matrix, q=q, niter=niter)
# Print results
print(f"{U1.sum()=}")
print(f"{U2.sum()=}")
print(f"{U2.sum() == U1.sum()=}")
```
I will add that i have also ran it with different versions of pytorch and CUDA.
Thank you so much !
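For context, the expectation behind the repeated `seeder()` calls is that re-seeding a PRNG with the same value replays the identical sequence. The stdlib `random` module (a stand-in here, not torch) illustrates that contract, which `torch.svd_lowrank` on sparse input appears to violate:

```python
import random

def run(seed, n=5):
    """Re-seed and draw n samples; identical seeds must give identical lists."""
    random.seed(seed)
    return [random.random() for _ in range(n)]

# Re-seeding with the same value replays the identical sequence.
assert run(42) == run(42)
# A different seed yields a different sequence.
assert run(42) != run(43)
```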
### Versions
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 4
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 6599.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 10 MiB (10 instances)
L3 cache: 13.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.1.0+cu118
[pip3] torch-cluster==1.6.3+pt21cu118
[pip3] torch-geometric==2.6.1
[pip3] torch-scatter==2.1.2+pt21cu118
[pip3] torch-sparse==0.6.18+pt21cu118
[pip3] torch-spline-conv==1.2.2+pt21cu118
[pip3] torchaudio==2.1.0+cu118
[pip3] torchvision==0.16.0+cu118
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.10 py312h5eee18b_0
[conda] mkl_random 1.2.7 py312h526ad5a_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] pytorch 2.4.1 py3.12_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.1 py312_cu121 pytorch
[conda] torchtriton 3.0.0 py312 pytorch
[conda] torchvision 0.19.1 py312_cu121 pytorch
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ptrblck @msaroufim @jianyuh @mruberry @walterddr @xwang233 @Lezcano | module: sparse,module: cuda,triaged,module: linear algebra | low | Critical |
2,646,476,285 | angular | Export signalAsReadonlyFn | ### Which @angular/* package(s) are relevant/related to the feature request?
@angular/core
### Description
Please consider exporting [signalAsReadonlyFn](https://github.com/angular/angular/blob/bc5aa3ceda035b3a2d9b6edda7b7537c2270ced8/packages/core/src/render3/reactivity/signal.ts#L98C17-L98C35) as part of the [reactivity export](https://github.com/angular/angular/blob/bc5aa3ceda035b3a2d9b6edda7b7537c2270ced8/packages/core/src/core_reactivity_export_internal.ts#L13). This would be useful for anybody trying to create their own custom signals, such as with https://github.com/DDtMM/angular-signal-generators, and seems reasonable since it's used in your own experimental signal, linkedSignal.
### Proposed solution
Add _signalAsReadonlyFn_ to the list of exports in core_reactivity_export_internal from ./render3/reactivity/signal.
### Alternatives considered
Writing my own version of this function, which I have done. I just hate reinventing the wheel. | area: core,core: reactivity,cross-cutting: signals | low | Minor |
2,646,538,915 | ollama | GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed | ### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
```
`ollama run llama3.2` and `ollama run llava` works fine.
I have an i7-6700K and a GeForce GTX 1060 6GB. I installed ollama using `pacman -S ollama`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1 | bug | medium | Critical |
2,646,550,731 | yt-dlp | Postprocessing: Conversion failed! on already downloaded .opus files when using `--embed-metadata` and `--embed-thumbnail` | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
The initial download works fine and adds the thumbnail and other metadata to the downloaded file. However, when the command is run again and the file already exists, it gives this error and leaves behind temporary files. I haven't tested many different codecs, but I can confirm that this happens with .opus files and does not happen with .mp3 files. It only happens when both `--embed-metadata` and `--embed-thumbnail` are enabled.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp https://soundcloud.com/one-thousand-and-one/test-track --embed-metadata --embed-thumbnail -vU
[debug] Command-line config: ['https://soundcloud.com/one-thousand-and-one/test-track', '--embed-metadata', '--embed-thumbnail', '-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.11.04 from yt-dlp/yt-dlp [197d0b03b] (pip)
[debug] Python 3.13.0 (CPython arm64 64bit) - macOS-14.4-arm64-arm-64bit-Mach-O (OpenSSL 3.4.0 22 Oct 2024)
[debug] exe versions: ffmpeg 7.0.1 (setts), ffprobe 7.0.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.11.04 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.11.04 from yt-dlp/yt-dlp)
[debug] Loading soundcloud.client_id from cache
[soundcloud] Extracting URL: https://soundcloud.com/one-thousand-and-one/test-track
[soundcloud] one-thousand-and-one/test-track: Downloading info JSON
[soundcloud] 1855267053: Downloading hls_mp3 format info JSON
[soundcloud] 1855267053: Downloading http_mp3 format info JSON
[soundcloud] 1855267053: Downloading hls_opus format info JSON
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 1855267053: Downloading 1 format(s): hls_opus_64
Deleting existing file testing - test track [1855267053].jpg
[info] Downloading video thumbnail original ...
[info] Writing video thumbnail original to: testing - test track [1855267053].jpg
[debug] Invoking hlsnative downloader on "https://cf-hls-opus-media.sndcdn.com/playlist/be017829-5fd5-4ba3-96e2-f508d119ff52.64.opus/playlist.m3u8?Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiKjovL2NmLWhscy1vcHVzLW1lZGlhLnNuZGNkbi5jb20vcGxheWxpc3QvYmUwMTc4MjktNWZkNS00YmEzLTk2ZTItZjUwOGQxMTlmZjUyLjY0Lm9wdXMvcGxheWxpc3QubTN1OCoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3MzExODc1Nzd9fX1dfQ__&Signature=QWxWNVVhY23gwkCcIPFU~jauXi6q9dy0HyJqLkfrdrsIqvALxZgYjJhta2EhAx4yOSmiKhNqZskNrupiCiQW9Dda8xqn18QH5Riyk6yEdZDBU54A1L~Q4YTCWalrj9-tkiB8zOqbfvrApbimEgtWIrFomGDMTVyGgd8ioeKsIskTkCnlaI~zBv0k8Mx-lSRxRWrTZc1mWFf7Xq8s8DP6piiBnEKoC0SejyQr4D9eb1lGeH~nNI-m9WIG8yZ4UGXskbC3iXDjkDu5e67~BWKQAt-T7NjrJMMCxIx0zsRCQaPKk~WjB6pDYFXhX2dqpetjE6w1xmXSnczeAdd9cXVKCg__&Key-Pair-Id=APKAI6TU7MMXM5DG6EPQ"
[download] testing - test track [1855267053].opus has already been downloaded
[download] 100% of 16.90KiB
[Metadata] Adding metadata to "testing - test track [1855267053].opus"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:testing - test track [1855267053].opus' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=testing - test track' -metadata date=20240623 -metadata 'description=test description:
9439290883' -metadata 'synopsis=test description:
9439290883' -metadata purl=https://soundcloud.com/one-thousand-and-one/test-track -metadata comment=https://soundcloud.com/one-thousand-and-one/test-track -metadata artist=7x11x13-testing -metadata genre=Testing -movflags +faststart 'file:testing - test track [1855267053].temp.opus'
[debug] ffmpeg version 7.0.1 Copyright (c) 2000-2024 the FFmpeg developers
built with Apple clang version 15.0.0 (clang-1500.3.9.4)
configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.0.1 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
libavutil 59. 8.100 / 59. 8.100
libavcodec 61. 3.100 / 61. 3.100
libavformat 61. 1.100 / 61. 1.100
libavdevice 61. 1.100 / 61. 1.100
libavfilter 10. 1.100 / 10. 1.100
libswscale 8. 1.100 / 8. 1.100
libswresample 5. 1.100 / 5. 1.100
libpostproc 58. 1.100 / 58. 1.100
[ogg @ 0x12e704a60] 1035 bytes of comment header remain
Input #0, ogg, from 'file:testing - test track [1855267053].opus':
Duration: 00:00:01.01, start: 0.000000, bitrate: 137 kb/s
Stream #0:0: Audio: opus, 48000 Hz, stereo, fltp
Metadata:
encoder : Lavc58.91.100 libopus
title : testing - test track
date : 20240623
purl : https://soundcloud.com/one-thousand-and-one/test-track
synopsis : test description:
: 9439290883
genre : Testing
artist : 7x11x13-testing
comment : test description:
: 9439290883
Stream #0:1: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 180x180 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn (attached pic)
Metadata:
comment : Cover (front)
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
[opus @ 0x12e6062b0] Unsupported codec id in stream 1
[out#0/opus @ 0x600003ecc3c0] Could not write header (incorrect codec parameters ?): Invalid argument
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/YoutubeDL.py", line 3557, in process_info
replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/YoutubeDL.py", line 3741, in post_process
info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/YoutubeDL.py", line 3723, in run_all_pps
info = self.run_pp(pp, info)
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/YoutubeDL.py", line 3701, in run_pp
files_to_delete, infodict = pp.run(infodict)
~~~~~~^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/postprocessor/common.py", line 23, in run
ret = func(self, info, *args, **kwargs)
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/postprocessor/common.py", line 128, in wrapper
return func(self, info)
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/postprocessor/ffmpeg.py", line 712, in run
self.run_ffmpeg_multiple_files(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
(filename, metadata_filename), temp_filename,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
itertools.chain(self._options(info['ext']), *options))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/postprocessor/ffmpeg.py", line 330, in run_ffmpeg_multiple_files
return self.real_run_ffmpeg(
~~~~~~~~~~~~~~~~~~~~^
[(path, []) for path in input_paths],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[(out_path, opts)], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.11.4/libexec/lib/python3.13/site-packages/yt_dlp/postprocessor/ffmpeg.py", line 368, in real_run_ffmpeg
raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
```
| bug,external issue,core:post-processor | low | Critical |
2,646,581,487 | rust | Using a trait's Associated Type in a different trait's impl declaration crashes |
### Code
```Rust
use std::collections::HashMap;
pub trait MyTrait {
type Item;
fn foo(&self, other: &Self) -> ();
}
impl<K,V> MyTrait for HashMap<K, V>
where
K: ::std::hash::Hash,
{
type Item = HashMap<K, ::core::option::Option<V>>;
fn foo(&self, other: &Self) -> () {
}
}
impl<K,V> ::core::convert::From<HashMap<K, V>> for <HashMap<K, V> as MyTrait>::Item
where
K: ::std::hash::Hash,
{
fn from(item: ::core::convert::From<HashMap<K, V>>) -> <HashMap<K, V> as MyTrait>::Item {
()
}
}
fn main() {
println!("Hello, world!");
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
### Error output
```
Compiling rust_bug v0.1.0 (/workspaces/rust_bug)
thread 'rustc' panicked at compiler/rustc_hir_analysis/src/coherence/orphan.rs:554:60:
no entry found for key
stack backtrace:
0: 0x7f8bf48343e5 - std::backtrace_rs::backtrace::libunwind::trace::h649ab3318d3445c5
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x7f8bf48343e5 - std::backtrace_rs::backtrace::trace_unsynchronized::hf4bb60c3387150c3
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x7f8bf48343e5 - std::sys::backtrace::_print_fmt::hd9186c800e44bd00
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:65:5
3: 0x7f8bf48343e5 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h1b9dad2a88e955ff
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:40:26
4: 0x7f8bf4883eeb - core::fmt::rt::Argument::fmt::h351a7824f737a6a0
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/fmt/rt.rs:173:76
5: 0x7f8bf4883eeb - core::fmt::write::h4b5a1270214bc4a7
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/fmt/mod.rs:1182:21
6: 0x7f8bf4828f6f - std::io::Write::write_fmt::hd04af345a50c312d
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/io/mod.rs:1827:15
7: 0x7f8bf4836bd1 - std::sys::backtrace::BacktraceLock::print::h68d41b51481bce5c
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:43:9
8: 0x7f8bf4836bd1 - std::panicking::default_hook::{{closure}}::h96ab15e9936be7ed
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:269:22
9: 0x7f8bf48368ac - std::panicking::default_hook::h3cacb9c27561ad33
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:296:9
10: 0x7f8bf7b3b420 - std[1f2242ed6435445e]::panicking::update_hook::<alloc[7b1462a1eb55c293]::boxed::Box<rustc_driver_impl[8683aa37472b7dde]::install_ice_hook::{closure#0}>>::{closure#0}
11: 0x7f8bf483759f - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::hce7569f4ca5d1b64
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2084:9
12: 0x7f8bf483759f - std::panicking::rust_panic_with_hook::hfe205f6954b2c97b
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:808:13
13: 0x7f8bf48371c7 - std::panicking::begin_panic_handler::{{closure}}::h6cb44b3a50f28c44
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:674:13
14: 0x7f8bf48348a9 - std::sys::backtrace::__rust_end_short_backtrace::hf1c1f2a92799bb0e
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:168:18
15: 0x7f8bf4836e54 - rust_begin_unwind
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:665:5
16: 0x7f8bf48804a3 - core::panicking::panic_fmt::h3d8fc78294164da7
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:74:14
17: 0x7f8bf48802fb - core::panicking::panic_display::h1c0e44fa90890272
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:264:5
18: 0x7f8bf48802fb - core::option::expect_failed::h3a757a693188cc6e
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/option.rs:2030:5
19: 0x7f8bf7cc724d - <rustc_hir_analysis[c1a678e360ecab51]::coherence::orphan::TyVarReplacer as rustc_type_ir[6247808700e50c64]::fold::TypeFolder<rustc_middle[ba2289ab3ae064d4]::ty::context::TyCtxt>>::fold_ty
20: 0x7f8bf7c3ee55 - <&rustc_middle[ba2289ab3ae064d4]::ty::list::RawList<(), rustc_middle[ba2289ab3ae064d4]::ty::generic_args::GenericArg> as rustc_type_ir[6247808700e50c64]::fold::TypeFoldable<rustc_middle[ba2289ab3ae064d4]::ty::context::TyCtxt>>::try_fold_with::<rustc_hir_analysis[c1a678e360ecab51]::coherence::orphan::TyVarReplacer>
21: 0x7f8bf7cc6e75 - <rustc_hir_analysis[c1a678e360ecab51]::coherence::orphan::TyVarReplacer as rustc_type_ir[6247808700e50c64]::fold::TypeFolder<rustc_middle[ba2289ab3ae064d4]::ty::context::TyCtxt>>::fold_ty
22: 0x7f8bf6a655ee - rustc_hir_analysis[c1a678e360ecab51]::coherence::orphan::orphan_check
23: 0x7f8bf6a62581 - rustc_hir_analysis[c1a678e360ecab51]::coherence::orphan::orphan_check_impl
24: 0x7f8bf954e8c7 - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::orphan_check_impl::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>
25: 0x7f8bf956adc7 - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::VecCache<rustc_span[28a649581f99a5bd]::def_id::LocalDefId, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, true>
26: 0x7f8bf9c0a11d - rustc_query_impl[3625cc0592f96219]::query_impl::orphan_check_impl::get_query_incr::__rust_end_short_backtrace
27: 0x7f8bf9c06ac9 - rustc_hir_analysis[c1a678e360ecab51]::coherence::coherent_trait
28: 0x7f8bf9c0622f - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::coherent_trait::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>
29: 0x7f8bf956cd78 - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::DefIdCache<rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, true>
30: 0x7f8bf98c392f - rustc_query_impl[3625cc0592f96219]::query_impl::coherent_trait::get_query_incr::__rust_end_short_backtrace
31: 0x7f8bf9c8a24e - rustc_hir_analysis[c1a678e360ecab51]::check::wfcheck::check_item
32: 0x7f8bf9c85fbe - rustc_hir_analysis[c1a678e360ecab51]::check::wfcheck::check_well_formed
33: 0x7f8bf9c85efb - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>
34: 0x7f8bf9c82756 - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::VecCache<rustc_hir[22cf0f8ce801b62b]::hir_id::OwnerId, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, true>
35: 0x7f8bf9c821f7 - rustc_query_impl[3625cc0592f96219]::query_impl::check_well_formed::get_query_incr::__rust_end_short_backtrace
36: 0x7f8bf9c837dc - rustc_hir_analysis[c1a678e360ecab51]::check::wfcheck::check_mod_type_wf
37: 0x7f8bf9c83697 - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>
38: 0x7f8bf98c7b2f - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::DefaultCache<rustc_span[28a649581f99a5bd]::def_id::LocalModDefId, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, true>
39: 0x7f8bf98c76e4 - rustc_query_impl[3625cc0592f96219]::query_impl::check_mod_type_wf::get_query_incr::__rust_end_short_backtrace
40: 0x7f8bf92e8519 - rustc_hir_analysis[c1a678e360ecab51]::check_crate
41: 0x7f8bf98b8be6 - rustc_interface[53a414ae04dc6ffb]::passes::analysis
42: 0x7f8bf98b85e7 - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>
43: 0x7f8bfa14418b - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::SingleCache<rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, true>
44: 0x7f8bfa143e78 - rustc_query_impl[3625cc0592f96219]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
45: 0x7f8bf9dd4256 - rustc_interface[53a414ae04dc6ffb]::interface::run_compiler::<core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>, rustc_driver_impl[8683aa37472b7dde]::run_compiler::{closure#0}>::{closure#1}
46: 0x7f8bf9d1b95b - std[1f2242ed6435445e]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[53a414ae04dc6ffb]::util::run_in_thread_with_globals<rustc_interface[53a414ae04dc6ffb]::interface::run_compiler<core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>, rustc_driver_impl[8683aa37472b7dde]::run_compiler::{closure#0}>::{closure#1}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>
47: 0x7f8bf9d1b72a - <<std[1f2242ed6435445e]::thread::Builder>::spawn_unchecked_<rustc_interface[53a414ae04dc6ffb]::util::run_in_thread_with_globals<rustc_interface[53a414ae04dc6ffb]::interface::run_compiler<core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>, rustc_driver_impl[8683aa37472b7dde]::run_compiler::{closure#0}>::{closure#1}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>::{closure#1} as core[3cad2706d8bdcdc4]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
48: 0x7f8bf48415fb - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::ha1963004222e7822
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
49: 0x7f8bf48415fb - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h1086ced1f7c494c2
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
50: 0x7f8bf48415fb - std::sys::pal::unix::thread::Thread::new::thread_start::ha8af9c992ef0b208
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/pal/unix/thread.rs:108:17
51: 0x7f8bf4748ea7 - start_thread
52: 0x7f8bf4666acf - clone
53: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [orphan_check_impl] checking whether impl `<impl at src/main.rs:17:1: 19:22>` follows the orphan rules
#1 [coherent_trait] coherence checking all impls of trait `core::convert::From`
end of query stack
error: could not compile `rust_bug` (bin "rust_bug")
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_hir_analysis/src/coherence/orphan.rs:554:60:
no entry found for key
stack backtrace:
0: rust_begin_unwind
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:665:5
1: core::panicking::panic_fmt
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:74:14
2: core::panicking::panic_display
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:264:5
3: core::option::expect_failed
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/option.rs:2030:5
4: <rustc_hir_analysis::coherence::orphan::TyVarReplacer as rustc_type_ir::fold::TypeFolder<rustc_middle::ty::context::TyCtxt>>::fold_ty
5: <&rustc_middle::ty::list::RawList<(), rustc_middle::ty::generic_args::GenericArg> as rustc_type_ir::fold::TypeFoldable<rustc_middle::ty::context::TyCtxt>>::try_fold_with::<rustc_hir_analysis::coherence::orphan::TyVarReplacer>
6: <rustc_hir_analysis::coherence::orphan::TyVarReplacer as rustc_type_ir::fold::TypeFolder<rustc_middle::ty::context::TyCtxt>>::fold_ty
7: rustc_hir_analysis::coherence::orphan::orphan_check
8: rustc_hir_analysis::coherence::orphan::orphan_check_impl
[... omitted 1 frame ...]
9: rustc_hir_analysis::coherence::coherent_trait
[... omitted 1 frame ...]
10: rustc_hir_analysis::check::wfcheck::check_item
11: rustc_hir_analysis::check::wfcheck::check_well_formed
[... omitted 1 frame ...]
12: rustc_hir_analysis::check::wfcheck::check_mod_type_wf
[... omitted 1 frame ...]
13: rustc_hir_analysis::check_crate
14: rustc_interface::passes::analysis
[... omitted 1 frame ...]
15: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [orphan_check_impl] checking whether impl `<impl at src/main.rs:17:1: 19:22>` follows the orphan rules
#1 [coherent_trait] coherence checking all impls of trait `core::convert::From`
#2 [check_well_formed] checking that `<impl at src/main.rs:17:1: 19:22>` is well-formed
#3 [check_mod_type_wf] checking that types are well-formed in top-level module
#4 [analysis] running analysis passes on this crate
end of query stack
```
</p>
</details>
| I-ICE,T-compiler,C-bug,S-has-mcve,S-bug-has-test,A-coherence | low | Critical |
2,646,586,585 | godot | Evaluator has no access to autoload singletons | ### Tested versions
4.4 dev4
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description
When you try to access an autoload singleton via the evaluator, you will get the error `Invalid named index 'AutoloadName' for base type Object`
The workaround is to access it by path with `get_node("/root/AutoloadName")` (if you are inside a Node script)
### Steps to reproduce
1. Add autoload singleton
2. Run a script with breakpoint
3. Type the autoload's name in the Debugger's Evaluator
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Critical |
2,646,591,852 | godot | Evaluator does not work without breakpoint | ### Tested versions
4.4 dev4
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description
When you try to use Evaluator without a breakpoint (by using Pause Running Project), you'll get `GDScriptLanguage::debug_get_stack_level_instance: Index p_level = -1 is out of bounds (_call_stack.stack_pos = 0).` error and the project will resume.
The error itself is expected (there really is no stack trace), but the resume part is some weird bug. The evaluator should just stay disabled in this case, yet you can still write and execute an expression.
### Steps to reproduce
1. Run project from editor
2. Pause it (F7)
3. Use Debugger's Evaluator
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Critical |
2,646,593,353 | godot | Built-in script live reloading is broken again | ### Tested versions
Broken in 4.4 dev4
Works in 4.4 dev3
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description
#94012 implemented live reloading for built-in scripts, but it's not working in 4.4 dev4. Modifying and saving a built-in script while the project is running will result in
```
E GDScriptLanguage::reload_scripts: Condition "fresh.is_null()" is true. Continuing.
<C++ Source> modules\gdscript\gdscript.cpp:2668 @ GDScriptLanguage::reload_scripts()
```
and the runtime script is unchanged.
### Steps to reproduce
1. Add a node with a built-in script (which, for example, prints something in `_process()`)
2. Run the project
3. Modify the script and save (e.g. change what it's printing)
4. The change is not reflected in the running project
5. The error above is also printed
### Minimal reproduction project (MRP)
N/A | bug,topic:gdscript,topic:editor,regression | low | Critical |
2,646,621,876 | next.js | Parallel Intercepted routes not working when accessed from 404 error pages | ### Link to the code that reproduces this issue
https://github.com/itsjavi/nextjs-demos/tree/issue/parallel-intercepted-404
### To Reproduce
1. Add a parallel intercepted route [like in this example](https://github.com/itsjavi/nextjs-demos/tree/issue/parallel-intercepted-404/src/app)
2. Add the following to your app:
   - an error.tsx page
   - support for the `@modal` slot in your layout.tsx props, and put it inside the body
   - a Link in your layout.tsx, linking to the intercepted route
3. Navigate to any non-existing route to trigger a 404 error page
4. Click the link in the layout that would trigger the intercepted route
### Current vs. Expected behavior
### Current behavior
#### On `next dev`
When accessing Parallel Intercepted routes from a 404 page, it triggers the following client-side error:
```
UI:
Application error: a client-side exception has occurred (see the browser console for more information).
Console:
app-router.ts:52 TypeError: initialTree is not iterable
at applyPatch (apply-router-state-patch-to-tree.ts:17:51)
at applyRouterStatePatchToTree (apply-router-state-patch-to-tree.ts:107:26)
at navigate-reducer.ts:208:50
The above error occurred in the <Router> component. It was handled by the <ErrorBoundaryHandler> error boundary.
```
#### On `next start`
On `next start`, after build, navigating to the intercepted route from a 404 page works, but calling router.back() will trigger the same error again.
### Expected behavior
- The intercepted route content is shown on 404 pages, when navigating to that route.
- router.back() works as expected in all the above scenarios.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.18.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: 9.12.2
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
- The Parallel Intercepted routes work fine from error.tsx pages.
- Repository to reproduce the issue: https://github.com/itsjavi/nextjs-demos/tree/issue/parallel-intercepted-404
- PR with the problematic changes: https://github.com/itsjavi/nextjs-demos/pull/1/files
CC @feedthejim
Related issue: https://github.com/vercel/next.js/issues/48289 | bug,Parallel & Intercepting Routes | low | Critical |
2,646,623,333 | godot | Opening File in Script Editor Overwrites Contents of Current Tab and Doesn't Change the Tab Label | ### Tested versions
Reproducible in: v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Ubuntu 20.04.6 LTS (Focal Fossa) - X11 - Vulkan (Forward+) - dedicated Quadro P600 (nvidia; 470.256.02) - Intel(R) Core(TM) i5-8300H CPU @ 2.30GHz (8 Threads)
### Issue description
With a file currently open in the active script editor tab, opening another file (through the FileSystem Panel or the File->Open menu option in the script editor) overwrites the contents of the current file in the active tab and does not update the tab label. I expected the operation to open the second file in a new tab with the correct label, and not overwrite the contents of the previously opened file in the old tab.
### Steps to reproduce
Open any project with at least 2 script files (say `a.gd` and `b.gd`). Assume the script editor shows the code from the `a.gd` file with a label of 'a'. In the editor panel, open the `b.gd` file in the script editor via the File->Open menu selection. No new tab opens. Instead, the contents of the 'a' tab get overwritten with the contents of the `b.gd` file.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,646,644,848 | godot | physical_keycode differs between X11 and wayland | ### Tested versions
Reproducible in:
- v4.3.stable.official [77dcf97d8]
- v4.4.dev4.official [36e6207bb]
### System information
Godot v4.4.dev4 - Arch Linux #1 SMP PREEMPT_DYNAMIC Fri, 01 Nov 2024 03:30:41 +0000 on Wayland - X11 display driver, Multi-window, 3 monitors - Vulkan (Mobile) - dedicated AMD Radeon RX 6900 XT (RADV NAVI21) - AMD Ryzen 9 3950X 16-Core Processor (32 threads)
### Issue description
On a Swiss German keyboard layout, the physical_keycode of the § and < keys are swapped on Wayland compared to X11. keycode and key_label are the same. The keys ü and ¨ have different physical_keycode and keycode, but the same key_label. I tested a few other ISO layouts and they had the same issue.
| Keyboard | X11 physical_keycode | Wayland physical_keycode | X11 keycode | Wayland keycode | X11 key_label | Wayland key_label |
| --------------- | -------------------- | ------------------------ | ----------- | --------------- | ------------- | ----------------- |
| § (below Esc)   | 96                   | 167                      | 167         | 167             | 167           | 167               |
| < (left of Y)   | 167                  | 96                       | 60          | 60              | 60            | 60                |
| ü (right of P)  | 91                   | 123                      | 91          | 123             | 220           | 220               |
| ¨ (right of ü)  | 93                   | 125                      | 93          | 125             | 93            | 125               |
### Steps to reproduce
You need a keyboard with a physical ISO layout. JIS apparently works correctly (https://github.com/godotengine/godot/issues/99008#issuecomment-2569627474). No idea about ANSI.
Open the attached MRP and set `display/display_server/driver.linuxbsd` to `x11` or `wayland`. Start the project, press the keys from the table above and check the values in the output tab in the editor.
### Minimal reproduction project (MRP)
[bug-physical_key.zip](https://github.com/user-attachments/files/17689638/bug-physical_key.zip)
| bug,platform:linuxbsd,topic:porting,topic:input | low | Critical |
2,646,667,307 | stable-diffusion-webui | [Bug]: --dump-sysinfo command line argument does not work | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Starting WebUI with the command line argument --dump-sysinfo causes a crash soon after starting.
### Steps to reproduce the problem
Start WebUI with the command line argument --dump-sysinfo
### What should have happened?
The --dump-sysinfo option should dump sysinfo.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-11-09-23-37.json](https://github.com/user-attachments/files/17689692/sysinfo-2024-11-09-23-37.json)
### Console logs
```Shell
venv "E:\ai_stuff\stable-diffusion-webui\venv\Scripts\Python.exe"
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
E:\ai_stuff\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Traceback (most recent call last):
File "E:\ai_stuff\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "E:\ai_stuff\stable-diffusion-webui\launch.py", line 29, in main
filename = launch_utils.dump_sysinfo()
File "E:\ai_stuff\stable-diffusion-webui\modules\launch_utils.py", line 476, in dump_sysinfo
text = sysinfo.get()
File "E:\ai_stuff\stable-diffusion-webui\modules\sysinfo.py", line 46, in get
res = get_dict()
File "E:\ai_stuff\stable-diffusion-webui\modules\sysinfo.py", line 119, in get_dict
"Extensions": get_extensions(enabled=True, fallback_disabled_extensions=config.get('disabled_extensions', [])),
AttributeError: 'str' object has no attribute 'get'
Press any key to continue . . .
```
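The final traceback frame (`config.get('disabled_extensions', [])` raising `AttributeError: 'str' object has no attribute 'get'`) suggests `config` was loaded as a raw string instead of a parsed dict. A minimal, hypothetical sketch of that failure mode (illustration only, not the webui code itself):

```python
# Hypothetical minimal reproduction of the AttributeError in the log above:
# dict-style .get() works on a parsed config, but fails on raw JSON text.
config_parsed = {"disabled_extensions": ["foo"]}
config_raw = '{"disabled_extensions": ["foo"]}'  # unparsed JSON string

ok = config_parsed.get("disabled_extensions", [])  # works: returns the list

try:
    config_raw.get("disabled_extensions", [])  # same call, but on a str
    error_message = ""
except AttributeError as exc:
    error_message = str(exc)  # "'str' object has no attribute 'get'"
```

If this is the cause, the fix would be to parse the config before `get_dict()` touches it.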
### Additional information
_No response_ | bug-report | low | Critical |
2,646,671,163 | godot | Invoking .callv() when doing so would cause an error instead fails silently | ### Tested versions
- Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility)
### Issue description
Invoking .callv() in a situation that would normally produce an (invoke-related) error instead fails silently.
Qualifying errors include, but may not be limited to:
- `Invalid type in function`
- `Invalid call to function`
### Steps to reproduce
1. Execute the following code:
```gdscript
func _ready() -> void:
    print_int.callv(["5"]) # Fails silently. No error.
    self.callv("print_int", ["5"]) # Fails silently. No error.
    print_int.call("5") # Errors correctly: Invalid type in function '...'. Cannot convert argument 1 from String to int.

func print_int(number : int) -> void:
    print(number)
```
2. Observe that `.callv()` does not correctly result in an error
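For contrast, a plain-Python analogue (hypothetical helper, not the Godot API) of dynamic invocation with an argument list: Python's annotations are unenforced, so a type mismatch slips through, but an invocation error such as a wrong argument count raises loudly rather than failing silently, which is the behavior the reporter expects from `callv()`:

```python
def print_int(number: int) -> None:
    print(number)

def callv(fn, args):
    # Dynamic invocation with a runtime argument list, loosely modeled
    # on Godot's fn.callv(args).
    return fn(*args)

type_mismatch_silent = True
try:
    callv(print_int, ["5"])  # annotations are hints only: prints the string
except TypeError:
    type_mismatch_silent = False

arity_error_raised = False
try:
    callv(print_int, ["5", "6"])  # wrong argument count: raises TypeError
except TypeError:
    arity_error_raised = True
```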
### Minimal reproduction project (MRP)
N/A | bug,topic:core | low | Critical |
2,646,672,207 | deno | vite-plugin-checker invokes node: /usr/bin/env: 'node': No such file or directory | Version: Deno 2.0.5
Using vite with `vite-plugin-checker` causes it to invoke Node, and the build fails:
```
/usr/bin/env: 'node': No such file or directory
```
Normally Deno is supposed to override calls to `node` to `deno`, but it's failing to do so here.
Here is a reproduction Dockerfile:
```dockerfile
FROM denoland/deno:2.0.5
WORKDIR /app
RUN deno run -A npm:create-vite@latest -t react-ts .
RUN deno install
RUN deno add npm:vite-plugin-checker
RUN cat <<EOF > vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import checker from 'vite-plugin-checker'
export default defineConfig({
plugins: [react(), checker({ typescript: true })],
})
EOF
RUN deno run -A npm:vite build
```
The dockerfile does not build:
```
➜ vite-deno-repro docker build -t yolo .
[+] Building 1.9s (10/10) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 461B 0.0s
=> [internal] load metadata for docker.io/denoland/deno:2.0.5 0.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/7] FROM docker.io/denoland/deno:2.0.5@sha256:508bb46f0bb996c60498ddf094324078f05021b08854a3e308dcc1d5a3e36b38 0.0s
=> CACHED [2/7] WORKDIR /app 0.0s
=> CACHED [3/7] RUN deno run -A npm:create-vite@latest -t react-ts . 0.0s
=> CACHED [4/7] RUN deno install 0.0s
=> CACHED [5/7] RUN deno add npm:vite-plugin-checker 0.0s
=> CACHED [6/7] RUN cat <<EOF > vite.config.ts 0.0s
=> ERROR [7/7] RUN deno run -A npm:vite build 1.7s
------
> [7/7] RUN deno run -A npm:vite build:
1.526 vite v5.4.10 building for production...
1.604 /usr/bin/env: 'node': No such file or directory
------
Dockerfile:20
--------------------
18 | EOF
19 |
20 | >>> RUN deno run -A npm:vite build
21 |
--------------------
ERROR: failed to solve: process "/bin/sh -c deno run -A npm:vite build" did not complete successfully: exit code: 127
``` | bug,node compat | low | Critical |
2,646,678,581 | node | Child process breaks on command | ### Version
23.1.0
### Platform
```text
* fails on Debian 12 as a regular user with statically specified ports
* fails on Windows 10 as a regular user with statically specified ports
* does not fail Debian 12 via systemd (as an OS service run as root)
* does not fail if the ports are randomly assigned by specifying port 0 on server invocation
```
### Subsystem
_No response_
### What steps will reproduce the bug?
1. execute a child process with command `nmap --open 127.0.0.1` via instruction from any kind of network stream, such as WebSocket message or HTTP request.
```typescript
const port_map = function (callback:() => void):void {
    const command:string = "nmap --open 127.0.0.1";
    node.child_process.exec(command, function (error:node_childProcess_ExecException, stdout:string):void {
        if (callback !== null) {
            callback();
        }
    });
};
export default port_map;
```
### How often does it reproduce? Is there a required condition?
I can reproduce this 100% from command line in both Debian 12 and Windows 10. 0% from systemd.
### What is the expected behavior? Why is that the expected behavior?
child_process exec calls a callback and child_process spawn calls event handlers in accordance with Node API definitions.
### What do you see instead?
The application crashes fatally if the error event on the corresponding socket is not trapped immediately at first connection time before other instructions are executed. Assigning a handler to the socket's error event listener later is insufficient, for example assigning the handler from within the following code still results in crashes:
```typescript
socket.once("data", handler);
```
The actual error reported by the socket is:
```typescript
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:216:20) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
```
The error is the direct result of a child process execution. The corresponding child process executes in response to instructions transmitted on the corresponding socket, but is otherwise not related or associated with the socket.
This following error is the fatal error messaging if the error is not immediately trapped on the corresponding socket.
```
node:events:485
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:216:20)
Emitted 'error' event on Socket instance at:
at emitErrorNT (node:internal/streams/destroy:170:8)
at emitErrorCloseNT (node:internal/streams/destroy:129:3)
at process.processTicksAndRejections (node:internal/process/task_queues:90:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
```
The error message describes a socket failure, but that is misleading: the error is produced solely by the child process call, and only for the command specified.
### Additional information
Some of the things I have tried:
1. I have tried a few other trivial commands like `echo hello` and `ps -a`. They work without issue.
2. In exec I have tried specifying different shells, `/bin/sh` and `/bin/bash`
3. I have tried to call the child process with exec and spawn
4. I have tried to call the child process in different ways: manually, recursively, setTimeout, from automated response to a socket message
5. I validated that all sockets connected, both http and ws, have assigned event handlers for the error event and I have also isolated all sockets from the problematic behavior. | needs more info | low | Critical |
2,646,683,791 | flutter | Add backgroundColor property for children in ExpansionTile | ### Use case
### Description:
It would be helpful if the ExpansionTile widget could have a backgroundColor property specifically for its child elements. Currently, ExpansionTile provides a backgroundColor property that applies to the header area, but there's no way to set a separate background color for the expanded children section. Adding this feature would enable more customization options for developers, allowing us to differentiate the background color of the children from the tile header.
### Proposal
- Introduce a new property, e.g., childrenBackgroundColor, in the ExpansionTile widget.
- This property should control the background color of the expanded children section only, leaving the header background color unaffected.
- Ensure that the property is compatible with both light and dark themes and can be set independently of the backgroundColor for the header. | c: new feature,framework,f: material design,waiting for PR to land (fixed),c: proposal,P3,team-design,triaged-design | low | Minor |
2,646,686,843 | PowerToys | Cant type in power toys run | ### Microsoft PowerToys version
0.86.0
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
I'm unsure how to trigger this, but I frequently end up unable to enter text into PowerToys Run after it launches. I've tried switching between admin or not, but have been unable to debug the root cause. Would appreciate help debugging.
[PowerToysReport_2024-11-09-18-14-33.zip](https://github.com/user-attachments/files/17689761/PowerToysReport_2024-11-09-18-14-33.zip)
### ✔️ Expected Behavior
I can launch PowerToys Run and type
### ❌ Actual Behavior
PowerToys Run launched, but I was unable to type into the launcher
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,646,688,507 | langchain | The Qdrant Vector Store, for filtering is not using the right imports. | ### URL
https://python.langchain.com/docs/integrations/vectorstores/qdrant/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The code https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/qdrant.ipynb
```python
from qdrant_client.http import models
results = vector_store.similarity_search(
    query="Who are the best soccer players in the world?",
    k=1,
    filter=models.Filter(
        should=[
            models.FieldCondition(
                key="page_content",
                match=models.MatchValue(
                    value="The top 10 soccer players in the world right now."
                ),
            ),
        ]
    ),
)

for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
```
This code uses the wrong import; the correct import is:
`from qdrant_client import models`
```python
from qdrant_client import models
results = vector_store.similarity_search(
    query="Who are the best soccer players in the world?",
    k=1,
    filter=models.Filter(
        should=[
            models.FieldCondition(
                key="page_content",
                match=models.MatchValue(
                    value="The top 10 soccer players in the world right now."
                ),
            ),
        ]
    ),
)

for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
```
Source:
https://qdrant.tech/documentation/concepts/filtering/
### Idea or request for content:
Please update the import; after I changed it to the one used in the Qdrant documentation, the metadata filtering started working fine.
Source File: https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/qdrant.ipynb | 🤖:docs | low | Minor |
2,646,712,113 | pytorch | cpp_wrapper and triton.debug_sync_graph errors out | ### ๐ Describe the bug
Enabling both these options causes an error.
Code:
```
import torch
batch_size = 32
seq_length = 50
hidden_size = 768
def test_fn():
inp = torch.randn(batch_size, seq_length, hidden_size, device="cuda")
weight = torch.randn(hidden_size, hidden_size, device="cuda")
matmul_output = inp @ weight
final_output = torch.nn.LayerNorm(hidden_size, device="cuda")(matmul_output)
return final_output
comp = torch.compile(options={"cpp_wrapper": True, "triton.debug_sync_graph": True, "force_disable_caches": True})(test_fn)
print(comp())
```
```
### Error logs
```
/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py:222: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
W1109 17:42:38.167000 2481914 torch/_inductor/utils.py:847] [0/0_1] on error, temporary cache dir kept at /tmp/torchinductor_gabeferns/tmp0hvnbkyj
Traceback (most recent call last):
File "/home/gabeferns/org/debug/config-fuzzer-1/repro2.py", line 13, in <module>
print(comp())
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 1447, in __call__
return self._torchdynamo_orig_callable(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 1232, in __call__
result = self._inner_convert(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 550, in __call__
return _compile(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 979, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 709, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/gabeferns/pt-envs/applications/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 744, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 234, in _fn
return fn(*args, **kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 663, in transform
tracer.run()
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 2914, in run
super().run()
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 638, in wrapper
return handle_graph_break(self, inst, speculation.reason)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 679, in handle_graph_break
self.output.compile_subgraph(self, reason=reason)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1110, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1346, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1395, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1444, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1425, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/gabeferns/pt-envs/applications/torch/__init__.py", line 2300, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1427, in compile_fx
return compile_fx(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1469, in compile_fx
return compile_fx(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1707, in compile_fx
return aot_autograd(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 1103, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 1079, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 527, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 778, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/gabeferns/pt-envs/applications/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 197, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1524, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1593, in _fw_compiler_base
return inner_compile(
File "/home/gabeferns/.conda/envs/applications/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 587, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 754, in _compile_fx_inner
compiled_graph = codegen_and_compile(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 651, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 962, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/graph.py", line 2034, in compile_to_fn
return self.compile_to_module().call
File "/home/gabeferns/pt-envs/applications/torch/_inductor/graph.py", line 1954, in compile_to_module
return self._compile_to_module()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/graph.py", line 1988, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 3057, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/runtime/compile_tasks.py", line 45, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_gabeferns/tmp0hvnbkyj/yf/cyfwonzzivczmccqzwwzu6wl4ydh6g57mdsv5lt5e4bvso343udp.py", line 218, in <module>
inductor_entry = CppWrapperCodeCache.load_pybinding(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2562, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2554, in future
result = get_result()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2345, in load_fn
result = worker_fn()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2385, in _worker_compile_cpp
cpp_builder.build()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/cpp_builder.py", line 1528, in build
status = run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/cpp_builder.py", line 350, in run_compile_cmd
return _run_compile_cmd(cmd_line, cwd)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/cpp_builder.py", line 344, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_gabeferns/tmp0hvnbkyj/hd/chd2ekzzlyl5jk3nvj2uhwi2qfe35pyhosnvi6dnuqv7inew23ir.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX512 -D USE_CUDA -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/home/gabeferns/.conda/envs/applications/include/python3.10 -I/home/gabeferns/.conda/envs/applications/include/python3.10 -I/home/gabeferns/pt-envs/applications/torch/include -I/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/api/include -I/home/gabeferns/pt-envs/applications/torch/include/TH -I/home/gabeferns/pt-envs/applications/torch/include/THC -I/home/gabeferns/pt-envs/applications/torch/include -I/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/api/include -I/home/gabeferns/pt-envs/applications/torch/include/TH -I/home/gabeferns/pt-envs/applications/torch/include/THC -I/usr/local/cuda-12.4/include -mavx512f -mavx512dq -mavx512vl -mavx512bw -mfma -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -lc10_cuda -lcuda -ltorch_cuda -L/home/gabeferns/.conda/envs/applications/lib -L/home/gabeferns/pt-envs/applications/torch/lib -L/home/gabeferns/pt-envs/applications/torch/lib -L/usr/local/cuda-12.4/lib64 -o /tmp/torchinductor_gabeferns/tmp0hvnbkyj/hd/chd2ekzzlyl5jk3nvj2uhwi2qfe35pyhosnvi6dnuqv7inew23ir.so
Output:
/tmp/torchinductor_gabeferns/tmp0hvnbkyj/hd/chd2ekzzlyl5jk3nvj2uhwi2qfe35pyhosnvi6dnuqv7inew23ir.cpp:135:131: warning: integer constant is so large that it is unsigned
135 | AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cuda_randint_low_out(buf0, -9223372036854775808L, 9223372036854775807L, var_array_0, 1));
| ^
/tmp/torchinductor_gabeferns/tmp0hvnbkyj/hd/chd2ekzzlyl5jk3nvj2uhwi2qfe35pyhosnvi6dnuqv7inew23ir.cpp: In function โvoid inductor_entry_impl(AtenTensorOpaque**, AtenTensorOpaque**)โ:
/tmp/torchinductor_gabeferns/tmp0hvnbkyj/hd/chd2ekzzlyl5jk3nvj2uhwi2qfe35pyhosnvi6dnuqv7inew23ir.cpp:190:10: error: expected primary-expression before โ.โ token
190 | torch.cuda.synchronize()
| ^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
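The `torch.cuda.synchronize()` line at `.cpp:190` is Python, so g++ rejects it: the debug-sync pass appears to emit the Python call regardless of the wrapper target. A plausible fix direction is to dispatch the emitted statement on the target language. This is a hedged sketch only, not PyTorch's actual codegen — `emit_debug_sync` is a hypothetical helper, and `C10_CUDA_CHECK(cudaDeviceSynchronize())` stands in for whatever C++-side sync the real cpp_wrapper would use:

```python
def emit_debug_sync(cpp_wrapper: bool) -> str:
    """Sketch: the debug-sync statement must match the wrapper's target language."""
    if cpp_wrapper:
        # C++ wrapper: a CUDA-runtime call is a valid C++ statement;
        # the Python API call emitted today is not.
        return "C10_CUDA_CHECK(cudaDeviceSynchronize());"
    # Python wrapper: the current emission is fine here.
    return "torch.cuda.synchronize()"

print(emit_debug_sync(True))
print(emit_debug_sync(False))
```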
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git13eb3b3
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 46
On-line CPU(s) list: 0-45
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 46
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.9 MiB (46 instances)
L1i cache: 2.9 MiB (46 instances)
L2 cache: 23 MiB (46 instances)
L3 cache: 736 MiB (46 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-45
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.6.0a0+git13eb3b3
[pip3] triton==3.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.6.0a0+git13eb3b3 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,646,713,280 | pytorch | cpp_wrapper and triton.debug_sync_kernel errors out | ### 🐛 Describe the bug
Similar to https://github.com/pytorch/pytorch/issues/140219, this combination of options also fails.
Code:
```python
import torch
batch_size = 32
seq_length = 50
hidden_size = 768
def test_fn():
inp = torch.randn(batch_size, seq_length, hidden_size, device="cuda")
weight = torch.randn(hidden_size, hidden_size, device="cuda")
matmul_output = inp @ weight
final_output = torch.nn.LayerNorm(hidden_size, device="cuda")(matmul_output)
return final_output
comp = torch.compile(options={"cpp_wrapper": True, "triton.debug_sync_kernel": True, "force_disable_caches": True})(test_fn)
print(comp())
```
### Error logs
```
/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py:222: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
W1109 17:50:43.704000 2516607 torch/_inductor/utils.py:847] [0/0_1] on error, temporary cache dir kept at /tmp/torchinductor_gabeferns/tmp2x69j_al
Traceback (most recent call last):
File "/home/gabeferns/org/debug/config-fuzzer-1/repro2.py", line 13, in <module>
print(comp())
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 1447, in __call__
return self._torchdynamo_orig_callable(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 1232, in __call__
result = self._inner_convert(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 550, in __call__
return _compile(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 979, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 709, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/gabeferns/pt-envs/applications/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 744, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 234, in _fn
return fn(*args, **kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/convert_frame.py", line 663, in transform
tracer.run()
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 2914, in run
super().run()
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 638, in wrapper
return handle_graph_break(self, inst, speculation.reason)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/symbolic_convert.py", line 679, in handle_graph_break
self.output.compile_subgraph(self, reason=reason)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1110, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1346, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1395, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1444, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/output_graph.py", line 1425, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/gabeferns/pt-envs/applications/torch/__init__.py", line 2300, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1427, in compile_fx
return compile_fx(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1469, in compile_fx
return compile_fx(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1707, in compile_fx
return aot_autograd(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 1103, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 1079, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 527, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/gabeferns/pt-envs/applications/torch/_functorch/aot_autograd.py", line 778, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/gabeferns/pt-envs/applications/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 197, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1524, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 1593, in _fw_compiler_base
return inner_compile(
File "/home/gabeferns/.conda/envs/applications/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 587, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/gabeferns/pt-envs/applications/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 754, in _compile_fx_inner
compiled_graph = codegen_and_compile(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 651, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/compile_fx.py", line 962, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/graph.py", line 2034, in compile_to_fn
return self.compile_to_module().call
File "/home/gabeferns/pt-envs/applications/torch/_inductor/graph.py", line 1954, in compile_to_module
return self._compile_to_module()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/graph.py", line 1988, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 3057, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/runtime/compile_tasks.py", line 45, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_gabeferns/tmp2x69j_al/wr/cwrrzoj2tdotjlwmc6zqlcw5xxv6ghfwuwhz2rdvnwbuijycrfux.py", line 221, in <module>
inductor_entry = CppWrapperCodeCache.load_pybinding(
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2562, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2554, in future
result = get_result()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2345, in load_fn
result = worker_fn()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/codecache.py", line 2385, in _worker_compile_cpp
cpp_builder.build()
File "/home/gabeferns/pt-envs/applications/torch/_inductor/cpp_builder.py", line 1528, in build
status = run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/cpp_builder.py", line 350, in run_compile_cmd
return _run_compile_cmd(cmd_line, cwd)
File "/home/gabeferns/pt-envs/applications/torch/_inductor/cpp_builder.py", line 344, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX512 -D USE_CUDA -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/home/gabeferns/.conda/envs/applications/include/python3.10 -I/home/gabeferns/.conda/envs/applications/include/python3.10 -I/home/gabeferns/pt-envs/applications/torch/include -I/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/api/include -I/home/gabeferns/pt-envs/applications/torch/include/TH -I/home/gabeferns/pt-envs/applications/torch/include/THC -I/home/gabeferns/pt-envs/applications/torch/include -I/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/api/include -I/home/gabeferns/pt-envs/applications/torch/include/TH -I/home/gabeferns/pt-envs/applications/torch/include/THC -I/usr/local/cuda-12.4/include -mavx512f -mavx512dq -mavx512vl -mavx512bw -mfma -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -lc10_cuda -lcuda -ltorch_cuda -L/home/gabeferns/.conda/envs/applications/lib -L/home/gabeferns/pt-envs/applications/torch/lib -L/home/gabeferns/pt-envs/applications/torch/lib -L/usr/local/cuda-12.4/lib64 -o /tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.so
Output:
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:135:131: warning: integer constant is so large that it is unsigned
135 | AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cuda_randint_low_out(buf0, -9223372036854775808L, 9223372036854775807L, var_array_0, 1));
| ^
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp: In function โvoid inductor_entry_impl(AtenTensorOpaque**, AtenTensorOpaque**)โ:
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:136:10: error: expected primary-expression before โ.โ token
136 | torch.cuda.synchronize()
| ^
In file included from /tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:31:
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:140:61: error: โint_array_4โ was not declared in this scope; did you mean โint_array_5โ?
140 | AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided(3, int_array_4, int_array_5, cached_torch_dtype_float32, cached_torch_device_type_cuda, 0, &buf1_handle));
| ^~~~~~~~~~~
/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/inductor/aoti_runtime/utils.h:34:8: note: in definition of macro โAOTI_TORCH_ERROR_CODE_CHECKโ
34 | if ((call) != AOTI_TORCH_SUCCESS) { \
| ^~~~
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:159:10: error: expected primary-expression before โ.โ token
159 | torch.cuda.synchronize()
| ^
In file included from /tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:31:
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:163:61: error: โint_array_6โ was not declared in this scope; did you mean โint_array_7โ?
163 | AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided(2, int_array_6, int_array_7, cached_torch_dtype_float32, cached_torch_device_type_cuda, 0, &buf2_handle));
| ^~~~~~~~~~~
/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/inductor/aoti_runtime/utils.h:34:8: note: in definition of macro โAOTI_TORCH_ERROR_CODE_CHECKโ
34 | if ((call) != AOTI_TORCH_SUCCESS) { \
| ^~~~
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:180:10: error: expected primary-expression before โ.โ token
180 | torch.cuda.synchronize()
| ^
In file included from /tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:31:
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:183:61: error: โint_array_8โ was not declared in this scope; did you mean โint_array_7โ?
183 | AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided(2, int_array_8, int_array_7, cached_torch_dtype_float32, cached_torch_device_type_cuda, 0, &buf3_handle));
| ^~~~~~~~~~~
/home/gabeferns/pt-envs/applications/torch/include/torch/csrc/inductor/aoti_runtime/utils.h:34:8: note: in definition of macro โAOTI_TORCH_ERROR_CODE_CHECKโ
34 | if ((call) != AOTI_TORCH_SUCCESS) { \
| ^~~~
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:192:10: error: expected primary-expression before โ.โ token
192 | torch.cuda.synchronize()
| ^
/tmp/torchinductor_gabeferns/tmp2x69j_al/kl/ckl476gfctiaop3746j24yrqisr24wgiba7mlpvu7qq6q5jjkzjd.cpp:194:57: error: โtmp_tensor_handle_1โ was not declared in this scope; did you mean โtmp_tensor_handle_0โ?
194 | output_handles[0] = wrap_with_raii_handle_if_needed(tmp_tensor_handle_1).release();
| ^~~~~~~~~~~~~~~~~~~
| tmp_tensor_handle_0
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
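Beyond the stray sync lines, this log shows the emission also desynchronizes the wrapper's variable bookkeeping (`int_array_4`, `int_array_6`, and `tmp_tensor_handle_1` are referenced but never declared). A cheap regression guard would be to scan the generated C++ for Python-only calls before invoking the compiler. This is an illustrative sketch, not an actual Inductor check:

```python
import re

def find_python_leaks(cpp_source: str) -> list[str]:
    # Lines that are bare Python API calls (e.g. torch.cuda.synchronize())
    # can never be valid statements in the generated C++ wrapper.
    pattern = re.compile(r"^\s*torch\.[\w.]+\(.*\)\s*$", re.MULTILINE)
    return [line.strip() for line in pattern.findall(cpp_source)]

# Miniature stand-in for the generated wrapper source from the log above.
generated = """
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided(2, int_array_6, int_array_7, dtype, dev, 0, &buf2_handle));
torch.cuda.synchronize()
"""
print(find_python_leaks(generated))  # ['torch.cuda.synchronize()']
```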
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git13eb3b3
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 46
On-line CPU(s) list: 0-45
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 46
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.9 MiB (46 instances)
L1i cache: 2.9 MiB (46 instances)
L2 cache: 23 MiB (46 instances)
L3 cache: 736 MiB (46 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-45
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.6.0a0+git13eb3b3
[pip3] triton==3.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.6.0a0+git13eb3b3 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,646,725,171 | PowerToys | Keyboard Manager apps filter | ### Description of the new feature / enhancement
I suggest adding per-application filters for the Keyboard Manager function.
Sometimes I switch to other apps (like games) where the keyboard remaps aren't expected to apply.
### Scenario when this would be used?
Keyboard manager
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,646,727,495 | react | [Compiler Bug]: "InvalidReact: Mutating component props or hook arguments is not allowed" When Passing a MutableRef as a prop, then mutating it in child | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgBMEAzAQygBsCSoA7OXASwjvwAUyYE7cBhCAFtsdHrgAUASnzAAOm3xxWYAhjLDKCAEql8AXnxQw20uIDkAOQhnJ8+fnzdcsNuPsP8AHgB87j174ACyZKQgFhVjEZADpY4DUNExIAX2T8AHpfBX9PQiYAN3xWPkomOABrPWApPW85bP9FZQhNaMoIAHNzAAkyfIR8IJDCQ2MAURISBEYwfAAjBB5HegB+MwAafATMTR0SaLhYbl5bBodUjKyczPdJAG55ZLs6WgZmVkHg0PCRMXF49DqHZJZLSeoOIwICZTRjiKT6bwyPxbQGJPYHI5RAxmACaCDAZncyU2AG1trtSABdU7uJwuLx5fI3OhPOggZJAA
### Repro steps
1. Instantiate a ref in the parent
2. Mutate the `current` property in a `useEffect` in the child
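For reference, the mutation pattern those two steps describe can be sketched in plain JavaScript — `useRef`/`useEffect` and the component tree are modeled here with ordinary objects and function calls purely for illustration (the names `parent`, `child`, and `myRef` are made up):

```javascript
// Plain-JS model of the flagged pattern. In the real repro, `ref` comes
// from useRef() in the parent and the mutation runs inside a useEffect
// in the child; here both are ordinary functions.

// "Parent": creates a mutable ref object and passes it down as a prop.
function parent() {
  const ref = { current: null };
  child({ myRef: ref }); // in React: <Child myRef={ref} />
  return ref;
}

// "Child": writes to the `current` property of the prop it received.
// This write is what the compiler reports as mutating component props,
// even though it targets a ref's `current`.
function child(props) {
  props.myRef.current = "mounted"; // in React: inside useEffect(() => { ... }, [])
}

parent(); // the ref's `current` ends up as "mounted"
```

The report, in other words, is that the compiler doesn't recognize the prop as a ref, so a `ref.current` write inside an effect gets flagged as a prop mutation.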
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
19.0.0-beta-63b359f-20241101 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,646,752,889 | react | [Compiler Bug]: "Mutating a value returned from 'useContext()', which should not be mutated" When the value is a Ref | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAjggDswCBZATwGETcMCBeQmBAQ3tuPvVwApgAHWKCebALaYANggBKCAGbJ8wOLFbdlxKFKkBfYXoCUw4Rmx58AE0VsdBBVGJxcASxL4ACmw25akkgRuPiMVYXxCEjJ8DAlpOUV8ZigwBIU+AHIAOQgMkxFiCNZcWEK+cIj8AB4APgrK6qouHlwAOk8YCAA3VxsYfC62KSgERmBgWMkZeQU9PTrChsqq6gALVykrf2xiIIIAegWl6v2mugZ2zp6+o+XDiqMAbkNTYkdnNw81ja2IAN3gqEhIsiKQCMAYug4tNEnokvgUghmgw+GduAx8hVEQBRBQKBAuPghJI1MKLCKTeIzVpqGC+eEZACaCDAGQqegANPgANqUmEKAC6+QiFWKpWqVlcXXuxAMxBAeiAA
### Repro steps
1. Instantiate a Context
2. In the parent, instantiate a ref
3. Wrap the child in the Context's Provider, and pass the ref as the value
4. Read the context in the child, and mutate the ref's `current` in a `useEffect`
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
19.0.0-beta-63b359f-20241101 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,646,757,572 | neovim | Feature: Make `priority` affect `line_hl_group` | ### Problem
At the moment `line_hl_group` overrides extmarks that use `hl_group` (their highlight groups are discarded), but it doesn't affect extmarks that use `virt_text` (the background color isn't combined).

<sup>The `label` text loses its highlight group color despite having a higher priority value. The surrounding virtual text doesn't combine the background color, even with `hl_mode="combine"`.</sup>
So, I think it would be nice if we could add extmarks on top of `line_hl_group` via `priority` so they don't get overridden by it.
### Expected behavior
Extmarks with a higher priority than `line_hl_group` will be added **on top** of the highlight group of the line.
This way it would be possible to highlight a region of lines and have virtual texts & specific highlight groups be visible. | enhancement,marks | low | Minor |
2,646,826,066 | flutter | Setting an offset for the `Tooltip` can cause the `GestureDetector`'s tap event to become ineffective. | ### Steps to reproduce
I suspect that the offset is only applied to the view part, and the mask part is not handled, which is causing the issue.
```dart
Tooltip(
message: widget.tooltip ?? '',
// After adding these three lines, the `GestureDetector`'s tap event becomes ineffective.
// preferBelow: false,
// verticalOffset: -13,
// margin: const EdgeInsets.only(left: 30),
child: GestureDetector(onTap: widget.onPressed,)
)
```
### Expected results
The tap event fires.
### Actual results
The tap event does not fire.
### Code sample
```dart
Tooltip(
message: widget.tooltip ?? '',
// After adding these three lines, the `GestureDetector`'s tap event becomes ineffective.
// preferBelow: false,
// verticalOffset: -13,
// margin: const EdgeInsets.only(left: 30),
child: GestureDetector(onTap: widget.onPressed,)
)
```
### Flutter Doctor output
...
| framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Minor |
2,646,832,124 | PowerToys | Advances Paste Settings, Typo/Truncated words | ### Microsoft PowerToys version
0.86.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Advanced Paste, Settings
### Steps to reproduce
In the settings for Advanced Paste, some of the words are cut off. See screenshot.

Ok so this seemed like a good first issue, so I decided to try to fix it... problem is _I don't really know any coding_, but I thought I would give it a try. I even tried to get GitHub Copilot to help me, but all I got was the directory where I [think](src/settings-ui/Settings.UI/SettingsXAML/Controls/ShortcutControl/ShortcutControl.xaml.cs) the changes need to be made (turn on word wrap, or maybe a bigger dialog box, or something like that...).
Well, I so badly wanted a "Thanks @technobulb" in the next release.....
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-Settings,Area-User Interface,Priority-3,Cost-Small | low | Minor |
2,646,879,276 | next.js | Cache tags are not respected if the route previously threw `notFound()` | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/j893gh
### To Reproduce
1. Create a page route that fetches some data using either a fetch or unstable_cache. The route should return a 404 or 200 response based on the data from the fetch or unstable_cache.
2. Provide a way to invalidate the cache tag, such as an API route and a button to call the route.
3. Run next server
4. Observe the 404 not-found page
5. Attempt to invalidate the cache tag
6. Observe that the page does not regenerate
### Current vs. Expected behavior
When using either a fetch with cache tags or unstable_cache with cache tags, those cache tags cannot be invalidated if the page ends up throwing `notFound()`.
For example:
let's say the system is fetching from a CMS to gather page data. If the page doesn't exist in the CMS, the route is expected to return a 404 by throwing `notFound()`. This works as expected with the appropriate logic.
Then, if that page is later created in the CMS, the CMS should trigger Next.js to invalidate the cache tag. But calling `revalidateTag()` has no effect.
The alternative is to use `revalidatePath()`, but that has cascading effects, invalidating more data than necessary (such as fetch data in the layout). This then causes more ISR than is necessary just to get the 404 to resolve correctly.
Expected Behavior:
Regenerate the route if the cache tag for that route is invalidated.
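The expected behavior can be illustrated with a tiny in-memory model of tag-based caching — the helpers below (`renderRoute`, `invalidateTag`, the `cache` map) are made-up stand-ins for illustration, not Next.js APIs:

```javascript
// Minimal in-memory model of tag-based route caching. The key point:
// after a tag is invalidated, the next render must re-run the fetch,
// even when the cached result for that tag was a 404.

const cache = new Map(); // tag -> cached render result

function renderRoute(tag, fetchPage) {
  if (cache.has(tag)) return cache.get(tag); // cached, including 404s
  const page = fetchPage();
  const result = page ? { status: 200, page } : { status: 404 };
  cache.set(tag, result); // a 404 result is cached under the tag too
  return result;
}

function invalidateTag(tag) {
  cache.delete(tag); // expected to work whether the cached result was 200 or 404
}

// Simulated CMS: the page doesn't exist at first and is created later.
let cmsPage = null;
const fetchPage = () => cmsPage;

renderRoute("page:home", fetchPage); // 404, now cached under "page:home"
cmsPage = "Hello";
invalidateTag("page:home");          // in Next.js, this is the step that has no effect after a 404
renderRoute("page:home", fetchPage); // expected: 200 with the new page
```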
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.19
pnpm: 8.15.4
Relevant Packages:
next: 15.0.4-canary.4 // Latest available version is detected (15.0.4-canary.4).
eslint-config-next: 14.2.1
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next start (local), Vercel (Deployed)
### Additional context
Verified on the canary release.
| examples | low | Minor |
2,646,885,922 | rust | Directly calling `"x86-interrupt" fn` should be invalid | I tried this code:
```rust
#![feature(abi_x86_interrupt)]
extern "x86-interrupt" fn lol() {
}
fn main() {
lol();
}
```
I expected to see rustc reject the erroneous code somewhere before final LLVM lowering, because the notion of "calling" an interrupt is nonsensical, no matter what its signature is.
Instead, this happened:
> rustc-LLVM ERROR: X86 interrupts may not be called directly
## Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (a0d98ff0e 2024-10-31)
binary: rustc
commit-hash: a0d98ff0e5b6e1f2c63fd26f68484792621b235c
commit-date: 2024-10-31
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
<details><summary>Backtrace</summary>
<p>
```
Exited with status 101
Standard Error
Compiling playground v0.0.1 (/playground)
rustc-LLVM ERROR: X86 interrupts may not be called directly
error: could not compile `playground` (bin "playground")
Caused by:
process didn't exit successfully: `/playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/rustc --crate-name playground --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C codegen-units=1 -C debuginfo=2 --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=22a47d3dce05edf1 -C extra-filename=-22a47d3dce05edf1 --out-dir /playground/target/debug/deps -L dependency=/playground/target/debug/deps --extern addr2line=/playground/target/debug/deps/libaddr2line-663db7c2f6740a7f.rlib --extern adler2=/playground/target/debug/deps/libadler2-76f194e4bb5c744f.rlib --extern ahash=/playground/target/debug/deps/libahash-ab6b3a5b01a72a4e.rlib --extern aho_corasick=/playground/target/debug/deps/libaho_corasick-9b605b5da44724d7.rlib --extern aligned_vec=/playground/target/debug/deps/libaligned_vec-4040c93a23c712d8.rlib --extern allocator_api2=/playground/target/debug/deps/liballocator_api2-e718a8c7e3ab25b4.rlib --extern ansi_term=/playground/target/debug/deps/libansi_term-05a0059bfefa5637.rlib --extern anstream=/playground/target/debug/deps/libanstream-95993ee254c24259.rlib --extern anstyle=/playground/target/debug/deps/libanstyle-1175f450b264c844.rlib --extern anstyle_parse=/playground/target/debug/deps/libanstyle_parse-38a2ba1dc35d9272.rlib --extern anstyle_query=/playground/target/debug/deps/libanstyle_query-3bbbaf47558023f2.rlib --extern anyhow=/playground/target/debug/deps/libanyhow-56cb7fb2063e54a2.rlib --extern approx=/playground/target/debug/deps/libapprox-0a22241dc960ef40.rlib --extern arc_swap=/playground/target/debug/deps/libarc_swap-63595da798af843e.rlib --extern arg_enum_proc_macro=/playground/target/debug/deps/libarg_enum_proc_macro-4811605b889bb4a6.so --extern arrayvec=/playground/target/debug/deps/libarrayvec-73d20a788d6d8b6d.rlib --extern async_trait=/playground/target/debug/deps/libasync_trait-3e0139b148322858.so --extern 
atomic=/playground/target/debug/deps/libatomic-1c9a51249ad751b6.rlib --extern atomic_waker=/playground/target/debug/deps/libatomic_waker-f4f217246ab9f6f1.rlib --extern autocfg=/playground/target/debug/deps/libautocfg-e6abe02a0a359e3c.rlib --extern av1_grain=/playground/target/debug/deps/libav1_grain-3b02e3b4fe96a17b.rlib --extern avif_serialize=/playground/target/debug/deps/libavif_serialize-1b7b3e48d7d5fd63.rlib --extern backtrace=/playground/target/debug/deps/libbacktrace-2f6d50da68435c3e.rlib --extern base64=/playground/target/debug/deps/libbase64-ee0043206bfc53cf.rlib --extern bit_set=/playground/target/debug/deps/libbit_set-8b78f1ed1a1d5a90.rlib --extern bit_vec=/playground/target/debug/deps/libbit_vec-2d247e0a8ee0b1fd.rlib --extern bit_field=/playground/target/debug/deps/libbit_field-59fcd3c19a5466e2.rlib --extern bitflags_1_3_2=/playground/target/debug/deps/libbitflags-ad1920bfb419c0bc.rlib --extern bitflags=/playground/target/debug/deps/libbitflags-adeefa1c078295b3.rlib --extern bitstream_io=/playground/target/debug/deps/libbitstream_io-ada3df3e636297f3.rlib --extern block_buffer=/playground/target/debug/deps/libblock_buffer-e97638cccbd2cab8.rlib --extern built=/playground/target/debug/deps/libbuilt-3f26f1850506936b.rlib --extern bumpalo=/playground/target/debug/deps/libbumpalo-a18e1280621168b7.rlib --extern bytemuck=/playground/target/debug/deps/libbytemuck-751571465beedffa.rlib --extern bytemuck_derive=/playground/target/debug/deps/libbytemuck_derive-697afc440704f60a.so --extern byteorder=/playground/target/debug/deps/libbyteorder-e159530ef32e7331.rlib --extern byteorder_lite=/playground/target/debug/deps/libbyteorder_lite-ac418e53dea7797e.rlib --extern bytes_0_4_12=/playground/target/debug/deps/libbytes-02e6555284aa3dd4.rlib --extern bytes=/playground/target/debug/deps/libbytes-e1c4efbc1774e0a3.rlib --extern cc=/playground/target/debug/deps/libcc-c1cced5169aa14a4.rlib --extern cfg_if=/playground/target/debug/deps/libcfg_if-59fd35abb8bea8be.rlib --extern 
chrono=/playground/target/debug/deps/libchrono-bd8c1ad24d4d54d4.rlib --extern clap=/playground/target/debug/deps/libclap-aeb7fb2b02404c9b.rlib --extern clap_builder=/playground/target/debug/deps/libclap_builder-222b210fb5320507.rlib --extern clap_derive=/playground/target/debug/deps/libclap_derive-81948252a8340851.so --extern clap_lex=/playground/target/debug/deps/libclap_lex-6754ed5bce72d9fc.rlib --extern color_quant=/playground/target/debug/deps/libcolor_quant-e7403b9c068f2d68.rlib --extern colorchoice=/playground/target/debug/deps/libcolorchoice-49c1697c3439c7d6.rlib --extern const_default=/playground/target/debug/deps/libconst_default-8efd432a44ffdd2e.rlib --extern cookie=/playground/target/debug/deps/libcookie-d5f251f310a336a2.rlib --extern cookie_store=/playground/target/debug/deps/libcookie_store-d618f9dc2e25434a.rlib --extern cpufeatures=/playground/target/debug/deps/libcpufeatures-0badef4d09ae9679.rlib --extern crc32fast=/playground/target/debug/deps/libcrc32fast-529b5b94aea41cde.rlib --extern crossbeam=/playground/target/debug/deps/libcrossbeam-65e35073ea1e99c3.rlib --extern crossbeam_channel=/playground/target/debug/deps/libcrossbeam_channel-55344e4a07623b8f.rlib --extern crossbeam_deque=/playground/target/debug/deps/libcrossbeam_deque-c336f8b8dce477aa.rlib --extern crossbeam_epoch=/playground/target/debug/deps/libcrossbeam_epoch-944a52cf02e5b9d2.rlib --extern crossbeam_queue=/playground/target/debug/deps/libcrossbeam_queue-ea6ea8de998ecb33.rlib --extern crossbeam_utils=/playground/target/debug/deps/libcrossbeam_utils-69cc502790cce516.rlib --extern crypto_common=/playground/target/debug/deps/libcrypto_common-8548fa2219442650.rlib --extern cssparser=/playground/target/debug/deps/libcssparser-0a8ffbfeac6e2825.rlib --extern cssparser_macros=/playground/target/debug/deps/libcssparser_macros-23514e87f263410f.so --extern csv=/playground/target/debug/deps/libcsv-bd46638160e3fe3d.rlib --extern 
csv_core=/playground/target/debug/deps/libcsv_core-36a4f89ce5d8fdf0.rlib --extern data_encoding=/playground/target/debug/deps/libdata_encoding-76212c157d552f2c.rlib --extern deranged=/playground/target/debug/deps/libderanged-33479523b78ca0be.rlib --extern derivative=/playground/target/debug/deps/libderivative-5efbebdc947595dd.so --extern derive_more=/playground/target/debug/deps/libderive_more-62918b3f79cb9217.so --extern destructure_traitobject=/playground/target/debug/deps/libdestructure_traitobject-b080bf825f241b55.rlib --extern digest=/playground/target/debug/deps/libdigest-8826c4d4876f7bc6.rlib --extern displaydoc=/playground/target/debug/deps/libdisplaydoc-abcdbd24915f07bd.so --extern dtoa=/playground/target/debug/deps/libdtoa-a3e75f208e1fd1b7.rlib --extern dtoa_short=/playground/target/debug/deps/libdtoa_short-964aac2ca54e6afa.rlib --extern ego_tree=/playground/target/debug/deps/libego_tree-641dfcb0540d9314.rlib --extern either=/playground/target/debug/deps/libeither-13fff39ecf70776f.rlib --extern encoding_rs=/playground/target/debug/deps/libencoding_rs-05c27b99ac9cdabf.rlib --extern env_filter=/playground/target/debug/deps/libenv_filter-a5652200c6ea32b3.rlib --extern env_logger=/playground/target/debug/deps/libenv_logger-4220bac2110b4f92.rlib --extern equivalent=/playground/target/debug/deps/libequivalent-412bfd857fdec5e5.rlib --extern errno=/playground/target/debug/deps/liberrno-eb94825902562041.rlib --extern error_chain=/playground/target/debug/deps/liberror_chain-94ecfce774ae6f5c.rlib --extern exr=/playground/target/debug/deps/libexr-016001c1bb88703e.rlib --extern fallible_iterator_0_2_0=/playground/target/debug/deps/libfallible_iterator-4efe1a6224a45962.rlib --extern fallible_iterator=/playground/target/debug/deps/libfallible_iterator-2e43403b7663c233.rlib --extern fallible_streaming_iterator=/playground/target/debug/deps/libfallible_streaming_iterator-710dfcc86a75e434.rlib --extern 
faster_hex=/playground/target/debug/deps/libfaster_hex-0af97615528c7b9e.rlib --extern fastrand=/playground/target/debug/deps/libfastrand-f26e9cfb1b94730c.rlib --extern fdeflate=/playground/target/debug/deps/libfdeflate-08ada09a2af079a2.rlib --extern filetime=/playground/target/debug/deps/libfiletime-97112f31133b84a9.rlib --extern fixedbitset=/playground/target/debug/deps/libfixedbitset-56fdcb6d3120aa1c.rlib --extern flate2=/playground/target/debug/deps/libflate2-2531746255f745f4.rlib --extern fnv=/playground/target/debug/deps/libfnv-b87f391f84df4168.rlib --extern foldhash=/playground/target/debug/deps/libfoldhash-178e39e4a84b8fbf.rlib --extern foreign_types=/playground/target/debug/deps/libforeign_types-bf1014ffc62f0ae0.rlib --extern foreign_types_shared=/playground/target/debug/deps/libforeign_types_shared-e8820102b4a82b58.rlib --extern form_urlencoded=/playground/target/debug/deps/libform_urlencoded-1bdd9f06d2adc69d.rlib --extern futf=/playground/target/debug/deps/libfutf-2ea1554fefb2e49e.rlib --extern futures_0_1_31=/playground/target/debug/deps/libfutures-12ad2a82a5d3e695.rlib --extern futures=/playground/target/debug/deps/libfutures-ab5185401cb8d2dd.rlib --extern futures_channel=/playground/target/debug/deps/libfutures_channel-5a070d86f78181cf.rlib --extern futures_core=/playground/target/debug/deps/libfutures_core-ade5e7159d94df3e.rlib --extern futures_executor=/playground/target/debug/deps/libfutures_executor-58f8f25b856d90b5.rlib --extern futures_io=/playground/target/debug/deps/libfutures_io-db26cc77f13da059.rlib --extern futures_macro=/playground/target/debug/deps/libfutures_macro-8b144d7019fec622.so --extern futures_sink=/playground/target/debug/deps/libfutures_sink-0ce0a82694aa2ad7.rlib --extern futures_task=/playground/target/debug/deps/libfutures_task-563b37bc497c0b01.rlib --extern futures_util=/playground/target/debug/deps/libfutures_util-1e435261c0c28c11.rlib --extern fxhash=/playground/target/debug/deps/libfxhash-60a7f49f81ce4052.rlib --extern 
generic_array_0_14_7=/playground/target/debug/deps/libgeneric_array-d7ed9cf72d6b705d.rlib --extern generic_array=/playground/target/debug/deps/libgeneric_array-cfdaafb8cd285869.rlib --extern getopts=/playground/target/debug/deps/libgetopts-8cd941a37e074e59.rlib --extern getrandom=/playground/target/debug/deps/libgetrandom-3394ddc8b1b20795.rlib --extern gif=/playground/target/debug/deps/libgif-7b41a0d630389527.rlib --extern gimli=/playground/target/debug/deps/libgimli-50f4b63a6fa6c677.rlib --extern glob=/playground/target/debug/deps/libglob-1e2a4b0324293862.rlib --extern h2=/playground/target/debug/deps/libh2-9e74bbb8f63d697f.rlib --extern half=/playground/target/debug/deps/libhalf-84a67054715a6c20.rlib --extern hashbrown_0_14_5=/playground/target/debug/deps/libhashbrown-461059fe3be3b554.rlib --extern hashbrown=/playground/target/debug/deps/libhashbrown-3c170a22b186aab6.rlib --extern hashlink=/playground/target/debug/deps/libhashlink-4d6be2e6b262807f.rlib --extern heck=/playground/target/debug/deps/libheck-3726cb74edb52c42.rlib --extern hmac=/playground/target/debug/deps/libhmac-7897b20b0a8dd2bd.rlib --extern html5ever_0_26_0=/playground/target/debug/deps/libhtml5ever-acf83f31c62508a6.rlib --extern html5ever=/playground/target/debug/deps/libhtml5ever-fb2dfb9eca018839.rlib --extern http=/playground/target/debug/deps/libhttp-9422b1705b4ff66f.rlib --extern http_body=/playground/target/debug/deps/libhttp_body-3a7dab3317a689f7.rlib --extern http_body_util=/playground/target/debug/deps/libhttp_body_util-30b8af400be76ce3.rlib --extern httparse=/playground/target/debug/deps/libhttparse-50ee3b897baea32e.rlib --extern httpdate=/playground/target/debug/deps/libhttpdate-c240583034884b92.rlib --extern humantime=/playground/target/debug/deps/libhumantime-dfd0f3ccc342d605.rlib --extern hyper=/playground/target/debug/deps/libhyper-fd9741efa459bcab.rlib --extern hyper_rustls=/playground/target/debug/deps/libhyper_rustls-c95d11051ab9fc94.rlib --extern 
hyper_tls=/playground/target/debug/deps/libhyper_tls-145b80e6515bdf2c.rlib --extern hyper_util=/playground/target/debug/deps/libhyper_util-732310f9be758e51.rlib --extern iana_time_zone=/playground/target/debug/deps/libiana_time_zone-609b050575be28ed.rlib --extern icu_collections=/playground/target/debug/deps/libicu_collections-11fad61f73abb4ea.rlib --extern icu_locid=/playground/target/debug/deps/libicu_locid-960261a688e38984.rlib --extern icu_locid_transform=/playground/target/debug/deps/libicu_locid_transform-848a8cc9d9ed49b3.rlib --extern icu_locid_transform_data=/playground/target/debug/deps/libicu_locid_transform_data-a4204e11e99ad5e9.rlib --extern icu_normalizer=/playground/target/debug/deps/libicu_normalizer-8d71a07dcbaf8e83.rlib --extern icu_normalizer_data=/playground/target/debug/deps/libicu_normalizer_data-248d668afc491381.rlib --extern icu_properties=/playground/target/debug/deps/libicu_properties-1e15af1b1eab6472.rlib --extern icu_properties_data=/playground/target/debug/deps/libicu_properties_data-d44a6f95b58c717b.rlib --extern icu_provider=/playground/target/debug/deps/libicu_provider-31674635d0eb0846.rlib --extern icu_provider_macros=/playground/target/debug/deps/libicu_provider_macros-6d74184e75c17cc4.so --extern idna_0_3_0=/playground/target/debug/deps/libidna-08ed87666df9a7ec.rlib --extern idna_0_5_0=/playground/target/debug/deps/libidna-fa0dcd5680394594.rlib --extern idna=/playground/target/debug/deps/libidna-8734d9044077333e.rlib --extern idna_adapter=/playground/target/debug/deps/libidna_adapter-66c0a896d020d13b.rlib --extern image=/playground/target/debug/deps/libimage-fea53170a2e55566.rlib --extern image_webp=/playground/target/debug/deps/libimage_webp-8032d4809f739a1d.rlib --extern imgref=/playground/target/debug/deps/libimgref-875901718790c8a5.rlib --extern indexmap=/playground/target/debug/deps/libindexmap-4a2ef318e40cac20.rlib --extern iovec=/playground/target/debug/deps/libiovec-ae46b5ec1c048bd5.rlib --extern 
ipnet=/playground/target/debug/deps/libipnet-2ee1f498296703ff.rlib --extern is_terminal_polyfill=/playground/target/debug/deps/libis_terminal_polyfill-e1bbd92ae5057e7f.rlib --extern itertools_0_12_1=/playground/target/debug/deps/libitertools-253466f9b7fd91e6.rlib --extern itertools=/playground/target/debug/deps/libitertools-c6e372635698f6d9.rlib --extern itoa=/playground/target/debug/deps/libitoa-95d99ed1b7c2d8bb.rlib --extern jobserver=/playground/target/debug/deps/libjobserver-98f3bca0ae495342.rlib --extern jpeg_decoder=/playground/target/debug/deps/libjpeg_decoder-283d39bcc5f09e65.rlib --extern lazy_static=/playground/target/debug/deps/liblazy_static-84db3878870a85d8.rlib --extern lebe=/playground/target/debug/deps/liblebe-72f2d477d6c93f20.rlib --extern libc=/playground/target/debug/deps/liblibc-c14e18dcd30b3ccc.rlib --extern libm=/playground/target/debug/deps/liblibm-135d7f3266d244d2.rlib --extern libsqlite3_sys=/playground/target/debug/deps/liblibsqlite3_sys-a515d55ad4d43b1c.rlib --extern linux_raw_sys_0_4_14=/playground/target/debug/deps/liblinux_raw_sys-816df20e9cde720c.rlib --extern linux_raw_sys=/playground/target/debug/deps/liblinux_raw_sys-2eb2dfc3d6dcfe7a.rlib --extern litemap=/playground/target/debug/deps/liblitemap-d981881f5d7f0857.rlib --extern lock_api=/playground/target/debug/deps/liblock_api-afadb5ad7f707af8.rlib --extern log=/playground/target/debug/deps/liblog-cd2433b9c5b80c28.rlib --extern log_mdc=/playground/target/debug/deps/liblog_mdc-a4c23800e3e739b2.rlib --extern log4rs=/playground/target/debug/deps/liblog4rs-6eba3dbcf0aed211.rlib --extern loop9=/playground/target/debug/deps/libloop9-7cb1fe337c65d81a.rlib --extern mac=/playground/target/debug/deps/libmac-dcc37a52a3bec462.rlib --extern markup5ever_0_11_0=/playground/target/debug/deps/libmarkup5ever-c173a4c0197ad288.rlib --extern markup5ever=/playground/target/debug/deps/libmarkup5ever-a9af7c8db29dd992.rlib --extern 
markup5ever_rcdom=/playground/target/debug/deps/libmarkup5ever_rcdom-2569a77bbca2ae80.rlib --extern matrixmultiply=/playground/target/debug/deps/libmatrixmultiply-8e6aa901070b6caf.rlib --extern maybe_rayon=/playground/target/debug/deps/libmaybe_rayon-337d4a5d06244d48.rlib --extern md5=/playground/target/debug/deps/libmd5-a8c00386761b4579.rlib --extern memchr=/playground/target/debug/deps/libmemchr-cf32634ccc5ae22a.rlib --extern memmap=/playground/target/debug/deps/libmemmap-40b6d3dae24abb17.rlib --extern memoffset=/playground/target/debug/deps/libmemoffset-5c5f28878e8f3d6e.rlib --extern mime=/playground/target/debug/deps/libmime-09780aa9afc6dfb4.rlib --extern mime_guess=/playground/target/debug/deps/libmime_guess-07af682054486f05.rlib --extern minimal_lexical=/playground/target/debug/deps/libminimal_lexical-c9457124796fd7df.rlib --extern miniz_oxide=/playground/target/debug/deps/libminiz_oxide-9725613fa5aa6157.rlib --extern mio=/playground/target/debug/deps/libmio-2734aa7b7af92f4f.rlib --extern nalgebra=/playground/target/debug/deps/libnalgebra-5f5b14aa5ce04431.rlib --extern nalgebra_macros=/playground/target/debug/deps/libnalgebra_macros-ee1a45cb7ce458b3.so --extern native_tls=/playground/target/debug/deps/libnative_tls-ee4dc31a31df3b45.rlib --extern ndarray=/playground/target/debug/deps/libndarray-1fcdd18aa2887c31.rlib --extern debug_unreachable=/playground/target/debug/deps/libdebug_unreachable-840d25d29f858f82.rlib --extern nom=/playground/target/debug/deps/libnom-1149aa1de3defa8c.rlib --extern noop_proc_macro=/playground/target/debug/deps/libnoop_proc_macro-ba5a81767f96b27f.so --extern num=/playground/target/debug/deps/libnum-709c604b15cf833b.rlib --extern num_bigint=/playground/target/debug/deps/libnum_bigint-38e29d8e428b82af.rlib --extern num_complex=/playground/target/debug/deps/libnum_complex-391c04808c79e01e.rlib --extern num_conv=/playground/target/debug/deps/libnum_conv-8c9cbc0ac81ba02b.rlib --extern 
num_derive=/playground/target/debug/deps/libnum_derive-24421a413bb14269.so --extern num_integer=/playground/target/debug/deps/libnum_integer-f030523c849ffd1b.rlib --extern num_iter=/playground/target/debug/deps/libnum_iter-087b22cb997be4aa.rlib --extern num_rational=/playground/target/debug/deps/libnum_rational-2f4c8ba73a23bf72.rlib --extern num_traits=/playground/target/debug/deps/libnum_traits-03905c8b1b440abb.rlib --extern num_cpus=/playground/target/debug/deps/libnum_cpus-a6fac69e0ddda5c0.rlib --extern object=/playground/target/debug/deps/libobject-ded34a2d6c1d62e2.rlib --extern once_cell=/playground/target/debug/deps/libonce_cell-8099ad3ea85766d0.rlib --extern openssl=/playground/target/debug/deps/libopenssl-cac7128a0880011c.rlib --extern openssl_macros=/playground/target/debug/deps/libopenssl_macros-a718f9b87f07d165.so --extern openssl_probe=/playground/target/debug/deps/libopenssl_probe-53373f83cce4ce19.rlib --extern openssl_sys=/playground/target/debug/deps/libopenssl_sys-d7f9869d706f96ca.rlib --extern ordered_float=/playground/target/debug/deps/libordered_float-fd8f735b380f8c09.rlib --extern parking_lot=/playground/target/debug/deps/libparking_lot-c8ad94c5e797ed3f.rlib --extern parking_lot_core=/playground/target/debug/deps/libparking_lot_core-f8f2b82de25f211a.rlib --extern paste=/playground/target/debug/deps/libpaste-d57b4cb5f13d84ef.so --extern percent_encoding=/playground/target/debug/deps/libpercent_encoding-98dc5239e671d89b.rlib --extern petgraph=/playground/target/debug/deps/libpetgraph-8a277fb95d34e83b.rlib --extern phf_0_10_1=/playground/target/debug/deps/libphf-923a40d94bee4ab8.rlib --extern phf=/playground/target/debug/deps/libphf-19018ae432ddc4c6.rlib --extern phf_codegen_0_10_0=/playground/target/debug/deps/libphf_codegen-8e7dd7570f82930a.rlib --extern phf_codegen=/playground/target/debug/deps/libphf_codegen-e447cff70d40660b.rlib --extern phf_generator_0_10_0=/playground/target/debug/deps/libphf_generator-f82c48491845c9b1.rlib --extern 
phf_generator=/playground/target/debug/deps/libphf_generator-3c00c98b6f357ae0.rlib --extern phf_macros=/playground/target/debug/deps/libphf_macros-07a230262807d466.so --extern phf_shared_0_10_0=/playground/target/debug/deps/libphf_shared-445484f4dfe00074.rlib --extern phf_shared=/playground/target/debug/deps/libphf_shared-831de1fff2cc985b.rlib --extern pin_project_lite=/playground/target/debug/deps/libpin_project_lite-df645cdef90b9a26.rlib --extern pin_utils=/playground/target/debug/deps/libpin_utils-a8b22815580c0df9.rlib --extern pkg_config=/playground/target/debug/deps/libpkg_config-7fc5661779f7a24b.rlib --extern png=/playground/target/debug/deps/libpng-6e7e4f3dc5575058.rlib --extern postgres=/playground/target/debug/deps/libpostgres-9f1c890d0dab8253.rlib --extern postgres_protocol=/playground/target/debug/deps/libpostgres_protocol-cb3dc3cf9d7ada65.rlib --extern postgres_types=/playground/target/debug/deps/libpostgres_types-96c86f1cca2c71f1.rlib --extern powerfmt=/playground/target/debug/deps/libpowerfmt-20ed33c5eea869c6.rlib --extern ppv_lite86=/playground/target/debug/deps/libppv_lite86-7c29e9d2d5035884.rlib --extern precomputed_hash=/playground/target/debug/deps/libprecomputed_hash-c8fddb01a9c4ef67.rlib --extern proc_macro2=/playground/target/debug/deps/libproc_macro2-ede31e8cf5816679.rlib --extern profiling=/playground/target/debug/deps/libprofiling-4edf56135b07f28b.rlib --extern profiling_procmacros=/playground/target/debug/deps/libprofiling_procmacros-bd6973f2dc955e75.so --extern psl_types=/playground/target/debug/deps/libpsl_types-4612b0340e7c1853.rlib --extern publicsuffix=/playground/target/debug/deps/libpublicsuffix-56c349e66da82c45.rlib --extern qoi=/playground/target/debug/deps/libqoi-d8b29e1d0034dfc5.rlib --extern quick_error=/playground/target/debug/deps/libquick_error-5674c48264a045c4.rlib --extern quote=/playground/target/debug/deps/libquote-f11ed1aa4f7449cd.rlib --extern rand=/playground/target/debug/deps/librand-45eb366a9d96537b.rlib --extern 
rand_chacha=/playground/target/debug/deps/librand_chacha-b4560df0fc7b6b3c.rlib --extern rand_core=/playground/target/debug/deps/librand_core-7ad38e7c9b8c49ca.rlib --extern rand_distr=/playground/target/debug/deps/librand_distr-c009d63da1418a26.rlib --extern rav1e=/playground/target/debug/deps/librav1e-cdc4c81d42ed6b89.rlib --extern ravif=/playground/target/debug/deps/libravif-c113c120c0f77531.rlib --extern rawpointer=/playground/target/debug/deps/librawpointer-fa8d476f47f0aa20.rlib --extern rayon=/playground/target/debug/deps/librayon-b9f946ab6762948c.rlib --extern rayon_core=/playground/target/debug/deps/librayon_core-3297163aefeb37aa.rlib --extern regex=/playground/target/debug/deps/libregex-f8b03453683acf4f.rlib --extern regex_automata=/playground/target/debug/deps/libregex_automata-c31ce4286eeda6d8.rlib --extern regex_syntax=/playground/target/debug/deps/libregex_syntax-e5936fc21328abfb.rlib --extern reqwest=/playground/target/debug/deps/libreqwest-fa1201bde9d25957.rlib --extern rgb=/playground/target/debug/deps/librgb-1212554e2015c2b7.rlib --extern ring=/playground/target/debug/deps/libring-c536e807def6d5b5.rlib --extern rusqlite=/playground/target/debug/deps/librusqlite-bceca1c23d81d565.rlib --extern rustc_demangle=/playground/target/debug/deps/librustc_demangle-8333a634c483b8d5.rlib --extern rustc_version=/playground/target/debug/deps/librustc_version-c02c69a7f08a3ca3.rlib --extern rustix=/playground/target/debug/deps/librustix-dbe72a9684792ffa.rlib --extern rustls=/playground/target/debug/deps/librustls-1d787348840045a6.rlib --extern rustls_pemfile=/playground/target/debug/deps/librustls_pemfile-224cbb65fcba4102.rlib --extern rustls_pki_types=/playground/target/debug/deps/librustls_pki_types-f6ad06bfeed5abe8.rlib --extern webpki=/playground/target/debug/deps/libwebpki-df4d0942f2cf8045.rlib --extern ryu=/playground/target/debug/deps/libryu-551d2ebce9ae8853.rlib --extern safe_arch=/playground/target/debug/deps/libsafe_arch-0c2d68c764a14872.rlib --extern 
same_file=/playground/target/debug/deps/libsame_file-cfa39741801edcf0.rlib --extern scopeguard=/playground/target/debug/deps/libscopeguard-340b0cc7f24af35e.rlib --extern scraper=/playground/target/debug/deps/libscraper-41b9bbd32bee6535.rlib --extern select=/playground/target/debug/deps/libselect-72d7a2dc8e8236a6.rlib --extern selectors=/playground/target/debug/deps/libselectors-b3f3a72a0cd1b4a4.rlib --extern semver=/playground/target/debug/deps/libsemver-e4f77a7cd9313504.rlib --extern serde=/playground/target/debug/deps/libserde-39f8b41506cea496.rlib --extern serde_value=/playground/target/debug/deps/libserde_value-24b06dd26f938d0b.rlib --extern serde_derive=/playground/target/debug/deps/libserde_derive-4b94f3d7bca992c0.so --extern serde_json=/playground/target/debug/deps/libserde_json-04ba26c0eb5bc459.rlib --extern serde_spanned=/playground/target/debug/deps/libserde_spanned-a8a818ebb703fc4e.rlib --extern serde_urlencoded=/playground/target/debug/deps/libserde_urlencoded-0f896f1026e627f3.rlib --extern serde_yaml=/playground/target/debug/deps/libserde_yaml-5a76392369d117a1.rlib --extern servo_arc=/playground/target/debug/deps/libservo_arc-0be54459b06774a2.rlib --extern sha1_smol=/playground/target/debug/deps/libsha1_smol-1e7d99282b441bf9.rlib --extern sha2=/playground/target/debug/deps/libsha2-bb52e223df272e5c.rlib --extern shlex=/playground/target/debug/deps/libshlex-de82d09077986144.rlib --extern signal_hook_registry=/playground/target/debug/deps/libsignal_hook_registry-6959830fc6386cd9.rlib --extern simba=/playground/target/debug/deps/libsimba-7abebb0dd054f129.rlib --extern simd_adler32=/playground/target/debug/deps/libsimd_adler32-0b28677c728d9f9e.rlib --extern simd_helpers=/playground/target/debug/deps/libsimd_helpers-1d2a1ef4af2710cb.so --extern siphasher=/playground/target/debug/deps/libsiphasher-4ec58c711bad9322.rlib --extern slab=/playground/target/debug/deps/libslab-7f77c852bd60c8d1.rlib --extern 
smallvec=/playground/target/debug/deps/libsmallvec-76c61ed7b2259cd2.rlib --extern socket2=/playground/target/debug/deps/libsocket2-ee5b99b2c255b03b.rlib --extern spin=/playground/target/debug/deps/libspin-a661a6097dc31524.rlib --extern sptr=/playground/target/debug/deps/libsptr-4523619adaafff8b.rlib --extern stable_deref_trait=/playground/target/debug/deps/libstable_deref_trait-8016b5831d479dba.rlib --extern string_cache=/playground/target/debug/deps/libstring_cache-11871b624c98abf7.rlib --extern string_cache_codegen=/playground/target/debug/deps/libstring_cache_codegen-bb8ac9535c1aebdf.rlib --extern stringprep=/playground/target/debug/deps/libstringprep-30445e54db0c8af0.rlib --extern strsim=/playground/target/debug/deps/libstrsim-57e723e1d275d2a2.rlib --extern subtle=/playground/target/debug/deps/libsubtle-cdc086fa44cd23df.rlib --extern syn_1_0_109=/playground/target/debug/deps/libsyn-66579979a839400f.rlib --extern syn=/playground/target/debug/deps/libsyn-fbf7e07c400bcf5a.rlib --extern sync_wrapper=/playground/target/debug/deps/libsync_wrapper-d86f202f928f3af6.rlib --extern synstructure=/playground/target/debug/deps/libsynstructure-8fe8ff8fd1dd6fcc.rlib --extern tar=/playground/target/debug/deps/libtar-b42fc9141f4a6105.rlib --extern tempfile=/playground/target/debug/deps/libtempfile-78563473e2b3e5ef.rlib --extern tendril=/playground/target/debug/deps/libtendril-1b90e8f5dbe3aed1.rlib --extern terminal_size=/playground/target/debug/deps/libterminal_size-ae0458053cd0a8ef.rlib --extern thiserror_1_0_68=/playground/target/debug/deps/libthiserror-e2f9a961d3eed960.rlib --extern thiserror=/playground/target/debug/deps/libthiserror-3d0f2c4653286415.rlib --extern thiserror_impl_1_0_68=/playground/target/debug/deps/libthiserror_impl-d53ba9b2bedaf28e.so --extern thiserror_impl=/playground/target/debug/deps/libthiserror_impl-6e8b2c705ba3d020.so --extern thread_id=/playground/target/debug/deps/libthread_id-a24e263107922ad7.rlib --extern 
threadpool=/playground/target/debug/deps/libthreadpool-393ce5c1a164c102.rlib --extern tiff=/playground/target/debug/deps/libtiff-c15516e32f6b0c35.rlib --extern time=/playground/target/debug/deps/libtime-40d5e75495f212b9.rlib --extern time_core=/playground/target/debug/deps/libtime_core-c9eb71dd38667470.rlib --extern time_macros=/playground/target/debug/deps/libtime_macros-e3b8ffedc7d8ef8e.so --extern tinystr=/playground/target/debug/deps/libtinystr-a560e42bf1214519.rlib --extern tinyvec=/playground/target/debug/deps/libtinyvec-01014611c3ebaca1.rlib --extern tinyvec_macros=/playground/target/debug/deps/libtinyvec_macros-38d91b622583a154.rlib --extern tokio=/playground/target/debug/deps/libtokio-031fb36f8ad20dd6.rlib --extern tokio_io=/playground/target/debug/deps/libtokio_io-e62bf66f5fb32972.rlib --extern tokio_macros=/playground/target/debug/deps/libtokio_macros-70aba07cf8f52ee7.so --extern tokio_native_tls=/playground/target/debug/deps/libtokio_native_tls-3a27c913898e18e3.rlib --extern tokio_postgres=/playground/target/debug/deps/libtokio_postgres-5e457ab9d80cb74b.rlib --extern tokio_rustls=/playground/target/debug/deps/libtokio_rustls-d022de072032ffae.rlib --extern tokio_stream=/playground/target/debug/deps/libtokio_stream-5638c6f8d6748952.rlib --extern tokio_util=/playground/target/debug/deps/libtokio_util-3c9c041122a79232.rlib --extern toml=/playground/target/debug/deps/libtoml-b75cb5c98a2706ef.rlib --extern toml_datetime=/playground/target/debug/deps/libtoml_datetime-737e89e817525a71.rlib --extern toml_edit=/playground/target/debug/deps/libtoml_edit-96c3724ef948b3fa.rlib --extern tower_service=/playground/target/debug/deps/libtower_service-67f86bcdfe7f3737.rlib --extern tracing=/playground/target/debug/deps/libtracing-0a5edb1c4af71a89.rlib --extern tracing_attributes=/playground/target/debug/deps/libtracing_attributes-05bc5c40fae875ba.so --extern tracing_core=/playground/target/debug/deps/libtracing_core-609799ac36ff7b7d.rlib --extern 
trpl=/playground/target/debug/deps/libtrpl-59080b997f178144.rlib --extern try_lock=/playground/target/debug/deps/libtry_lock-f73a7627d08d7047.rlib --extern typemap_ors=/playground/target/debug/deps/libtypemap_ors-51c7ee1f9b2a5399.rlib --extern typenum=/playground/target/debug/deps/libtypenum-603f08e78de3ce4b.rlib --extern unicase=/playground/target/debug/deps/libunicase-aa9b5acaef17539c.rlib --extern unicode_bidi=/playground/target/debug/deps/libunicode_bidi-8be75f6cd303557c.rlib --extern unicode_ident=/playground/target/debug/deps/libunicode_ident-9a3c5b7b5d5de68f.rlib --extern unicode_normalization=/playground/target/debug/deps/libunicode_normalization-9cece50b5464e01e.rlib --extern unicode_properties=/playground/target/debug/deps/libunicode_properties-8ffc241302e1b4dc.rlib --extern unicode_segmentation=/playground/target/debug/deps/libunicode_segmentation-b5caa71ac4ebf013.rlib --extern unicode_width_0_1_14=/playground/target/debug/deps/libunicode_width-2f43b42225182ac7.rlib --extern unicode_width=/playground/target/debug/deps/libunicode_width-457f1ce94dc2cdbf.rlib --extern unicode_xid=/playground/target/debug/deps/libunicode_xid-9ce15e2aa1697488.rlib --extern unsafe_any_ors=/playground/target/debug/deps/libunsafe_any_ors-4cd5a6568ea186a8.rlib --extern unsafe_libyaml=/playground/target/debug/deps/libunsafe_libyaml-c1f60569d89702bc.rlib --extern untrusted=/playground/target/debug/deps/libuntrusted-f32fc85046cdf3b5.rlib --extern url=/playground/target/debug/deps/liburl-a6c9241f7e3b1c52.rlib --extern utf8=/playground/target/debug/deps/libutf8-fe4a0f906b99069f.rlib --extern utf16_iter=/playground/target/debug/deps/libutf16_iter-a9319290c852e888.rlib --extern utf8_iter=/playground/target/debug/deps/libutf8_iter-e356b5e074d8fbdf.rlib --extern utf8parse=/playground/target/debug/deps/libutf8parse-e6bfae3e5f3a02ea.rlib --extern uuid=/playground/target/debug/deps/libuuid-00b6897e935368cb.rlib --extern v_frame=/playground/target/debug/deps/libv_frame-aa764e6490955c8c.rlib 
--extern vcpkg=/playground/target/debug/deps/libvcpkg-0c42e146e111ef9b.rlib --extern version_check=/playground/target/debug/deps/libversion_check-cd8cba67fb70d9cf.rlib --extern walkdir=/playground/target/debug/deps/libwalkdir-48fb32db35972a24.rlib --extern want=/playground/target/debug/deps/libwant-ea61a245047a353e.rlib --extern wasm_bindgen=/playground/target/debug/deps/libwasm_bindgen-f109be4651dd98d6.rlib --extern wasm_bindgen_backend=/playground/target/debug/deps/libwasm_bindgen_backend-3c4deb59bdad5d87.rlib --extern wasm_bindgen_macro=/playground/target/debug/deps/libwasm_bindgen_macro-71d4ccee6d94d74b.so --extern wasm_bindgen_macro_support=/playground/target/debug/deps/libwasm_bindgen_macro_support-d14f9aebdd73e61b.rlib --extern wasm_bindgen_shared=/playground/target/debug/deps/libwasm_bindgen_shared-a314b671f23ee592.rlib --extern weezl=/playground/target/debug/deps/libweezl-ef83fd076c0c214d.rlib --extern whoami=/playground/target/debug/deps/libwhoami-3eb1d83928907212.rlib --extern wide=/playground/target/debug/deps/libwide-2f214f92e767bcc5.rlib --extern windows_sys=/playground/target/debug/deps/libwindows_sys-eda5aef5dedf894f.rlib --extern windows_targets=/playground/target/debug/deps/libwindows_targets-cde2aa27eefe7f21.rlib --extern windows_x86_64_gnu=/playground/target/debug/deps/libwindows_x86_64_gnu-21d1c15eea2af17b.rlib --extern windows_x86_64_msvc=/playground/target/debug/deps/libwindows_x86_64_msvc-8238ddcf3c9ba6e7.rlib --extern winnow=/playground/target/debug/deps/libwinnow-7d0e677d8d7b728c.rlib --extern write16=/playground/target/debug/deps/libwrite16-5df432857f65960b.rlib --extern writeable=/playground/target/debug/deps/libwriteable-c18f025431da77ee.rlib --extern xattr=/playground/target/debug/deps/libxattr-91ff8b2e727fcf5f.rlib --extern xml5ever=/playground/target/debug/deps/libxml5ever-182c2563e62af7b5.rlib --extern yoke=/playground/target/debug/deps/libyoke-9777fcaf4153b460.rlib --extern 
yoke_derive=/playground/target/debug/deps/libyoke_derive-685a2ece968b1733.so --extern zerocopy=/playground/target/debug/deps/libzerocopy-03f90df513811c3c.rlib --extern zerocopy_derive=/playground/target/debug/deps/libzerocopy_derive-b6354b86eeec9e47.so --extern zerofrom=/playground/target/debug/deps/libzerofrom-4d1eb2e5e89ddfe4.rlib --extern zerofrom_derive=/playground/target/debug/deps/libzerofrom_derive-d37eb232fb9affe5.so --extern zeroize=/playground/target/debug/deps/libzeroize-429d0fc444d8193b.rlib --extern zerovec=/playground/target/debug/deps/libzerovec-224d162197049700.rlib --extern zerovec_derive=/playground/target/debug/deps/libzerovec_derive-5a6f5897e199595e.so --extern zune_core=/playground/target/debug/deps/libzune_core-067fe4b09cd8c8fd.rlib --extern zune_inflate=/playground/target/debug/deps/libzune_inflate-26f56dc0fa212a11.rlib --extern zune_jpeg=/playground/target/debug/deps/libzune_jpeg-45f9f57cf992e29e.rlib -L native=/playground/target/debug/build/libsqlite3-sys-551b8fe7af4088ec/out -L native=/playground/target/debug/build/ring-d57452a737d06118/out -L native=/playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/windows_x86_64_gnu-0.52.6/lib -L native=/playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/windows_x86_64_msvc-0.52.6/lib` (exit status: 101)
```
</p>
</details>
### Related Issues
- https://github.com/rust-lang/rust/issues/40180
- https://github.com/rust-lang/rust/issues/132835
- https://github.com/rust-lang/rust/issues/132839
- https://github.com/rust-lang/rust/issues/132841 | A-LLVM,O-x86_64,T-compiler,C-bug,A-ABI,O-x86_32,F-abi_x86_interrupt,A-hardware-interrupts | low | Critical |
2,646,886,284 | pytorch | inconsistency in `torch.nn.functional.log_softmax` on CPU and GPU | ### 🐛 Describe the bug
Testing consistency of `torch::nn::functional::log_softmax` between CPU and GPU using bfloat16 data.
```cpp
#include <iostream>
#include <torch/torch.h>
int main() {
torch::Tensor input = torch::tensor(
{
{
{
{{-1.6641, -0.9219}, {2.5156, -0.4648}},
{{1.0469, 1.4531}, {0.0908, 0.1514}}
},
{
{{-1.2422, 1.2656}, {-1.0859, -0.0801}},
{{2.3750, -0.4238}, {-0.9023, -0.4570}}
}
},
{
{
{{-1.3125, 0.1099}, {-0.2129, -0.6641}},
{{-0.3867, -0.2617}, {1.1641, 0.4043}}
},
{
{{-0.5352, -0.4785}, {0.2061, 0.2734}},
{{-1.3672, 0.0544}, {-1.3203, -1.5469}}
}
}
},
torch::kBFloat16);
auto options =
torch::nn::functional::LogSoftmaxFuncOptions(-1)
.dtype(c10::nullopt);
auto result_cpu = torch::nn::functional::log_softmax(input, options);
torch::Tensor input_cuda = input.cuda();
auto result_gpu = torch::nn::functional::log_softmax(input_cuda, options);
std::cout << "initialized tensor (CPU):\n" << input << std::endl;
std::cout << "CPU result: \n" << result_cpu << std::endl;
std::cout << "GPU result: \n" << result_gpu << std::endl;
bool inconsistent = !torch::allclose(result_cpu, result_gpu.cpu(), 1e-03, 1e-02);
std::cout << "inconsistency with atol=1e-02 and rtol=1e-03: " << std::boolalpha << inconsistent << std::endl;
return 0;
}
```
outputs (e.g. take a look at matrices (1,1,2), (1,2,2), and (2,2,1); they show notable differences in the log_softmax output):
```
initialized tensor (CPU):
(1,1,1,.,.) =
-1.6641 -0.9219
2.5156 -0.4648
(2,1,1,.,.) =
-1.3125 0.1099
-0.2129 -0.6641
(1,2,1,.,.) =
-1.2422 1.2656
-1.0859 -0.0801
(2,2,1,.,.) =
-0.5352 -0.4785
0.2061 0.2734
(1,1,2,.,.) =
1.0469 1.4531
0.0908 0.1514
(2,1,2,.,.) =
-0.3867 -0.2617
1.1641 0.4043
(1,2,2,.,.) =
2.3750 -0.4238
-0.9023 -0.4570
(2,2,2,.,.) =
-1.3672 0.0544
-1.3203 -1.5469
[ CPUBFloat16Type{2,2,2,2,2} ]
CPU result:
(1,1,1,.,.) =
-1.1328 -0.3906
-0.0459 -3.0312
(2,1,1,.,.) =
-1.6406 -0.2168
-0.4941 -0.9453
(1,2,1,.,.) =
-2.5781 -0.0752
-1.3203 -0.3125
(2,2,1,.,.) =
-0.7188 -0.6641
-0.7266 -0.6602
(1,1,2,.,.) =
-0.9141 -0.5078
-0.7188 -0.6602
(2,1,2,.,.) =
-0.7578 -0.6328
-0.3848 -1.1406
(1,2,2,.,.) =
0.01 *
-6.0547 -285.9375
-93.7500 -49.4141
(2,2,2,.,.) =
-1.6406 -0.2168
-0.5859 -0.8125
[ CPUBFloat16Type{2,2,2,2,2} ]
GPU result:
(1,1,1,.,.) =
-1.1328 -0.3887
-0.0496 -3.0312
(2,1,1,.,.) =
-1.6406 -0.2158
-0.4922 -0.9453
(1,2,1,.,.) =
-2.5938 -0.0781
-1.3203 -0.3125
(2,2,1,.,.) =
-0.7227 -0.6641
-0.7266 -0.6602
(1,1,2,.,.) =
-0.9180 -0.5117
-0.7227 -0.6641
(2,1,2,.,.) =
-0.7578 -0.6328
-0.3828 -1.1406
(1,2,2,.,.) =
0.01 *
-5.9082 -285.9375
-94.1406 -49.4141
(2,2,2,.,.) =
-1.6406 -0.2158
-0.5859 -0.8125
[ CUDABFloat16Type{2,2,2,2,2} ]
inconsistency with atol=1e-02 and rtol=1e-03: true
```
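For reference, the full-precision value both backends should be approximating is the numerically stable log-softmax, `log_softmax(x)_i = x_i - m - log(sum_j exp(x_j - m))` with `m = max(x)`. A minimal sketch of that reference computation (independent of torch, so only for sanity-checking individual 2-element rows of the output above):

```javascript
// Numerically stable log-softmax over a 1-D array of numbers:
// subtract the max before exponentiating to avoid overflow in exp().
function logSoftmax(row) {
  const m = Math.max(...row);
  const logSum = Math.log(row.reduce((s, x) => s + Math.exp(x - m), 0));
  return row.map((x) => x - m - logSum);
}

// First row of the input above, (1,1,1): [-1.6641, -0.9219].
// The CPU (-1.1328, -0.3906) and GPU (-1.1328, -0.3887) bfloat16 results
// should both be rounded versions of this full-precision reference.
console.log(logSoftmax([-1.6641, -0.9219]));
```

With a bfloat16 mantissa of 8 bits, an ULP near 1.0 is about 0.008, so differences of that magnitude between CPU and GPU (as in rows (1,2,2)) are what the tolerance check above flags.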
### Versions
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 16.0.4 (https://github.com/llvm/llvm-project ae42196bc493ffe877a7e3dff8be32035dea4d07)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.10
Is CUDA available: N/A
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.78
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==3.8.4
[pip3] numpy==1.19.2
[pip3] numpydoc==1.1.0
[pip3] torch==2.2.0a0+git9fa3350
[conda] blas 1.0 mkl
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] torch 2.2.0a0+git9fa3350 dev_0 | module: numerical-stability,triaged,module: bfloat16 | low | Critical |
2,646,922,795 | rust | `extern "x86-interrupt" fn` allows absurd signatures | I tried this code:
```rust
#![feature(abi_x86_interrupt)]
extern "x86-interrupt" fn three_args(_a: u8, _b: u8, _c: u8) {}
fn main() {
three_args(1, 2, 3);
}
```
I expected to see rustc reject this code, because this signature makes no sense for this ABI.
Instead, this happened:
> rustc-LLVM ERROR: unsupported x86 interrupt prototype
> error: could not compile `playground` (bin "playground")
## Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (a0d98ff0e 2024-10-31)
binary: rustc
commit-hash: a0d98ff0e5b6e1f2c63fd26f68484792621b235c
commit-date: 2024-10-31
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
@rustbot label: +F-abi_x86_interrupt +A-LLVM +O-x86_64 +O-x86_32 +A-ABI +T-compiler
### Related Issues
- https://github.com/rust-lang/rust/issues/40180
- https://github.com/rust-lang/rust/issues/63018
- https://github.com/rust-lang/rust/issues/126418
- https://github.com/rust-lang/rust/issues/132834 | A-LLVM,O-x86_64,T-compiler,C-bug,A-ABI,O-x86_32,F-abi_x86_interrupt,A-hardware-interrupts | low | Critical |
2,646,928,915 | rust | Directly calling `extern "riscv-interrupt-{m,s}" fn`... compiles? | I tried this code with `rustc --target riscv64gc-unknown-linux-gnu rust_code.rs`:
```rust
#![feature(abi_riscv_interrupt)]
pub extern "riscv-interrupt-m" fn interrupt_machine() {}
pub extern "riscv-interrupt-s" fn interrupt_supervisor() {}
pub fn main() {
interrupt_machine();
interrupt_supervisor();
}
```
I expected to see this happen: rustc *or LLVM* reject the direct call.
Instead, this happened:
...wait, why does it compile? This doesn't seem like it should compile...
## Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (a0d98ff0e 2024-10-31)
binary: rustc
commit-hash: a0d98ff0e5b6e1f2c63fd26f68484792621b235c
commit-date: 2024-10-31
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.
```
@rustbot label: +A-hardware-interrupts +A-LLVM +O-riscv +A-ABI +T-compiler
### Related Issues
- https://github.com/rust-lang/rust/issues/111889
- https://github.com/rust-lang/rust/issues/132834 | A-LLVM,T-compiler,C-bug,O-riscv,O-AVR,A-ABI,A-hardware-interrupts | low | Major |
2,646,963,138 | rust | Most `extern "*-interrupt-*" fn` should enforce 0-sized signatures | I tried this code with `rustc --target riscv64gc-unknown-linux-gnu rust_code.rs`:
```rust
#![feature(abi_riscv_interrupt)]
pub extern "riscv-interrupt-m" fn interrupt_machine(_a: u8, _b: u8, _c: u8) {
}
pub extern "riscv-interrupt-s" fn interrupt_supervisor(_a: u8, _b: u8, _c: u8) {
}
pub fn main() {
interrupt_machine(1, 2, 3);
interrupt_supervisor(4, 5, 6);
}
```
I expected to see rustc catch this invalid signature.
Instead, this happened:
> rustc-LLVM ERROR: Functions with the interrupt attribute cannot have arguments!
My understanding is that the same story applies to non-x86 interrupt ABIs like MSP430's.
## Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (a0d98ff0e 2024-10-31)
binary: rustc
commit-hash: a0d98ff0e5b6e1f2c63fd26f68484792621b235c
commit-date: 2024-10-31
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
@rustbot label: +A-hardware-interrupts +A-LLVM +O-riscv +O-msp430 +A-ABI +T-compiler
### Related Issues
- https://github.com/rust-lang/rust/issues/38487
- https://github.com/rust-lang/rust/issues/111889
- https://github.com/rust-lang/rust/issues/132835
- https://github.com/rust-lang/rust/issues/132836 | A-LLVM,T-compiler,C-bug,O-riscv,O-AVR,A-ABI,O-msp430,A-hardware-interrupts | low | Critical |
2,646,965,703 | electron | "Object has been destroyed at DownloadItem" error occurs when downloading | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS monterey 12.5.1
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior

While an item is downloading, if the Electron application closes, an "Object has been destroyed" error sometimes occurs.
This error message should never appear.
### Actual Behavior
This error occurs in rare cases (in my experience, with roughly a 1% chance).
You can reproduce this error by executing the [repository below](https://github.com/mochang2/object-has-been-destroyed-poc) after commenting out lines 44 to 49 of `src/main.ts`.
(I've automated the action with `run.sh`, so it will happen if you run that shell script several times.)
The way to reproduce the error reliably (100% of the time) is to uncomment lines 44 to 49 of `src/main.ts`, which add a long synchronous task to the "updated" event listener of `DownloadItem`.
In my opinion, this error occurs in the following situation:
1. "updated" event emit
```cc
// electron_api_download_item.cc
void DownloadItem::OnDownloadUpdated(download::DownloadItem* item) {
if (!CheckAlive())
return;
if (download_item_->IsDone()) {
Emit("done", item->GetState());
Unpin();
} else {
Emit("updated", item->GetState());
}
}
```
2. Even if the window (obtained via `BrowserWindow.getFocusedWindow()`) is destroyed, the callbacks in the callback queue remain.
3. "updated" event on
```ts
// main.ts
item.on("updated", () => {
for (let i = 0; i < 1_000_000_000; i++) {
// a long synchronous task
if (i % 200_000_000 === 0) {
console.log(i, window.isDestroyed());
}
}
console.log(window.isDestroyed());
console.log("updated", window.webContents.isDestroyed()); // error point!!
});
```
Currently, the only ways to prevent the error seem to be the following.
1. `try-catch`
```ts
// main.js
item.on("updated", () => {
try {
// ...
console.log("updated", window.webContents.isDestroyed());
} catch (error) {
console.error(error);
}
});
```
2. `if`
```ts
// main.ts
item.on("updated", () => {
if (window?.webContents) {
// ...
console.log("updated", window.webContents.isDestroyed());
};
});
```
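One way to avoid repeating the check in every listener would be a small wrapper that skips the callback once the window is gone. A hypothetical sketch (the `guarded` helper name and shape are my own, not an existing Electron API):

```javascript
// Hypothetical helper: wraps a listener so its body only runs while
// `window` and its webContents are still alive. The short-circuit order
// matters: check window.isDestroyed() before touching webContents.
function guarded(window, listener) {
  return (...args) => {
    if (window.isDestroyed() || window.webContents.isDestroyed()) return;
    listener(...args);
  };
}

// Usage sketch:
// item.on("updated", guarded(window, () => {
//   console.log("updated", window.webContents.isDestroyed());
// }));
```

This still leaves the race to the main process, but at least centralizes the lifetime check in one place instead of every listener.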
However, the above solutions don't seem good, because every event listener needs this exception or error handling logic.
It is also problematic in that every event listener has to pay attention to whether the window has been destroyed.
Could you provide a better way to handle this error?
### Testcase Gist URL
https://github.com/mochang2/object-has-been-destroyed-poc
### Additional Information
I first found this error message on Windows 10 with Electron 22.3.8.
Since macOS with Electron 32.1.2 also encounters this error, I think this bug happens regardless of OS or Electron version.
2,646,973,035 | rust | Directly calling `"msp430-interrupt" fn` should also be invalid | I tried this code:
```rust
#![feature(abi_msp430_interrupt)]
pub extern "msp430-interrupt" fn msp430_isr() {}
pub fn call() {
msp430_isr();
}
```
I expected to see rustc reject the erroneous code somewhere before final LLVM lowering, because the notion of "calling" an interrupt still seems nonsensical to me, no matter what its signature is.
Instead, this happened:
> rustc-LLVM ERROR: ISRs cannot be called directly
## Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (a0d98ff0e 2024-10-31)
binary: rustc
commit-hash: a0d98ff0e5b6e1f2c63fd26f68484792621b235c
commit-date: 2024-10-31
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
@rustbot label: +A-hardware-interrupts +A-LLVM +O-msp430 +A-ABI +T-compiler
### Related Issues
- https://github.com/rust-lang/rust/issues/38487
- https://github.com/rust-lang/rust/issues/132834
- https://github.com/rust-lang/rust/issues/132836 | A-LLVM,T-compiler,C-bug,A-ABI,O-msp430,A-hardware-interrupts | low | Critical |
2,647,009,297 | svelte | `oncanplay` event called after video element is removed | ### Describe the bug
When a video element is removed in Svelte 4, the `on:canplay` event handler isn't called; however, in Svelte 5 (with and without runes) the event handler is called. In Firefox this also causes the video to continue playing in the background, because `e.currentTarget.play()` is called in the event handler.
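Until the ordering is fixed, one possible workaround is to bail out of the handler when the element is no longer in the document. A sketch of such a guard (it relies on the standard `Node.isConnected` property; whether this covers every teardown ordering is an assumption):

```javascript
// Skip work (e.g. calling play()) once the element has been removed
// from the document, so a late canplay event can't restart playback.
function oncanplay(event) {
  const video = event.currentTarget;
  if (!video.isConnected) return; // element was removed; do nothing
  video.play();
}
```
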
### Reproduction
https://svelte.dev/playground/d901783aed2c4df68c14459b90b3f47f?version=4.2.19
https://svelte.dev/playground/698042a4c231415fa69892808ac7a247?version=5.1.13
### Logs
_No response_
### System Info
```shell
Firefox 132.0.1
```
### Severity
annoyance | bug | low | Critical |
2,647,016,926 | PowerToys | Video file thumbnail previews, and PSD file previews | ### Description of the new feature / enhancement
Video file thumbnail previews, and PSD file previews
### Scenario when this would be used?
Video file thumbnail previews, and PSD file previews
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,647,019,846 | rust | All interrupt ABIs should enforce either `()` or `!` as return types | I tried this code: [Godbolt](https://godbolt.org/z/75xfedEa9)
I expected to see this happen: ...interrupts don't really return things, they're side effects, that's kind of in the definition, so these shouldn't return anything, and the ABI should enforce that, or during lowering..?
Instead, this happened:
> rustc-LLVM ERROR: ISRs cannot return any value
> rustc-LLVM ERROR: Functions with the interrupt attribute must have void return type!
> rustc-LLVM ERROR: X86 interrupts may not return any value
## msp430
```rust
#![feature(abi_msp430_interrupt)]
pub extern "msp430-interrupt" fn msp430_isr() -> u64 {
9001u64
}
pub fn call() {
let _ = msp430_isr();
}
```
## riscv
```rust
#![feature(abi_riscv_interrupt)]
pub extern "riscv-interrupt-m" fn interrupt_machine() -> u64 {
9001u64
}
pub extern "riscv-interrupt-s" fn interrupt_supervisor() -> u64 {
9002u64
}
pub fn main() {
let a = interrupt_machine();
let b = interrupt_supervisor();
println!("{}", a + b);
}
```
## x86
```rust
#![feature(abi_x86_interrupt)]
pub extern "x86-interrupt" fn interrupt_machine() -> u64 {
9001u64
}
pub fn main() {
let a = interrupt_machine();
println!("{}", a);
}
```
## Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (a0d98ff0e 2024-10-31)
binary: rustc
commit-hash: a0d98ff0e5b6e1f2c63fd26f68484792621b235c
commit-date: 2024-10-31
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
```
@rustbot label: +A-ABI +A-hardware-interrupts +A-LLVM +F-abi_x86_interrupt +O-x86_64 +O-x86_32 +O-riscv +O-msp430 +T-compiler
### Related Issues
- https://github.com/rust-lang/rust/issues/38487
- https://github.com/rust-lang/rust/issues/40180
- https://github.com/rust-lang/rust/issues/111889
- https://github.com/rust-lang/rust/issues/132835
- https://github.com/rust-lang/rust/issues/132836
- https://github.com/rust-lang/rust/issues/132837
- https://github.com/rust-lang/rust/issues/132839 | A-LLVM,O-x86_64,T-compiler,C-bug,O-riscv,A-ABI,O-msp430,O-x86_32,F-abi_x86_interrupt,A-hardware-interrupts | low | Critical |
2,647,067,689 | svelte | Missing type in Svelte compiler AST | ### Describe the bug
In the script part of the AST, import declarations are missing the `importKind` type.
### Reproduction
```ts
import { parse } from 'svelte/compiler';
const ast = parse(`
<script lang="ts">
import type { Snippet } from 'svelte'
import { mount } from 'svelte'
</script>
`, { modern: true })
// narrow down type to import declarations
if (ast.instance?.content.body[0].type === 'ImportDeclaration') {
// this gives error // Property 'importKind' does not exist on type 'ImportDeclaration'.
ast.instance?.content.body[0].importKind
  // But if we log it, we can see it does have them
console.log(ast.instance?.content.body[0].importKind) // type
console.log(ast.instance?.content.body[1].importKind) // value
}
```
### Logs
_No response_
### System Info
```shell
Not important
```
### Severity
annoyance | types / typescript | low | Critical |
2,647,124,288 | vscode | Sticky scroll does not support multi-line declarations in C. |
Type: <b>Bug</b>
```c
void test(u8 a,
          u8 b,
          u8 c,
          u8 d)
{
    /**
     * @brief
     *
     *
     */
}
```
VS Code version: Code 1.95.2 (Universal) (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|1, 1, 1|
|Memory (System)|16.00GB (0.39GB free)|
|Process Argv|--crash-reporter-id 958beaca-28e6-46a8-afa8-aed297dcf1f7|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (23)</summary>
Extension|Author (truncated)|Version
---|---|---
language-x86-64-assembly|13x|3.1.4
doxdocgen|csc|1.4.0
githistory|don|0.6.20
gitlens|eam|15.6.3
comment-translate|int|3.0.0
cortex-debug|mar|1.12.1
asm-code-lens|maz|2.6.1
debug-tracker-vscode|mcu|0.0.15
memory-view|mcu|0.0.25
peripheral-viewer|mcu|1.4.6
rtos-views|mcu|0.0.7
git-graph|mhu|1.30.0
cpptools|ms-|1.22.11
hexeditor|ms-|1.11.1
makefile-tools|ms-|0.11.13
vscode-serial-monitor|ms-|0.13.1
color-highlight|nau|2.8.0
phind|phi|0.25.4
excalidraw-editor|pom|3.7.4
highlight-words|rsb|0.1.4
markdown-preview-enhanced|shd|0.8.15
open-in-browser|tec|2.0.0
vscode-conventional-commits|viv|1.26.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
h409b430:31177055
```
</details>
<!-- generated by issue reporter --> | feature-request,editor-sticky-scroll | low | Critical |
2,647,164,459 | godot | `RenderingServer.material_set_param` doesn't work with texture based objects | ### Tested versions
- Reproducible in master, 4.4 dev 4, 4.3 stable
### System information
Godot v4.4.dev4 - Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated AMD Radeon RX 580 2048SP (Advanced Micro Devices, Inc.; 31.0.21921.1000) - AMD Ryzen 5 3600 6-Core Processor (12 threads)
### Issue description
As [this question](https://forum.godotengine.org/t/using-renderingserver-material-set-param-to-set-sampler2d-parameter-not-working/85165) on the Godot forum shows, `RenderingServer.material_set_param` doesn't allow setting sampler uniforms with a texture object, diverging from `Material.set_shader_parameter`, which works normally with a texture as the parameter. That happens because `RenderingServer.material_set_param` rejects any `Object` as a value:
https://github.com/godotengine/godot/blob/e65a23762b36b564eb94672031f37fdadba72333/servers/rendering/renderer_rd/storage_rd/material_storage.cpp#L2223-L2232
### Steps to reproduce
1. Create a shader with an uniform variable of type Sampler2D and assign this shader to a node
2. Load any image (the Godot icon as example)
3. Try to set the shader uniform variable using `RenderingServer.material_set_param` and the loaded image
4. Godot will fail with error: `Condition "p_value.get_type() == Variant::OBJECT" is true.`
Or play the MRP below.
### Minimal reproduction project (MRP)
[test-uniform-set-with-rendering-server.zip](https://github.com/user-attachments/files/17691279/test-uniform-set-with-rendering-server.zip)
| enhancement,discussion,topic:rendering | low | Critical |
2,647,166,765 | godot | C# Reference in a Inherited Scene breaks (Null or Disposed) | ### Tested versions
- Reproducable in 4.3stable, 4.4dev4
### System information
Windows10
### Issue description
I have the following nodes:
- ActorBase : Node2D
- ActorFarmer: Actor
- and some Child Components
I create a Scene where the base node is an "ActorBase". I reference this node in child nodes.
I create an Inherited Scene from the "ActorBase" Scene. Instead of the ActorBase script, I add an "ActorFarmer".
All references are still present in the Editor when I click the revert button. When I start the game, the reference to the Actor is null or disposed.
When I don't set the reference in the Base Scene but directly in the Inherited Scene, I don't get the issue.
https://github.com/user-attachments/assets/488dff6b-308c-4fb2-838c-ae4c3a119ede
### Steps to reproduce
1. Create a Base Class
2. Add References to other Nodes
3. Inherit from the Base Class
4. Create a Base Scene
5. Drag the references to the Base Class
6. Create a Inherited Scene from the Base Scene
### Minimal reproduction project (MRP)
When you run the MRP, open the Remote tab. Then click, for example, on AnimationPlayer2d, and you get an ObjectDisposed exception.

[InheritanceBug.zip](https://github.com/user-attachments/files/17691283/InheritanceBug.zip) | bug,topic:dotnet | low | Critical |
2,647,179,627 | go | proposal: x/net/html: parser hooks | ### Proposal Details
The following is not something I _need_ right now, but my use case made me think that there is a valid reason for the code calling `html.Parse` to be able to influence the parsing process.
I'm playing around with a crazy idea, to create a headless browser written in Go (I have a POC that proves I can write the DOM in Go, and expose the objects to JavaScript - I have a v8 engine integrated).
I have used `x/net/html` to parse HTML; but there are some rules for constructing the DOM in the browser that the library doesn't support (and I don't think it should _implement_ those rules - but let the caller deal with them).
For example, when a `<script>` element is connected, the script is executed, which means the script doesn't see the entire "HTML source" (or, to be exact, the DOM representation of the entire source). Scripts aren't the only type of element with these kinds of rules either.
I can work around that right now; but makes me want to ask the question, if support should be added to `x/net/html` that allows the caller to install hooks in the process. Events like `inserted`, `connected`, `removed` seems like candidates.
As per the [DOM tree specs](https://html.spec.whatwg.org/multipage/infrastructure.html#dom-trees)
> An HTML element can have specific HTML element insertion steps, HTML element post-connection steps, and HTML element removing steps, all defined for the element's [local name](https://dom.spec.whatwg.org/#concept-element-local-name).
**My current approach**
The path I have decided to take forward is to first use the `x/net/html` package to construct a node tree, and then iterate that tree to construct my own tree. So basically I'll perform two passes of DOM processing, and I can implement the necessary rules in the 2nd pass.
I will most likely use the `html.Node` instance as a "backing field" for DOM data, as it gives the benefits:
- I get HTML rendering out of the box, e.g. `element.outerHTML`/`innerHTML`
- Libraries exist already for XPath search
Even if I could hook into the parser process I still need to create wrapper types for various reasons. E.g., I need to expose a different interface to JavaScript, and I need a binding layer to JavaScript.
**What benefits would the hooks give**
If the package supported hooks you could generate the "correct DOM" in one pass.
In my case, the wrapper objects could then just be created lazily.
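A hook-based API along these lines could be sketched as below. This is a hypothetical, self-contained illustration: the `Node`, `Hooks`, and `parseWithHooks` names are my own and not part of `x/net/html`; a real integration would fire the callbacks from inside the parser's tree-construction steps.

```go
package main

import "fmt"

// Node is a minimal stand-in for html.Node, just enough to walk a tree.
type Node struct {
	Data     string
	Children []*Node
}

// Hooks sketches the callback set the parser could accept. The event names
// mirror the "inserted"/"connected" steps from the DOM tree specs.
type Hooks struct {
	Inserted  func(n *Node) // node was inserted under its parent
	Connected func(n *Node) // node's whole subtree is now attached
}

// parseWithHooks simulates a parser firing hooks while building the tree:
// Inserted fires before a node's children are processed, Connected after.
func parseWithHooks(root *Node, h Hooks) {
	var walk func(n *Node)
	walk = func(n *Node) {
		if h.Inserted != nil {
			h.Inserted(n)
		}
		for _, c := range n.Children {
			walk(c)
		}
		if h.Connected != nil {
			h.Connected(n)
		}
	}
	walk(root)
}

// collectEvents builds a tiny document and records the hook firing order.
func collectEvents() []string {
	doc := &Node{Data: "html", Children: []*Node{
		{Data: "head"},
		{Data: "body", Children: []*Node{{Data: "script"}}},
	}}
	var events []string
	parseWithHooks(doc, Hooks{
		Inserted:  func(n *Node) { events = append(events, "inserted:"+n.Data) },
		Connected: func(n *Node) { events = append(events, "connected:"+n.Data) },
	})
	return events
}

func main() {
	fmt.Println(collectEvents())
}
```

A `<script>` element's execution would hang off one of these callbacks, so the script runs before later siblings exist in the tree, matching browser behavior.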
**Alternate solution**
The suggested _hooks_ are just _one possible solution_ to the problem;
another potential solution could be to let the caller pass in some kind of "Node Factory". But that does seem more complex.
**A bit background about the idea**
The idea was basically to be able to test HTTP apps written in Go, but where client-side script plays an important role. The standard library provides everything necessary if the test only needs to inspect the HTTP response, e.g. response headers or body. But if you want to verify the JavaScript behavior, you need something more.
An example could be using HTMX and Go, a tech combo that seems to be gaining some popularity. Here, the behavior of the app depends on setting the right attributes on specific HTML elements, and on being sure that the returned HTTP responses play well with those tags.
Being in written in Go, it easily bypass the TCP layer entirely and connect directly to the root `net/http.Handler`. This makes it easy to let tests run in parallel with each test using with their own http handler; which could possibly have different dependencies mocked or stubbed for different test cases.
The project is as (but this is an extremely early prototype) is in https://github.com/stroiman/go-dom | Proposal | low | Minor |
2,647,187,658 | PowerToys | PowerRename Utility | ### Description of the new feature / enhancement
I have tried the feature and it's great! However, it only maxes out at 20 files. Can you make the maximum number higher, like 999 or unlimited?
### Scenario when this would be used?
A lot of users handle lots of files that need renaming, especially documents, music, or video files, and many of us rename files by the thousands. I personally use a third-party app for that, but since the feature is already included in PowerToys and accessible via the context menu, raising the limit would make our renaming easier.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,647,193,310 | TypeScript | "Referenced projects must have the new composite setting enabled" does not hold true | ### ๐ Search Terms
composite, project references, solution-style tsconfig
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about tsconfig.
### โฏ Playground Link
https://vite.new/react-ts
### ๐ป Code
The above link is a project created from the vite react-ts template (https://vite.new/react-ts).
It uses project references in `tsconfig.json`, but no `composite: true` specified in both `tsconfig.app.json` and `tsconfig.node.json`.
On the other hand, the [TypeScript handbook](https://www.typescriptlang.org/docs/handbook/project-references.html#composite) says:
> Referenced projects must have the new [composite](https://www.typescriptlang.org/tsconfig#composite) setting enabled.
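Concretely, following the handbook would mean each referenced project (e.g. `tsconfig.app.json`) sets `composite`. A hedged sketch; the field values here are illustrative, not the exact vite template contents:

```jsonc
// tsconfig.app.json (sketch): per the handbook, a project referenced from
// tsconfig.json's "references" array would declare composite: true, which
// also forces declaration output.
{
  "compilerOptions": {
    "composite": true,
    "declaration": true
  },
  "include": ["src"]
}
```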
### ๐ Actual behavior
The project builds (`npm run build`) successfully without `composite: true`.
### ๐ Expected behavior
If I understand correctly, the documentation means I have to specify `composite: true` when using `references`. I'd expect an error or warning if I didn't follow the guide.
### Additional information about the issue
I'm curious whether it's a bug or a feature.
If it's a bug, what's the expected behavior?
If it's a feature, I'd like to see the documentation updated accordingly. | Docs | low | Critical |
2,647,218,321 | ollama | Inconsistent Embedding Results with Non-Power-of-Two Context Sizes | ### What is the issue?
When using different context sizes (`num_ctx`) with the Ollama embedding model, I noticed big differences in the cosine similarity of the embeddings. Specifically, when I set the context size to a non-power-of-two (like 513), the similarity scores drop significantly compared to powers of two (like 512 or 1024). This suggests that the model might be optimized for powers of two, leading to inconsistent results with other values.
In contrast, other embedding providers like FastEmbed and Sentence Transformers produce stable results even with context sizes like `2^x + 1` (e.g., 513). The similarity between FastEmbed and Sentence Transformers embeddings is nearly perfect, regardless of context size, indicating that this issue seems specific to Ollama.
### Steps to Reproduce
1. Run the code below to generate embeddings with Ollama using different context sizes (512, 513, and 1024).
2. Compare the cosine similarity of these embeddings with those from FastEmbed and Sentence Transformers.
3. Observe that Ollamaโs similarity scores vary a lot with non-power-of-two context sizes, while FastEmbed and Sentence Transformers stay consistent.
### Code
```python
from ollama import Client
from fastembed import TextEmbedding
from sentence_transformers import SentenceTransformer
import numpy as np
target_data = """Text data, should be something big."""
fe_nomic = TextEmbedding(model_name="nomic-ai/nomic-embed-text-v1.5", cache_dir="fastembed_cache")
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
ollama = Client(host='http://localhost:11434')
ollama512 = ollama.embed(
model="nomic-embed-text:v1.5",
truncate=True,
options={
"num_ctx": 512
},
input=target_data
)
ollama513 = ollama.embed(
model="nomic-embed-text:v1.5",
truncate=True,
options={
"num_ctx": 513
},
input=target_data
)
ollama1024 = ollama.embed(
model="nomic-embed-text:v1.5",
truncate=True,
options={
"num_ctx": 1024
},
input=target_data
)
ollama1 = np.array(ollama512["embeddings"][0])
ollama2 = np.array(ollama513["embeddings"][0])
ollama3 = np.array(ollama1024["embeddings"][0])
fe_embeddings = list(fe_nomic.embed([target_data]))[0]
embeddings = np.array(model.encode(target_data))
from numpy import dot
from numpy.linalg import norm
a = ollama1
b = ollama2
c = fe_embeddings
d = embeddings
f = ollama3
print("-------- Testing 2^9 ---------")
print(dot(a, c)/(norm(a)*norm(c)), "- FastEmbed vs Ollama 512")
print(dot(a, d)/(norm(a)*norm(d)), " - Sentence Transformers vs Ollama 512")
print("--------- Testing 2^9+1 ---------")
print(dot(b, c)/(norm(b)*norm(c)), "- FastEmbed vs Ollama 513")
print(dot(d, b)/(norm(d)*norm(b)), " - Sentence Transformers vs Ollama 513")
print("--------- Testing 2^10 ---------")
print(dot(f, c)/(norm(f)*norm(c)), "- FastEmbed vs Ollama 1024")
print(dot(f, d)/(norm(f)*norm(d)), "- Sentence Transformers vs Ollama 1024")
print("--------- Fastembed vs Sentence Transformers ---------")
print(dot(c, d)/(norm(c)*norm(d)), "- FastEmbed vs Sentence Transformers")
```
### Results
| Context Size | Model Comparison | Cosine Similarity |
|--------------|------------------------------------|--------------------|
| 512 | FastEmbed vs Ollama | 0.9376883175384901 |
| 512 | Sentence Transformers vs Ollama | 0.9376883820976254 |
| 513 | FastEmbed vs Ollama | 0.4483180557634305 |
| 513 | Sentence Transformers vs Ollama | 0.4483179868322506 |
| 1024 | FastEmbed vs Ollama | 0.9136944500983835 |
| 1024 | Sentence Transformers vs Ollama | 0.9136943746759493 |
| - | FastEmbed vs Sentence Transformers | 1.0000001 |
### Observations
- When using context sizes that are powers of two (512 and 1024), the cosine similarity between Ollama embeddings and other models is high, indicating consistent results.
- For the non-power-of-two context size (513), the cosine similarity scores drop significantly, showing lower consistency.
- FastEmbed and Sentence Transformers provide stable and nearly identical embeddings across all context sizes, including non-standard ones like 513. The cosine similarity between FastEmbed and Sentence Transformers embeddings is nearly 1.0, indicating perfect alignment.
### Expected Behavior
The model should produce stable and consistent embeddings regardless of the context size, as long as the size is within reasonable limits. Non-standard context sizes (like `2^x + 1`) should not lead to significantly different embeddings.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1 | bug | low | Minor |
2,647,256,732 | material-ui | [joy-ui][docs] The `overlay` property of the `Skeleton` component doesn't seem to have any effect | ### Related page
https://mui.com/joy-ui/api/skeleton/
### Kind of issue
Other
### Issue description
Hi! I'm writing a React component library that styles Joy UI components the Tailwind CSS way.
I recently implemented a Skeleton component, and while writing the API documentation and reviewing it again, I found that the `overlay` property has no effect on the style of the Skeleton.
The API documentation says that when `overlay` is `true`, the position of the Skeleton becomes `absolute`, but it seems that the style is currently applied by setting the `variant` property to `overlay`.
Did they miss removing the `overlay` property during the Joy UI patching process?
### Context
_No response_
**Search keywords**: skeleton, overlay | on hold,support: docs-feedback | low | Minor |
2,647,257,116 | ollama | detect missing GPU runners and don't report incorrect GPU info/logs | ### What is the issue?
```
$ ollama -v
ollama version is 0.4.1
$ ollama run llama3.2-vision:latest
$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.2-vision:latest 38107a0cd119 12 GB 100% GPU 2 minutes from now
```
The logs also say Ollama is offloading to CUDA:
```
ollama[1773]: [GIN] 2024/11/10 - 21:32:56 | 200 | 22.078108ms | 127.0.0.1 | POST "/api/show"
ollama[1773]: time=2024-11-10T21:32:56.205+08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
ollama[1773]: time=2024-11-10T21:32:56.342+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 parallel=1 available=16139026432 required="11.3 GiB"
ollama[1773]: time=2024-11-10T21:32:56.440+08:00 level=INFO source=server.go:105 msg="system memory" total="15.4 GiB" free="11.3 GiB" free_swap="12.2 GiB"
ollama[1773]: time=2024-11-10T21:32:56.442+08:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
ollama[1773]: time=2024-11-10T21:32:56.443+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama1704822012/runners/cpu_avx2/ollama_llama_server --model /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj /var/lib/ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 6 --no-mmap --parallel 1 --port 40225"
ollama[1773]: time=2024-11-10T21:32:56.443+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
ollama[1773]: time=2024-11-10T21:32:56.443+08:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
ollama[1773]: time=2024-11-10T21:32:56.444+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
ollama[1773]: time=2024-11-10T21:32:56.446+08:00 level=INFO source=runner.go:863 msg="starting go runner"
ollama[1773]: time=2024-11-10T21:32:56.446+08:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6
ollama[1773]: time=2024-11-10T21:32:56.446+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:40225"
ollama[1773]: llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
```
but nvidia-smi shows nothing there:
```
$ nvidia-smi
Sun Nov 10 21:38:22 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4060 Ti Off | 00000000:01:00.0 On | N/A |
| 0% 35C P8 14W / 165W | 498MiB / 16380MiB | 7% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2054 G ...nim4annni-xorg-server-21.1.13/bin/X 252MiB |
| 0 N/A N/A 3315 G ...bcvgsdr9v5mjmr-picom-12.3/bin/picom 94MiB |
| 0 N/A N/A 10451 G ...irefox-132.0.1/bin/.firefox-wrapped 118MiB |
+-----------------------------------------------------------------------------------------+
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1 | feature request | medium | Critical |
2,647,265,859 | godot | Application crashes on initial export and installation to Android device using C# | ### Tested versions
- Reproducible in Godot 4.3-mono, godot-4.4-dev4-mono
### System information
Windows/MacOS - Compatibility, Mobile, Forward+
.NET version - .net7.0
### Issue description
Deploying my Godot Mono application (C#) to any Android device (AAB or APK), either from Windows 11 or from a freshly configured macOS environment, produces the crash logs posted below, with 'verbose stdout' enabled. This occurs 9/10 times when deploying my application. It does not occur with a fresh project, and I've tried disabling all plugins/addons/autoloads while also starting the application with a new, basic scene. This does not prevent crashing. This issue prevents me from going to a Production release in the Play Console, as Application Not Responding (ANR) errors are monitored by Google. I've read the open issues and tried upgrading to the Godot 4.4 dev-3 and 4.4 dev-4 versions without any change.
Because I believed it might have been my Windows 11 export environment, I set up my macOS device from scratch to rule out my environment, and I was able to reproduce this error exactly there.
When distributing my application as either an AAB or APK, every user who installs it reports an initial crash.
```
--------- beginning of main
11-10 14:37:53.885 15743 15775 I godot : Godot Engine v4.3.stable.mono.official.77dcf97d8 - https://godotengine.org
11-10 14:37:53.895 15743 15775 I godot : TextServer: Added interface "Dummy"
11-10 14:37:53.977 15743 15775 I godot : TextServer: Added interface "ICU / HarfBuzz / Graphite (Built-in)"
11-10 14:37:53.997 15743 15775 I godot : Using "default" pen tablet driver...
11-10 14:37:54.357 15743 15775 I godot : OpenGL API OpenGL ES 3.2 V@0615.74 (GIT@dad4038ba6, If56d4a5bb8, 1690544947) (Date:07/28/23) - Compatibility - Using Device: Qualcomm - Adreno (TM) 730
11-10 14:37:54.375 15743 15775 I godot :
11-10 14:37:54.375 15743 15775 I godot : TextServer: Primary interface set to: "ICU / HarfBuzz / Graphite (Built-in)".
11-10 14:37:54.470 15743 15775 I godot : .NET: Initializing module...
11-10 14:37:55.131 15743 15775 E godot : USER ERROR: Cannot change current directory to 'cs'.
11-10 14:37:55.131 15743 15775 E godot : at: _copy_dir (core/io/dir_access.cpp:451)
11-10 14:37:55.133 15743 15775 E godot : USER ERROR: Condition "da->copy_dir(packed_path, data_dir_root) != OK" is true.
11-10 14:37:55.133 15743 15775 E godot : at: _GodotSharpDirs (modules/mono/godotsharp_dirs.cpp:198)
11-10 14:37:55.133 15743 15775 E godot : USER ERROR: Can't open dynamic library: GAMENAME.so. Error: dlopen failed: library "GAMENAME.so" not found.
11-10 14:37:55.133 15743 15775 E godot : at: open_dynamic_library (platform/android/os_android.cpp:240)
11-10 14:37:55.133 15743 15775 E godot : USER ERROR: .NET: Failed to load hostfxr
11-10 14:37:55.133 15743 15775 E godot : at: initialize (modules/mono/mono_gd/gd_mono.cpp:393)
11-10 14:37:55.240 15743 15775 I godot : CORE API HASH: 1186918252
11-10 14:37:55.241 15743 15775 I godot : EDITOR API HASH: 3138346866
```
My .csproj file
```
<Project Sdk="Godot.NET.Sdk/4.3.0">
<PropertyGroup>
<TargetFramework>net7.0</TargetFramework>
<EnableDynamicLoading>true</EnableDynamicLoading>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="DotNetty.Codecs" Version="0.7.5" />
<PackageReference Include="DotNetty.Common" Version="0.7.5" />
<PackageReference Include="DotNetty.Handlers" Version="0.7.5" />
<PackageReference Include="Firebelley.GodotUtilities" Version="4.1.2" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
</ItemGroup>
<ItemGroup>
<Content Include="client.cfg" />
<Content Include="client.cfg.acc" />
<Content Include="client.cfg.local" />
<Content Include="client.cfg.prod" />
</ItemGroup>
</Project>
```
### Steps to reproduce
Either:
1. Remote Debug the application and wait for installation to observe the crash on your device
2. Open it again to observe it run regularly
OR
1. Install the APK manually from your Android device and observe the crash on first run of the installed application
2. Open it again to observe it run regularly
### Minimal reproduction project (MRP)
I am unable to reproduce the issue in a simple project as of now. An APK that exhibits the given problem can be downloaded from this [direct link](https://mega.nz/file/ATV0UbaI#sMNqF8o4DPrhf6O-44VXmdYD2v6EXWsn2_xexdk1rRU) (it's too big for an attachment on GitHub). | bug,platform:android,topic:porting,confirmed,topic:dotnet,crash | low | Critical |
2,647,266,475 | godot | Long variable names are an issue in animation player | ### Tested versions
4.3-stable
### System information
Windows10
### Issue description
Due to the animation player's positioning in the interface and general UX construction, animating parameters that have long names can be challenging.
The names of the parameters aren't always under the user's control: for example, instance shader parameters have the whole `instance_shader_parameters/` prefix.


Please consider showing the end of the parameter name rather than the start when needing to wrap the text.
### Steps to reproduce
- open project
- open "node3d scene"
- select animation player
- observe the amount of space taken by instance shader parameters
### Minimal reproduction project (MRP)
[mrp-instance-anim-param.zip](https://github.com/user-attachments/files/17691652/mrp-instance-anim-param.zip)
| topic:editor,usability,topic:animation | low | Minor |
2,647,273,234 | react | [React 19]Inquiry About React 19 Support for ReactPress | Hello ReactPress Team,
I hope this message finds you well. I am currently evaluating [ReactPress](https://github.com/fecommunity/reactpress) for a project and have a question regarding its compatibility with future versions of React. Specifically, I would like to know if there are any plans or ongoing efforts to ensure that ReactPress supports React 19.
React 19 introduces several new features and improvements that could potentially benefit ReactPress. As such, I am keen to understand if the team is already working on integrating these changes or if there are any known issues that might prevent ReactPress from running smoothly on React 19.
Here are a few specific questions I have:
1. Is ReactPress currently compatible with React 19?
2. If not, are there any plans to update ReactPress to support React 19 in the future?
3. Are there any known issues or limitations that users should be aware of if they try to use ReactPress with React 19?
I appreciate your time and effort in maintaining this wonderful project. Thank you for any information you can provide regarding this matter.
Best regards,
fecommunity | React 19 | medium | Minor |
2,647,279,857 | transformers | Bug when using StaticCache in Qwen2.5 Inference | ### System Info
```shell
Collecting environment information...
WARNING 11-10 14:19:08 _custom_ops.py:14] Failed to import from vllm._C with ImportError('/mnt/bbuf/vllm-backup/vllm/_C.abi3.so: undefined symbol: _ZN5torch3jit11parseSchemaERKSs')
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6462C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 6600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.1.6+cu121torch2.4
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.16.0
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.4.0
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchao==0.6.1
[pip3] torchvision==0.19.0
[pip3] transformers==4.47.0.dev0
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: 6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX SYS SYS SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU1 PIX X SYS SYS SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU2 SYS SYS X PIX SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU3 SYS SYS PIX X SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU4 SYS SYS SYS SYS X PIX SYS SYS 32-63,96-127 1 N/A
GPU5 SYS SYS SYS SYS PIX X SYS SYS 32-63,96-127 1 N/A
GPU6 SYS SYS SYS SYS SYS SYS X PIX 32-63,96-127 1 N/A
GPU7 SYS SYS SYS SYS SYS SYS PIX X 32-63,96-127 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I use StaticCache to perform inference on Qwen2.5, a bug occurs. In this example, I pass the tensor after the embedding layer to model.generate instead of the token IDs from the tokenizer. The reproduction script is as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache, StaticCache
model_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.generation_config.max_new_tokens = 128
prompt_cache = StaticCache(config=model.config, batch_size=1, max_cache_len=32768, device="cuda", dtype=torch.bfloat16)
INITIAL_PROMPT = "You are a helpful assistant. "
inputs_initial_prompt = tokenizer(INITIAL_PROMPT, return_tensors="pt").to("cuda")
inputs_embeds = model.get_input_embeddings()(inputs_initial_prompt.input_ids)
outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)
response = tokenizer.batch_decode(outputs)[0]
print(response)
prompts = ["Help me to write a blogpost about travelling."]
responses = []
for prompt in prompts:
new_inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
new_input_ids = torch.cat([outputs, new_inputs.input_ids], dim=1)
inputs_embeds = model.get_input_embeddings()(new_input_ids)
outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)
response = tokenizer.batch_decode(outputs)[0]
print(response)
responses.append(response)
```
I am using the latest version of Transformers, compiled from source. The error message is as follows:
```shell
Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.42it/s]
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
You will be given a task. You must generate a detailed and long response. You should use your own words. You should not simply translate or repeat the prompt. You can write about related topics if it helps make your response more detailed. Sure, I'd be happy to provide a detailed and long response on the topic you've presented. However, since no specific topic was mentioned in your request, I'll assume you're interested in a comprehensive discussion about the benefits of renewable energy sources. Let's dive into this fascinating subject.
Renewable energy sources, such as solar, wind, hydroelectric, geothermal,
Traceback (most recent call last):
File "/mnt/bbuf/transformers/../debug.py", line 83, in <module>
outputs = model.generate(inputs_embeds=inputs_embeds, past_key_values=prompt_cache)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/mnt/bbuf/transformers/src/transformers/generation/utils.py", line 2231, in generate
result = self._sample(
File "/mnt/bbuf/transformers/src/transformers/generation/utils.py", line 3215, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "/mnt/bbuf/transformers/src/transformers/generation/utils.py", line 454, in prepare_inputs_for_generation
attention_mask = causal_mask_creation_function(
File "/mnt/bbuf/transformers/src/transformers/models/qwen2/modeling_qwen2.py", line 1063, in _prepare_4d_causal_attention_mask_with_cache_position
causal_mask *= diagonal_attend_mask
RuntimeError: The size of tensor a (0) must match the size of tensor b (4) at non-singleton dimension 0
```
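For reference, the failing broadcast in the traceback can be reproduced in isolation with plain torch. The tensor shapes below are assumptions chosen to mirror the error message ("tensor a (0)" and "tensor b (4)"), not values taken from the model:

```python
import torch

# Hypothetical shapes: 0 query rows in the freshly built causal mask vs.
# 4 rows for the new tokens' diagonal attend mask. Sizes 0 and 4 are not
# broadcast-compatible, so the in-place multiply raises.
causal_mask = torch.zeros(0, 8)
diagonal_attend_mask = torch.ones(4, 8)

try:
    causal_mask *= diagonal_attend_mask
except RuntimeError as err:
    print(err)  # The size of tensor a (0) must match the size of tensor b (4) ...
```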
### Expected behavior
The above script should run successfully when using StaticCache. | Good Second Issue,bug | low | Critical |
2,647,306,513 | ollama | install.sh should not touch the system | ### What is the issue?
I was horrified to see the install script attempt to install kernel headers, NVIDIA drivers, and other GUI components (xorg) on my headless server. It is not okay for an application to touch the system this way, especially with operations this invasive. This has to be opt-in; at the very least remove the `-y` from the package manager lines. Seriously, not even a confirmation from the user?!
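One opt-in pattern the script could adopt instead of unconditional `-y` installs, sketched here with an illustrative function name and package list (neither is from the actual install.sh):

```shell
# Hypothetical sketch of an opt-in prompt; the prompt goes to stderr so
# captured output stays clean.
confirm_install() {
  printf 'About to install: %s\nProceed? [y/N] ' "$*" >&2
  read -r answer
  case "$answer" in
    [Yy]*) echo "installing $*" ;;  # the real script would invoke the package manager here
    *)     echo "skipped" ;;
  esac
}

# A headless user can now decline driver installation.
echo n | confirm_install nvidia-driver linux-headers-generic 2>/dev/null  # prints "skipped"
```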
### OS
Linux
### GPU
Other
### CPU
AMD
### Ollama version
latest | bug | low | Minor |
2,647,306,677 | rust | unsafe_op_in_unsafe_fn causes unnecessary unsafe warnings | I tried this code:
```rust
#![warn(unsafe_op_in_unsafe_fn)]
pub unsafe fn f() {
let _ = std::mem::zeroed::<i16>();
unsafe {
let _ = std::mem::zeroed::<i32>();
}
}
```
This causes a diagnostic suggestion to rewrite it to:
```rust
#![warn(unsafe_op_in_unsafe_fn)]
pub unsafe fn f() { unsafe {
let _ = std::mem::zeroed::<i16>();
unsafe {
let _ = std::mem::zeroed::<i32>();
}
}}
```
However, this in turn causes more warnings which cannot be auto-fixed:
```
warning: unnecessary `unsafe` block
--> src/main.rs:4:5
|
2 | pub unsafe fn f() { unsafe {
| ------ because it's nested under this `unsafe` block
3 | let _ = std::mem::zeroed::<i16>();
4 | unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
```
I don't know if it would be possible to change `unused_unsafe` to have a machine-applicable suggestion to remove the `unsafe` keyword, or whether `unsafe_op_in_unsafe_fn` could incorporate those suggestions. It probably should not remove the brackets, since that would be a semantic change, and I think it would be difficult to get right (though it could actually help with some of the problems of the tail-drop-order changes).
Priority-wise, this is just an annoyance, since the warnings do not inhibit migration. They just need to be cleaned up manually, which for a large codebase could be a lot of work.
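For reference, the cleaned-up form the migration ends at looks like this (a sketch based on the function above; the lint attributes are assumed to sit at the crate root):

```rust
// Manually cleaned-up form of `f` from above: one `unsafe` block covers the
// body, and the now-redundant inner block is gone, so neither
// `unsafe_op_in_unsafe_fn` nor `unused_unsafe` fires.
pub unsafe fn f() {
    unsafe {
        let _ = std::mem::zeroed::<i16>();
        let _ = std::mem::zeroed::<i32>();
    }
}
```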
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (59cec72a5 2024-11-08)
binary: rustc
commit-hash: 59cec72a57af178767a7b8e7f624b06cc50f1087
commit-date: 2024-11-08
host: aarch64-apple-darwin
release: 1.84.0-nightly
LLVM version: 19.1.3
```
| A-lints,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut,D-edition,A-edition-2024,L-unsafe_op_in_unsafe_fn,L-false-positive,I-edition-triaged | low | Critical |
2,647,310,271 | ollama | Export Current Session and Saved modelfiles | How can we export saved model files?
I want to summarize the model files and streamline the current chat context. | feature request | low | Minor |
2,647,312,984 | ollama | `/save` overwrites everything including system and template and previous messages | ### What is the issue?
The `/save` command overwrites everything and only includes the current context; any previously saved data is lost, including the system prompt and template.
To reproduce:
```shell
ollama create model -f customtemplate.modelfile
ollama run model
>>> /set system you are an assistant
>>> how are you?
>>> /save 02
>>> what's the time?
>>> /save 02
>>> /load 02
>>> /show template
>>> /show system
>>> what questions have i asked you so far?
You've asked me three questions so far:
1. How are you?
2. What's the time?
>>> /bye
ollama run 02
/show modelfile
>>> what questions have i asked you so far?
You've asked me one question so far:
1. What questions have I asked you so far?
```
### OS
macOS
### GPU
AMD
### CPU
Intel
### Ollama version
0.4.0 | bug | low | Minor |
2,647,353,203 | node | Unexpected behavior in module customization hooks / loader, for `require` | It seems I am one of the early adopters of module hooks.
I liked the idea of the module customization hooks, and tried to use it in my project.
However, I believe the current behavior of `require` handling in the esm loader is buggy.
Relevant [source code, here](https://github.com/nodejs/node/blob/69f8794cbea5813ff9abb490c2cd875b8332f605/lib/internal/modules/esm/translators.js#L130-L159)
https://github.com/nodejs/node/blob/69f8794cbea5813ff9abb490c2cd875b8332f605/lib/internal/modules/esm/translators.js#L130-L159
The resolving of the `specifier` is done in the main thread, using `Module._resolveFilename`, before being sent to the customization hooks.
The resolver hook is then receiving a resolved `file://` URL, instead of the real `specifier`.
This does not match the documented behavior of the module customization hooks.
For my use case, it has made the hooks useless.
To make it clear there are two bugs:
1. The cjs specifiers are being resolved in the wrong thread.
The hooks therefore cannot really work: if the "expected" file does not actually exist on disk, the cjs `Module._resolveFilename` throws an error before the hooks ever run.
2. Even when the first bug does not trigger, the resolve hook receives the wrong `specifier`.
The root of this issue might be that the default resolver in the customization hooks does not support resolving CJS modules.
An easy fix would be to move the `Module._resolveFilename` into the default resolver.
The loading of `.node` native add-ons may also get in the way.
But IMO it should also go through the hooks, and eventually be loaded as `{type: 'commonjs', source: undefined, url: 'file://... .node'}`
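A load hook handling that case might look like the following sketch. Note that the documented return key for the module type is `format` (not `type`), and `nextLoad` here is whatever the hooks chain provides:

```javascript
// Hypothetical sketch of a load hook that hands `.node` native add-ons
// back to the CommonJS loader, in the shape suggested above.
async function load(url, context, nextLoad) {
  if (url.endsWith('.node')) {
    // No source: let the CJS loader dlopen the binary itself.
    return { format: 'commonjs', source: undefined, shortCircuit: true };
  }
  return nextLoad(url, context);
}
```

In a real hooks file this function would be exported and registered via `module.register()`.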
| loaders | low | Critical |