| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,659,904,814 | deno | Pass Dprint options directly from deno.json during deno fmt | The Deno doc [here](https://docs.deno.com/runtime/fundamentals/linting_and_formatting/#formatting) says,
> The formatter can be configured in a [deno.json](https://docs.deno.com/runtime/fundamentals/configuration/) file. You can specify custom rules, plugins, and settings to tailor the formatting process to your needs.
but never says how. It does say, however, that the underlying formatter is Dprint, so presumably the above can be accomplished with the Dprint options documented [here](https://dprint.dev/plugins/typescript/config/).
But if you add any of those to deno.json, `deno fmt` complains.
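For illustration, a `deno.json` like the following (a hypothetical config; the last two keys come from the Dprint TypeScript plugin docs, not from Deno's accepted `fmt` options) is rejected by `deno fmt`:

```jsonc
{
  "fmt": {
    // Accepted today: only a small hard-coded subset, e.g.
    "lineWidth": 100,
    "semiColons": true,
    // Rejected: Dprint options such as
    "bracePosition": "nextLine",
    "nextControlFlowPosition": "nextLine"
  }
}
```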
This needlessly precludes the use of `deno fmt` for anyone who doesn't like the options it hard-codes outside the tiny subset that's allowed. For example, I can't prevent the formatter from messing up my braces and flow-control statements. But Dprint offers the exact options I want:
```jsonc
"bracePosition": "nextLine",
"nextControlFlowPosition": "nextLine"
``` | suggestion,deno fmt | low | Minor |
2,659,934,221 | tailwindcss | @property isn't supported in shadow roots | **What version of Tailwind CSS are you using?**
For example: `4.0.0-alpha.34`
**What build tool (or framework if it abstracts the build tool) are you using?**
Web components with shadow roots.
**What version of Node.js are you using?**
v22.2.0
**What browser are you using?**
Chrome
**What operating system are you using?**
macOS
**Reproduction URL**
https://github.com/blittle/tw-shadow
**Describe your issue**
Tailwind v4 uses `@property` to define defaults for custom properties. At the moment, shadow roots do _not support_ `@property`. It used to be explicitly denied in the spec, but it looks like there's talk on adding it: https://github.com/w3c/css-houdini-drafts/pull/1085
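For concreteness, the kind of rule involved looks like this (an illustrative property name, not Tailwind's exact output); inside a shadow root it is currently ignored:

```css
@property --tw-example {
  syntax: "*";
  inherits: false;
  initial-value: 0;
}
```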
I don't know if this is something tailwind should fix, but it took me a while to find the issue, so it's probably worth keeping this issue to document the limitation.
Here is a work-around, [just attaching the `@property` definitions to the base document](https://benfrain.com/using-css-property-inside-shadowroot-web-components-workaround/). It would be nice if tailwind provided an easy way to import just that content.
An easy way to do that with Vite is to create a tailwind css file specifically for the properties and apply a transform:
```js
export default defineConfig(() => {
  return {
    ...
    plugins: [
      tailwindcss(),
      {
        name: "tailwind-properties",
        transform(code, id) {
          if (id.endsWith("tailwind-properties.css?inline")) {
            // Change custom properties to inherit
            code = code.replaceAll("inherits: false", "inherits: true");
            // Remove everything before the property declarations
            code = code.substring(code.indexOf("@property"));
            return code;
          }
        },
      },
    ],
  };
});
``` | v4 | low | Minor |
2,659,954,268 | react | [Compiler Bug]: Incorrect "Writing to a variable defined outside a component or hook is not allowed." error | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEwBUYCAgllgQL4EBmMEGBA5GgHYDWrA3ADodMOfERJkAwgEMANjIBGUuFwA04hBIgc8CAB6E6jZmxgIlePoOG5CxUggCiMJjABC0DgBMpMAJ60GJhZWU3MAWgRnXDD5D28-Sw4rbBsxLCYAcxgpDE1tPQNA41YAOgB6Th0YDlkwEoArMETBMoAqVsECVoJyAgAlMzhCAAsICC4GXAIlRDAwTgyiAAEZTgmABUzsjDpOMDwpDkRprwIAAz00PDOGKCO8NC0CPGGpQjQwdU9JmAIwUYA7gsTgRIi4CAAKABuaCkILBU1id3ivgAlCdvpcHhwMiVOq0yoI9CJCPQ7kNHhx1JsIFkchD0cBOgQZAhbH9Ae5kT5-HQALzqJwuLleHkMgRUllssRY5DTKgOXRXAIC+yULDi5ms9npWnbFXqPI6fQQ3V03JaY14VESrXSrEEVWSWQKJRcCHMgiQhEwOV3LgcCAAjjovkAPiInq9BF1szqWM0ngQjoIAEZbZLoxygyKURCfTao16pAqlXh81EYIXM7QVFGANolrCKq5qf45uI8gC6dcl1eZpjwsCpxDN2zUDpoEpoghANCAA
### Repro steps
1. Run ESLint with eslint-plugin-react-compiler enabled on the code from the repro and observe an error "Writing to a variable defined outside a component or hook is not allowed. Consider using an effect".
The error is caused by using `process.exitCode = 1` inside a `useCallback` hook, which does seem like a false positive. Or am I missing something? Thank you.
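For reference, the pattern reduces to something like this (a reconstructed sketch; the playground link encodes the actual repro, and the hook name here is illustrative):

```js
import { useCallback } from "react";

function useFailureHandler() {
  return useCallback(() => {
    // Flagged: "Writing to a variable defined outside a component or hook
    // is not allowed" — even though process.exitCode is a process-level
    // global, not reactive state.
    process.exitCode = 1;
  }, []);
}
```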
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
eslint-plugin-react-compiler@19.0.0-beta-a7bf2bd-20241110 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,660,011,218 | transformers | RFC: Reducing Download Traffic and Latency with ZipNN Lossless Compression for AI Models | ### Feature request
This RFC proposes integrating a lossless compression method called ZipNN into Hugging Face Transformers to reduce latency and traffic for downloading models. ZipNN is specifically designed for AI models, offering a model size reduction of 17% to over 50%, depending on the model format and compressibility. Additionally, it significantly reduces time for the user due to its fast decompression speed, allowing compressed models to be ready for use almost immediately without impacting model accuracy.
### Motivation
From a [LinkedIn post by Julien Chaumond](https://www.linkedin.com/posts/julienchaumond_i-am-super-excited-to-announce-that-weve-activity-7227305609254113280-zx3H), August 2024, Hugging Face holds 1.3M models, with a cumulative storage space of 12PB. They also serve 1 billion daily requests, amounting to a network bandwidth of around 6 PetaBytes per day!
Downloading large models from Hugging Face can be time-consuming; for example, downloading a model like Llama-3.1-405B can take nearly a day on a 10 MB/s home connection or nearly 2 hours on a 125 MB/s high-bandwidth connection. ZipNN could reduce this time by up to 33%.
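As a sanity check on those figures (assuming the 812 GB size listed for Llama-3.1-405B in the table below, and decimal megabytes):

```python
size_mb = 812 * 1000  # meta-llama/Llama-3.1-405B in BF16, ~812 GB

hours_home = size_mb / 10 / 3600    # 10 MB/s home connection
hours_fast = size_mb / 125 / 3600   # 125 MB/s high-bandwidth connection

print(f"{hours_home:.1f} h")  # ~22.6 h, i.e. "nearly a day"
print(f"{hours_fast:.1f} h")  # ~1.8 h, i.e. "nearly 2 hours"

# A 33% size reduction shaves off a third of either figure.
print(f"{hours_home * 0.33:.1f} h saved at 10 MB/s")
```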
## Model Comparison Table
We took the 20 most downloaded models on Hugging Face as of late Oct 2024:
(Based on 1GB from the middle of the model).
- **17% Savings: 9 models**
- **33% Savings: 3 models**
- **50% or Greater Savings: 8 models**
| Model Name | Format | Size | ZipNN Compression Remaining (%) |
|:-----------|:-------|:-----|:--------------------------|
| BAAI/bge-base-en-v1.5 | FP32 | 0.4GB | 42.2% |
| sentence-transformers/all-mpnet-base-v2 | FP32 | 0.4GB | 83% |
| nesaorg/benchmark_v0 | FP32 | 1.35GB | 82.38% |
| google-bert/bert-base-uncased | FP32 | 0.4GB | 83.17% |
| sentence-transformers/all-MiniLM-L6-v2 | FP32 | 0.09GB | 82.07% |
| Qwen/Qwen2.5-1.5B-Instruct | BF16 | 3GB | 66.86% |
| openai/whisper-large-v2 | FP32 | 6.1GB | 42.8% |
| FacebookAI/xlm-roberta-large | FP32 | 2.2GB | 42.9% |
| 1231czx/llama3_it_ultra_list_and_bold500 | BF16 | 16GB | 66.77% |
| openai/clip-vit-base-patch32 | FP32 | 0.6GB | 43% |
| jonatasgrosman/wav2vec2-large-xlsr-53-english | FP32 | 1.26GB | 82.96% |
| openai/clip-vit-base-patch16 | FP32 | 0.6GB | 51.24% |
| google/vit-base-patch16-224-in21k | FP32 | 0.4GB | 84% |
| FacebookAI/roberta-base | FP32 | 0.5GB | 43.9% |
| nesaorg/fc_8 | FP32 | 0.13GB | 82.52% |
| nesaorg/fc_6 | FP32 | 0.1GB | 82.2% |
| BAAI/bge-small-en-v1.5 | FP32 | 0.13GB | 42.9% |
| openai/clip-vit-large-patch14 | FP32 | 1.71GB | 42.97% |
| timm/resnet50.a1_in1k | FP32 | 0.1GB | 83.51% |
| meta-llama/Llama-3.1-405B | BF16 | 812GB | 66% |
### Your contribution
# ZipNN
ZipNN (The NN stands for Neural Network) is a lossless compression library tailored to neural networks. ZipNN compresses models by targeting the skewed distribution of exponent bits in floating-point parameters, which is highly compressible. By isolating exponents and applying Entropy Encoding with Huffman codes, ZipNN achieves efficient compression without the overhead of multi-byte repetition algorithms like Lempel-Ziv. It further optimizes speed by skipping non-compressible segments and adapting strategies based on the model’s characteristics.
[ZipNN Repository Link](https://github.com/zipnn/zipnn)
[ZipNN arXiv Paper: ZIPNN: LOSSLESS COMPRESSION FOR AI MODELS](https://arxiv.org/abs/2411.05239)
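The exponent-skew claim is easy to illustrate with a toy experiment (stdlib only; Gaussian samples stand in for trained weights, which show the same effect):

```python
import collections
import random
import struct

random.seed(0)
exponents = collections.Counter()
for _ in range(10_000):
    w = random.gauss(0, 0.02)  # typical scale of trained weights
    (bits,) = struct.unpack("<I", struct.pack("<f", w))
    bf16 = bits >> 16                    # BF16 is the upper 16 bits of FP32
    exponents[(bf16 >> 7) & 0xFF] += 1   # extract the 8-bit exponent field

# The exponent byte takes only a handful of its 256 possible values,
# which is exactly the skew that Huffman coding exploits.
print(len(exponents) < 32)  # True: far fewer than 256 distinct exponents
```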
## Comparing Speed and Compression ratio of different compression methods:
(Based on 1GB from the middle of the model).
| Model Name | Format | Compression Method | Compression Remaining (%) | Compression Speed (GB/Sec) | Decompression Speed (GB/Sec) |
|:-----------|:-------|:------------------|:-------------------|:--------------------------|:----------------------------|
| meta-llama/Llama-3.1-8B-Instruct | BF16 | Zstd | 77.7% | 0.71 | 1.02 |
| meta-llama/Llama-3.1-8B-Instruct | BF16 | ZipNN | 66.4% | 1.15 | 1.65 |
| allenai/OLMo-1B-0724-hf | FP32 | Zstd | 92.3% | 0.97 | 1.02 |
| allenai/OLMo-1B-0724-hf | FP32 | ZipNN | 83.2% | 1.64 | 2.48 |
| FacebookAI/xlm-roberta-large | FP32 | Zstd | 57.4% | 0.18 | 0.77 |
| FacebookAI/xlm-roberta-large | FP32 | ZipNN | 42.9% | 0.83 | 1.41 |
## User benefits
Figure 10 in the arXiv paper shows the download and upload timing for three models, comparing the original and compressed versions, including decompression and compression times. Network speed is the primary factor affecting download and upload durations, and even for models that are less compressible, users benefit from reduced total latency when decompression and compression are included.
[Link to Figure 10 from the arXiv paper](https://github.com/zipnn/zipnn/blob/main/images/hf_download_upload_2.pdf)
## Usage
### Installation
To get started, you can install the library directly from PyPI:
```bash
pip install zipnn
```
### API Usage
You can call ZipNN directly from the API:
```python
import zipnn
zpn = zipnn.ZipNN()
compressed_buffer = zpn.compress(original_buffer)
decompressed_buffer = zpn.decompress(compressed_buffer)
```
### Command-Line Scripts
You can also use the provided wrapper [scripts](https://github.com/zipnn/zipnn/tree/main/scripts).
Note: **All ZipNN compressed files use the ".znn" extension**.
Single file compression/decompression:
```bash
python zipnn_compress_file.py model_name
python zipnn_decompress_file.py compressed_model_name.znn
```
## Hugging Face Plugin and compressed Models stored on Hugging Face
### Plugin Usage
ZipNN has a plugin for the Hugging Face transformers library that can handle ZipNN-compressed Models.
The user can save the compressed model to their local storage using the default plugin. When loading, the model goes through a fast decompression phase on the CPU while remaining compressed in its storage.
**What this means:** Each time the user loads the model, less data is transferred to the GPU cluster, with decompression happening on the CPU.
```python
from zipnn import zipnn_hf
zipnn_hf()
```
**Alternatively, to avoid future decompression**: the user can save the model uncompressed to their local storage. This way, future loads won’t require a decompression phase:
```python
from zipnn import zipnn_hf
zipnn_hf(replace_local_file=True)
```
To compress and decompress manually, simply run: [Link to scripts](https://github.com/zipnn/zipnn/tree/main/scripts)
```bash
python zipnn_compress_path.py safetensors --model royleibov/granite-7b-instruct-ZipNN-Compressed --hf_cache
```
```bash
python zipnn_decompress_path.py --model royleibov/granite-7b-instruct-ZipNN-Compressed --hf_cache
```
There are a few models compressed by ZipNN hosted on Hugging Face:
Example:
[ compressed FacebookAI/roberta-base ]( https://huggingface.co/royleibov/roberta-base-ZipNN-Compressed )
[ compressed meta-llama/Llama-3.2-11B-Vision-Instruct ]( https://huggingface.co/royleibov/Llama-3.2-11B-Vision-Instruct-ZipNN-Compressed )
And a usage example:
[Usage Example Llama-3.2-11B](https://github.com/zipnn/zipnn/blob/main/examples/huggingface_llama_3.2_example.py)
### Upload compressed models to Hugging Face:
1. Compress all the model weights
Download the scripts for compressing/decompressing AI Models:
```bash
wget -i https://raw.githubusercontent.com/zipnn/zipnn/main/scripts/scripts.txt &&
rm scripts.txt
```
```bash
python3 zipnn_compress_path.py safetensors --path .
```
2. Add the compressed weights to git-lfs tracking and correct the index json
```bash
git lfs track "*.znn" &&
sed -i 's/.safetensors/.safetensors.znn/g' model.safetensors.index.json &&
git add *.znn .gitattributes model.safetensors.index.json &&
git rm *.safetensors
```
3. Done! Now push the changes as per [the documentation](https://huggingface.co/docs/hub/repositories-getting-started#set-up):
```bash
git lfs install --force --local && # this reinstalls the LFS hooks
huggingface-cli lfs-enable-largefiles . && # needed if some files are bigger than 5GB
git push --force origin main
```
## Current status
The code is ready for use with single-threaded compression and decompression on the CPU, and ZipNN already has a few users. The next version will support multi-threading on the CPU, with a future milestone targeting GPU implementation.
# Proposed change:
Decompress any shard of a model that was previously compressed with ZipNN. [This commit](https://github.com/huggingface/transformers/commit/607982fea2ff9cb381d1038adc6bd22c1fe58267) only extends the functionality of load_state_dict(), making sure to load the model and decompress it as efficiently as possible by decompressing in chunks and by avoiding unnecessary I/O requests.
In modeling_utils.load_state_dict():
```python
checkpoint_bytes = b""
if checkpoint_file.endswith(".znn"):
    output_file = checkpoint_file.replace(".znn", "")
    if not os.path.exists(output_file):
        try:
            from zipnn import ZipNN
        except ImportError:
            raise ImportError("To load a zipped checkpoint file, you need to install zipnn.")
        znn = ZipNN(is_streaming=True)
        with open(checkpoint_file, "rb") as infile:
            chunk = infile.read()
            checkpoint_bytes += znn.decompress(chunk)
    else:
        with open(output_file, "rb") as infile:
            checkpoint_bytes += infile.read()
```
**This is a proof of concept**, currently only supporting sharded models whose index.json has been modified to use .znn suffixes (as seen in this [ZipNN compressed Llama 3.2 example on Hugging Face](https://huggingface.co/royleibov/Llama-3.2-11B-Vision-Instruct-ZipNN-Compressed/blob/main/model.safetensors.index.json)), whether safetensors or any other file type. Support for all single files can be readily added by adding individual checks in modeling_utils.PreTrainedModel.from_pretrained() or by changing utils.hub.cached_file() to check for a .znn filepath.
A working version of all edge cases can be found in ZipNN's [zipnn_hf() plugin](https://github.com/zipnn/zipnn/blob/ffa5b9f6d2a55fb2b2fd460995fc81e1283d0954/zipnn/zipnn.py#L1081).
**Additionally, to allow users to decompress only once**, the plugin has a flag `zipnn_hf(replace_local_file=True)` that locally saves the decompressed model in the cache, reorders the symlinks, and fixes any index.json accordingly if there is one. This functionality could equivalently be provided by adding a flag to from_pretrained().
| Discussion,Feature request | low | Critical |
2,660,019,106 | vscode | Giving files as context to Copilot autocompletion | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I'd love it if it was possible to give files as context to the Copilot autocompletion, like you can for chat. I've found the options around [customizing the prompt](https://code.visualstudio.com/docs/copilot/copilot-settings#_customize-copilot-prompts) but my use case is that a lot of individual files in my codebase have a matching PDF spec, and I'd love to be able to give the spec as context for autocompletion in the individual files.
| feature-request,inline-completions | low | Minor |
2,660,040,648 | deno | deno compile: npm package node-red seems to not work | deno 2.0.6 (stable, release, x86_64-pc-windows-msvc)
v8 12.9.202.13-rusty
typescript 5.6.2
current latest node-red: 4.0.5
# Steps to reproduce:
1. `deno install npm:node-red`
2. `deno run --allow-all npm:node-red --userDir ./nodeRedData` (seems to work fine)
3. delete the ./nodeRedData
4. `deno compile -o nodered.exe --allow-all npm:node-red --userDir ./nodeRedData`
5. `.\nodered.exe`
```powershell
PS F:\programming\deno-compile-test> .\nodered.exe
error: Uncaught (in promise) TypeError: Cannot read properties of null (reading 'getTime')
at Object.<anonymous> (file:///C:/Users/MY_NAME/AppData/Local/Temp/deno-compile-nodered.exe/.deno_compile_node_modules/localhost/node-red/4.0.5/red.js:145:36)
at Object.<anonymous> (file:///C:/Users/MY_NAME/AppData/Local/Temp/deno-compile-nodered.exe/.deno_compile_node_modules/localhost/node-red/4.0.5/red.js:565:4)
at Module._compile (node:module:745:34)
at Object.Module._extensions..js (node:module:762:10)
at Module.load (node:module:662:32)
at Function.Module._load (node:module:534:12)
at file:///C:/Users/MY_NAME/AppData/Local/Temp/deno-compile-nodered.exe/.deno_compile_node_modules/localhost/node-red/4.0.5/red.js:5:32
```
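As a guess at the failure mode (an assumption based on the stack trace, not verified against red.js), this is the classic shape of calling `.getTime()` on a `null` date, for example a missing file timestamp inside the compiled virtual filesystem:

```javascript
// Minimal reduction of the reported TypeError.
const stats = { mtime: null }; // what a stat-like object may contain here
let err = null;
try {
  stats.mtime.getTime();
} catch (e) {
  err = e;
}
console.log(err instanceof TypeError); // true
console.log(err.message);              // "Cannot read properties of null (reading 'getTime')"
```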
# My expectation
I expect the compiled binary to behave the same as the `deno run` command above, but it just throws an error. | needs investigation,compile,node compat | low | Critical |
2,660,041,717 | svelte | Feat: `$effect.async`, `$derived.async` | ### Describe the problem
There have been several discussions about the issues of runes and `async`, for example, #13916. Many apps depend on async code, and it can become difficult to migrate to Svelte 5 because of this.
### Describe the proposed solution
It would be useful to have two asynchronous runes: `$effect.async` and `$derived.async`. These functions would be asynchronous counterparts to `$effect` and `$derived.by`, respectively. Here's an example:
```js
let fetched = $derived.async(async () => {
  let res = await fetch('./example.json');
  return await res.json();
});

$effect.async(async () => {
  let asyncValue = await getAsyncValue();
  if (asyncValue === preferredAsyncValue) {
    runCallback();
  }
});
```
You may notice that these functions do not require `await` at the call site; this is because the `await` would be inserted at compile time:
```js
let thing = $derived.async(asyncValue);
$inspect(thing);
// turns into
let thing = await $.derived_async(asyncValue);
$.inspect_async(async () => [await $.get_async(thing)]);
```
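Outside the compiler, the caching behavior such a rune implies can be sketched in plain JS (illustrative only; `derivedAsync` is a made-up helper here, not proposed API):

```javascript
// An async derived value: compute lazily, cache the in-flight promise,
// and recompute only after invalidation.
function derivedAsync(fn) {
  let cached = null;
  return {
    get value() {
      if (cached === null) cached = fn(); // fn returns a Promise
      return cached;
    },
    invalidate() {
      cached = null;
    },
  };
}

const d = derivedAsync(async () => 40 + 2);
d.value.then((v) => console.log(v)); // 42
```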
### Importance
would make my life easier | runes | low | Major |
2,660,061,441 | react-native | Emojis inside <Text> increase line-height or stretch the element on iOS | ### Description
Reported previously in #18559. Emojis within `<Text>` elements are not aligned with other text in the same text element or other text elements on screen. They also cause the height of the `<Text>` to increase disproportionately to the `fontSize`.
As a workaround, the `fontFamily` of the emoji can be set to `System`. The referenced Snack shows this behavior. The emoji is rendered initially with the `Menlo` font family and overridden to `System` when the override button is pressed. When overridden, the strikethrough more closely aligns with the text, though is not perfectly aligned like on Android.
This issue no longer seems to be reproducible on Android.
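A reconstructed sketch of what the Snack likely renders (illustrative, based on the description above, not the Snack's exact source):

```jsx
import { Text, View } from 'react-native';

export default function App() {
  return (
    <View>
      {/* Emoji under Menlo: the strikethrough drifts and the line stretches on iOS */}
      <Text style={{ fontFamily: 'Menlo', textDecorationLine: 'line-through' }}>
        ✅ some struck-through text
      </Text>
      {/* Workaround: render only the emoji with the System font family */}
      <Text style={{ fontFamily: 'Menlo', textDecorationLine: 'line-through' }}>
        <Text style={{ fontFamily: 'System' }}>✅</Text> some struck-through text
      </Text>
    </View>
  );
}
```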
### Steps to reproduce
1. Launch the Snack
2. Note that the strikethrough does not align with the emoji nor the text to the right
3. Click Override
4. Note that the strikethrough more closely aligns with the emoji and the text on the right
### React Native Version
0.76.1
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
  OS: macOS 14.7.1
  CPU: (16) arm64 Apple M3 Max
  Memory: 3.73 GB / 48.00 GB
  Shell:
    version: "5.9"
    path: /bin/zsh
Binaries:
  Node:
    version: 18.20.5
    path: /opt/homebrew/opt/node@18/bin/node
  Yarn:
    version: 1.22.22
    path: /opt/homebrew/bin/yarn
  npm:
    version: 10.8.2
    path: /opt/homebrew/opt/node@18/bin/npm
  Watchman:
    version: 2024.11.11.00
    path: /opt/homebrew/bin/watchman
Managers:
  CocoaPods:
    version: 1.16.2
    path: /opt/homebrew/bin/pod
SDKs:
  iOS SDK:
    Platforms:
      - DriverKit 24.1
      - iOS 18.1
      - macOS 15.1
      - tvOS 18.1
      - visionOS 2.1
      - watchOS 11.1
  Android SDK: Not Found
IDEs:
  Android Studio: Not Found
  Xcode:
    version: 16.1/16B40
    path: /usr/bin/xcodebuild
Languages:
  Java: Not Found
  Ruby:
    version: 2.6.10
    path: /usr/bin/ruby
npmPackages:
  "@react-native-community/cli":
    installed: 15.1.2
    wanted: ^15.1.2
  react:
    installed: 18.3.1
    wanted: 18.3.1
  react-native:
    installed: 0.76.1
    wanted: 0.76.1
  react-native-macos: Not Found
npmGlobalPackages:
  "*react-native*": Not Found
Android:
  hermesEnabled: Not found
  newArchEnabled: Not found
iOS:
  hermesEnabled: Not found
  newArchEnabled: Not found
```
### Stacktrace or Logs
```text
None
```
### Reproducer
https://snack.expo.dev/@mhoran/hazardous-orange-celery
### Screenshots and Videos
|Without override|With override|
|---|---|
|<img width="307" alt="Screenshot 2024-11-14 at 4 02 20 PM" src="https://github.com/user-attachments/assets/93945819-77c4-432d-882a-3d9c696b4d3d">|<img width="299" alt="Screenshot 2024-11-14 at 4 02 27 PM" src="https://github.com/user-attachments/assets/5425aac6-f001-4c3a-9332-01677f2b921d">|
| Platform: iOS,Needs: Triage :mag:,Newer Patch Available | low | Major |
2,660,071,706 | rust | “Summary” button in docs doesn’t react to description being collapsed | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
If you collapse the description manually, the summary button should change to “Show all”, since otherwise it does nothing.
### Meta
Observed on doc.rust-lang.org/nightly
https://github.com/user-attachments/assets/2bf0da56-ec3d-40a8-b237-90f3d813a144
| T-rustdoc,C-bug,A-rustdoc-ui,T-rustdoc-frontend | low | Critical |
2,660,091,892 | next.js | Docs: Should App/ Page router switcher dropdown redirect to associate pages | ### What is the documentation issue?
# Could this be made clearer?
Hi, I'm sure this is a small nitpick, but we are in the process of migrating from the `Pages router` to the `App router`.
While looking through the documentation for [route-segment-config](https://nextjs.org/docs/13/app/api-reference/file-conventions/route-segment-config),
I noticed that if you are on a documentation page for `App` router features and then switch the dropdown to the `Pages` router, it still keeps you on the `App` router document. I think that could confuse newcomers into believing that some App router features also work in the Pages router.
Would it be valuable to redirect to the associated page when switching router versions?
### Is there any context that might help us understand?
I noticed this while comparing cache invalidation methods between the Pages router and the new App router.
Go to https://nextjs.org/docs/13/app/api-reference/file-conventions/route-segment-config
In this example below, you are at App dir related docs

Try switching the dropdown to the Pages router.

It will take you to the same page, which IMO could cause confusion for newcomers.
If this is an issue worth addressing, please let me know; I would love to contribute in some way.
Thanks.
### Does the docs page already exist? Please link to it.
https://nextjs.org/docs/13/app/api-reference/file-conventions/route-segment-config | linear: docs | low | Minor |
2,660,092,557 | react | [Compiler Bug]: eslint plugin rule only applied to first instance inside a file | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhHCA7MAXABAMwglwF5cAKASlID4LqS7gBfAHQ3YQA8AHCGPABME+AIZQANnnxQMcbAEtMuALIBPAII8eVXMHa5c6LNKIBhURIkAjUXADWpXACUEd7ADooYBBau2HckIIKkoDI0wcXFsYPxs7RzJXdy8fOID7IKJQ9nCYBGxYDFwAHkEFADcaAAkEK2IAdX4JQRKAenKqgG52NgwQZiA
### Repro steps
For the rule 'Expected the first argument to be an inline function expression', it is only applied to the first violation even if there is more than one violation. (Example in playground link.) Moreover, on my setup if I eslint-disable the first violation and then eslint-enable after, then subsequent violations still aren't shown. I couldn't get eslint-disable to work in the playground though.
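A reconstructed sketch of the shape in the playground (names invented here; the point is two violations of the same rule in one file):

```js
import { useEffect } from "react";

function Component({ onA, onB }) {
  useEffect(onA); // flagged: "Expected the first argument to be an inline function expression"
  useEffect(onB); // an identical violation, but not reported
  return null;
}
```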
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
n/a eslint-plugin issue | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,660,143,192 | go | proposal: os: add (*Process).Handle method | ### Proposal Details
This is a continuation of #62654, in particular its item 6, described in there as:
> 6. (Optional) Add `(*Process).Handle() uintptr` method to return process handle on Windows and pidfd on Linux. This might be useful for low-level operations that require handle/pidfd (e.g. pidfd_getfd on Linux), or to ensure that pidfd (rather than pid) is being used for kill/wait.
A similar thing was proposed earlier [here](https://github.com/golang/go/issues/51246#issuecomment-1050117436) by @prattmic:
> A new `os.Process.Fd()` could return the pid FD for additional direct use.
## Proposal
Add a new `Handle` method for `os.Process`, which returns a process handle, if available, and a boolean flag telling whether the handle is valid. On Linux, the handle is a file descriptor referring to the process (a pidfd). On Windows, it is a handle to the process.
```go
func (p *Process) Handle() (uintptr, bool)
```
## Use cases
### 1. Check if pidfd is being used on Linux.
Since Go 1.23, pidfd is used for os.Process-related operations instead of pid, if supported by the Linux kernel. This includes `os.StartProcess` and `os.FindProcess` (they obtain a pidfd), as well as `(*Process).Wait`, `(*Process).Signal`, and `(*Process).Kill` (they use pidfd). The main benefit of pidfd in the use cases above is a guarantee we're referring to the same process (i.e. there's no pid reuse issue).
However, since this is done in a fully transparent way, there is no way for a user to know if pidfd is being used or not. Some programs implement some protection against pid reuse (for example, `runc` and `cri-o` obtain and check process start time from `/proc/<pid>/stat`). They can benefit from being able to know if Go is using pidfd internally.
Another example is containerd, which relies on Go 1.23 using pidfd internally, but since there's no way to check, they had to recreate all the functionality checking for pidfd support [here](https://github.com/containerd/containerd/blob/main/pkg/sys/pidfd_linux.go) (which is still not 100% correct, since the checks are slightly different from those in Go's [checkPidfd](https://github.com/golang/go/blob/672a53def7e94b4d26049c5cd44dda5d7f1a46ff/src/os/pidfd_linux.go#L154), and Go's checks may change over time). Cc @fuweid.
With the proposed interface, a user can easily check if pidfd is being used:
```go
p, err := os.FindProcess(pid)
...
if _, ok := p.Handle(); ok {
// pidfd is used internally by os.Process methods.
}
```
### 2. Obtain a pidfd for additional direct use.
Aside from use cases already covered by existing `os.Process` methods, pidfd can also be used to:
- obtain a duplicate of a file descriptor of another process ([pidfd_getfd(2)](https://man7.org/linux/man-pages/man2/pidfd_getfd.2.html));
- select/poll/epoll on a pidfd to know when a process is terminated;
- move into one or more of the same namespaces as the process referred to by the file descriptor ([setns(2)](https://man7.org/linux/man-pages/man2/setns.2.html)).
Other use cases may emerge in the future.
Currently, the only way to obtain a pidfd on Linux is to execute a new process (via `os.StartProcess` or `os/exec`) with process' `Attr.SysAttr.PidFD` field set. This works if we're starting the process, but not in any other case (someone else starts a process for us, or it is already running).
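For reference, the spawn-time route mentioned above looks roughly like this (a sketch; Linux-only, and the `PidFD` field on `syscall.SysProcAttr` requires Go 1.22 or newer):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// spawnWithPidfd starts a short-lived child and asks the kernel to return
// a pidfd for it via SysProcAttr.PidFD.
func spawnWithPidfd() (int, error) {
	pidfd := -1
	p, err := os.StartProcess("/bin/true", []string{"true"}, &os.ProcAttr{
		Sys: &syscall.SysProcAttr{PidFD: &pidfd},
	})
	if err != nil {
		return -1, err
	}
	defer p.Wait()
	return pidfd, nil
}

func main() {
	pidfd, err := spawnWithPidfd()
	if err != nil {
		panic(err)
	}
	// On kernels with pidfd support, the kernel filled in a valid descriptor.
	fmt.Println(pidfd >= 0)
}
```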
## Questions
#### 1. What are (could be) the additional direct use cases of Windows process handle?
A few are [listed here](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-openprocess). A quick git grep shows that some (GetPriorityClass, SetPriorityClass, AssignProcessToJobObject) are already implemented in golang.org/x/sys/windows.
#### 2. Should a duplicate of a handle be returned, or the original handle?
Return the original one, ensuring that `Handle` documentation describes when pidfd may become invalid.
Arguments against duplicated handle:
- a duplicated pidfd makes the "check if pidfd is being used" use case above more complicated as the user will need to close the returned pidfd;
- a user can always use dupfd if/when needed;
- the returned handle won't leak, as it is still part of the os.Process struct, which has a `Release` method and a proper finalizer;
- [os.File.Fd](https://pkg.go.dev/os#File.Fd) returns the original underlying fd, pidfd is similar.
Arguments for duplicated handle:
- cleaner separation of responsibilities(?);
- a Windows process handle [can be duplicated](https://learn.microsoft.com/en-us/windows/win32/api/handleapi/nf-handleapi-duplicatehandle);
#### 3. Should Handle return `*os.File` rather than `uintptr`?
A raw handle makes more sense in this case, and the finalizer set by `NewFile` would not make sense if the original handle is returned. Also, this won't work on Windows, where a process handle is not a file.
#### 4. Should this be Linux-specific?
Probably not. Since we have a boolean flag returned, we can implement it for all platforms and return `0, false` for those that do not support process handle (other than Windows and Linux). | Proposal | low | Minor |
2,660,150,424 | ui | I have two sidebar how to trigger each of them | I have two sidebar how to trigger each of them
I have a right sidebar and a left sidebar; how can I have a close button for each one separately? | area: request | low | Major |
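One possible approach (a sketch assuming shadcn/ui's sidebar primitives `SidebarProvider`, `Sidebar`, and `SidebarTrigger`, and that each provider controls the sidebar nested inside it; verify against your generated components):

```tsx
import * as React from "react";
import { Sidebar, SidebarProvider, SidebarTrigger } from "@/components/ui/sidebar";

export function TwoSidebarLayout({ children }: { children: React.ReactNode }) {
  return (
    <SidebarProvider> {/* state for the left sidebar */}
      <Sidebar side="left">{/* left sidebar content */}</Sidebar>
      <SidebarTrigger /> {/* toggles only the left sidebar */}
      <SidebarProvider> {/* separate state for the right sidebar */}
        <main>{children}</main>
        <SidebarTrigger /> {/* toggles only the right sidebar */}
        <Sidebar side="right">{/* right sidebar content */}</Sidebar>
      </SidebarProvider>
    </SidebarProvider>
  );
}
```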
2,660,151,985 | flutter | [dash forum?] Adopting new features for readability | ### Extension types
- https://github.com/flutter/flutter/pull/158466
<br>
(plus a couple of other topics, [see below](https://github.com/flutter/flutter/issues/158954#issuecomment-2484307494)!) | team,P3,team-framework,triaged-framework,d: docs/ | low | Major |
2,660,167,110 | rust | Large arrays of enum variants causes polonius to have massive performance issues | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
When compiling the following with `-Z polonius`, compilation takes a lot longer than it probably should (an entire 4 seconds on my machine) and it gets much worse with more elements in the array.
```rust
#[derive(Clone, Copy)]
pub enum A {
B,
}
use A::B;
static PROBLEM: [A; 736] = [
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B, B,
];
pub fn silly(b: usize) -> A {
PROBLEM[b]
}
```
This is a concern because crates like `unicode-linebreak` use large lookup tables of enum variants. In fact, `unicode-linebreak` with its 12996-element lookup table took longer than 30 minutes to compile on my machine before I had to stop it short.
Strangely enough, changing the type of `PROBLEM` to `&[A; 736]` makes this performance issue not happen.
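For illustration, the reference-typed workaround mentioned above can be sketched like this (hedged: the table is shrunk to a hypothetical 4 elements to keep the sketch short; the real table has 736):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum A {
    B,
}
use A::B;

// Borrow the table as a `&'static` reference instead of storing the
// array by value; per the report above, this sidesteps the slowdown.
static PROBLEM: &[A; 4] = &[B, B, B, B];

fn silly(b: usize) -> A {
    PROBLEM[b]
}

fn main() {
    assert_eq!(silly(2), B);
}
```

The lookup itself is unchanged; only the static's type differs.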
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (8adb4b30f 2024-11-13)
binary: rustc
commit-hash: 8adb4b30f40e6fbd21dc1ba26c3301c7eeb6de3c
commit-date: 2024-11-13
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
| I-compiletime,A-borrow-checker,T-compiler,C-bug,NLL-polonius,requires-nightly,T-types | low | Critical |
2,660,172,566 | kubernetes | kubectl edit: "You can run `kubectl replace -f FILE` to try this update again" misleading if user passed flags such as `--context` | ### What happened?
I ran `kubectl --context=dev edit statefulset/kafka` and got a permissions error (my GKE user did not have appropriate permissions). I had used `--context=dev` to select a particular cluster.
After the permissions error, kubectl printed
```
You can run `kubectl replace -f /var/folders/5y/55wpzs4n79v91k_2jf35354w0000gp/T/kubectl-edit-10910983.yaml` to try this update again.
```
I fixed my permission error (by giving my GKE user appropriate permissions) and ran the command above, but I got this error:
```
Error from server (Conflict): error when replacing "/var/folders/5y/55wpzs4n79v91k_2jf35354w0000gp/T/kubectl-edit-10910983.yaml": Operation cannot be fulfilled on statefulsets.apps "kafka": StorageError: invalid object, Code: 4, Key: /registry/statefulsets/kafka-default/kafka, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 545e2602-484d-4c17-91ce-52bddf06e81e, UID in object meta: a280573a-fcd0-4771-a302-c48d1d47972f
```
That is because the original `edit` command I ran included `--context=dev` and the suggested "try again" command did not — I was sending this command to the wrong cluster!
### What did you expect to happen?
Running the command printed by `kubectl edit` would run the same operation as my original operation, not talk to a different cluster.
Either:
- kubectl recognizes that relevant options like `--context` were passed by the user and includes them in the suggested command
- kubectl recognizes that relevant options like `--context` were passed by the user and decides not to suggest a command at all if it doesn't want to reproduce them
- kubectl includes more context from the original command in the file it writes and the command it suggests uses that full context
- The message printed could explicitly call out that you need to set things like context in the same way as the original command.
From my perspective, it's not particularly obvious that you *do* have to add `--context` yourself to the follow-up command but you *don't* have to add `--namespace`. I can reason it out based on having a somewhat sophisticated mental model of k8s/kubectl but telling people to run a write command that might talk to the wrong cluster seems like something to avoid!
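A hedged Go sketch of the first option above, carrying user-supplied connection flags such as `--context` into the suggested retry command. `suggestReplace` and its signature are illustrative only, not kubectl's actual internals:

```go
package main

import (
	"fmt"
	"strings"
)

// suggestReplace builds the "try this update again" hint, threading
// through any connection-selecting flags the user originally passed
// (e.g. --context), so the retry targets the same cluster.
func suggestReplace(file string, passthroughFlags map[string]string) string {
	parts := []string{"kubectl"}
	for flag, val := range passthroughFlags {
		parts = append(parts, fmt.Sprintf("--%s=%s", flag, val))
	}
	parts = append(parts, "replace", "-f", file)
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(suggestReplace("/tmp/kubectl-edit.yaml",
		map[string]string{"context": "dev"}))
}
```

With `--context=dev` recorded, the printed hint would include it, so pasting the command cannot silently hit a different cluster.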
### How can we reproduce it (as minimally and precisely as possible)?
The issue is pretty clear from [the source](https://github.com/kubernetes/kubernetes/blob/475ee33f698334e5b00c58d3bef4083840ec12c5/staging/src/k8s.io/kubectl/pkg/cmd/util/editor/editoptions.go#L400C28-L400C83): the suggested command never includes extra flags like `--context`.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.9-gke.1496000
```
</details>
### Cloud provider
<details>
GKE
</details>
### OS version
<details>
```console
Darwin Davids-MacBook-Pro.local 22.6.0 Darwin Kernel Version 22.6.0: Thu Sep 5 20:47:01 PDT 2024; root:xnu-8796.141.3.708.1~1/RELEASE_ARM64_T6000 arm64
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/cli,needs-triage | low | Critical |
2,660,209,938 | flutter | CPU sampling broken for Dart CLI applications with VM developer options enabled. | Repro.
1. Run the flutter tool from source
```
cd packages/flutter_tools
dart --pause-isolates-on-start --observe lib/executable.dart doctor
```
Connect and open devtools
```
jonahwilliams-macbookpro3:flutter_tools jonahwilliams$ dart --pause-isolates-on-start --pause-isolates-on-exit --observe lib/executable.dart doctor
The Dart VM service is listening on http://127.0.0.1:8181/fyaRaipVUnQ=/
The Dart DevTools debugger and profiler is available at: http://127.0.0.1:8181/fyaRaipVUnQ=/devtools/?uri=ws://127.0.0.1:8181/fyaRaipVUnQ=/ws
vm-service: isolate(4954098837581171) 'main' has no debugger attached and is paused at start. Connect to the Dart VM service at http://127.0.0.1:8181/fyaRaipVUnQ=/ to debug.
^C
```
Resume the isolate and click "start recording". Let this run for a second or so and then click stop.

Problems:
1. Even with `--pause-isolates-on-exit`, the Dart isolate immediately exits, which causes DevTools to stop working. I hacked around this by adding an await:
```diff
diff --git a/packages/flutter_tools/lib/runner.dart b/packages/flutter_tools/lib/runner.dart
index f1a46f9a9a..268ee416e0 100644
--- a/packages/flutter_tools/lib/runner.dart
+++ b/packages/flutter_tools/lib/runner.dart
@@ -3,6 +3,7 @@
// found in the LICENSE file.
import 'dart:async';
+import 'dart:developer' as dev;
import 'package:args/command_runner.dart';
import 'package:intl/intl.dart' as intl;
@@ -130,6 +131,8 @@ Future<int> run(
await runner.run(args);
+ await Future.delayed(Duration(hours: 16));
+
// Triggering [runZoned]'s error callback does not necessarily mean that
// we stopped executing the body. See https://github.com/dart-lang/sdk/issues/42150.
if (firstError == null) {
```
2. The CPU profile will never load, even though the isolate is not paused. If I open the browser console, I see what looks like parsing errors

I am running this on the beta channel, meaning the next stable.
```
[!] Flutter (Channel [user-branch], 3.27.0-1.0.pre.4, on macOS 14.7.1 23H222 darwin-arm64, locale en)
! Flutter version 3.27.0-1.0.pre.4 on channel [user-branch] at /Users/jonahwilliams/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
```
| tool,dependency: dart,d: devtools,P2,team-tool,dependency:dart-triaged | low | Critical |
2,660,225,195 | pytorch | torch.clear_autocast_cache is not traceable | ### 🐛 Describe the bug
When trying to `torch.compile` a module that contains `torch.clear_autocast_cache`, we get the attached error. I believe this is expected, but I'm wondering whether there is an established workaround.
### Error logs
```
UserWarning: Graph break due to unsupported builtin torch.clear_autocast_cache. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
```
### Versions
```
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.4 (main, Nov 7 2024, 03:58:50) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1025-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-111
Off-line CPU(s) list: 112-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 0.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.0
[pip3] torch_scatter==2.1.2+pt23cu121
[pip3] torchmetrics==1.5.1
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,660,275,830 | terminal | GPT-4o | ### Description of the new feature
I would appreciate the option to switch the ChatGPT model to 4o, as I have better results with it when programming and communicating in non-English languages.
### Proposed technical implementation details
_No response_ | Issue-Feature,Product-Terminal,Needs-Tag-Fix,Area-Chat | low | Minor |
2,660,308,895 | godot | The lighting is breaking | ### Tested versions
4.4 dev4
### System information
Godot v4.4.dev4 - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 threads)
### Issue description
I noticed a few issues when migrating my project from 4.3 stable to 4.4 dev4.
1. Lighting in 4.4 dev4 does not look as intended:
4.3 (Good):

4.4 dev4 (Bad):

2. I also noticed another problem when I cleared the project cache to reduce its size: after deleting the .godot folder and all .import files, the lighting looked like this on both 4.3 and 4.4:

### Steps to reproduce
1. Open MRP on 4.3 to check lighting
2. Now open it on 4.4 (Lighting is broken)
3. Delete project cache and open project again (Another lighting breakage)
### Minimal reproduction project (MRP)
[prj.zip](https://github.com/user-attachments/files/17759024/prj.zip) - 14.1 mb
| bug,topic:rendering,regression,topic:3d | low | Critical |
2,660,359,173 | pytorch | KeyError in default_cache_dir() when user account doesn't exist | ### 🐛 Describe the bug
The `torch._inductor` package creates a cache directory. If the `TORCHINDUCTOR_CACHE_DIR` env variable is not set, it defaults to `/tmp/torchinductor_{username}`, where `username` is determined from the python standard library `getpass.getuser()` function.
This function raises a `KeyError` if the user account does not exist. This is a common situation in production deployments where containers are often forced to run as an ordinary user for security reasons, but the user account isn't created in the container with `useradd` or similar.
It would be helpful to fall back to `/tmp/torchinductor` (or a similar non-user-specific path) if getting the user name fails. The username part doesn't seem especially necessary, since I don't imagine that "multiple users on a shared machine" is the most common usage context.
Setting `TORCHINDUCTOR_CACHE_DIR` does work around the problem.
### Error logs
```
File "/usr/local/lib/python3.12/dist-packages/torch/_inductor/runtime/runtime_utils.py", line 137, in cache_dir
sanitized_username = re.sub(r'[\\/:*?"<>|]', "_", getpass.getuser())
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/getpass.py", line 169, in getuser
return pwd.getpwuid(os.getuid())[0]
^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
KeyError: 'getpwuid(): uid not found: 1001'
```
### Versions
Unable to run in production config, but the problematic code is present in `main` currently:
https://github.com/pytorch/pytorch/blob/e90888a93d9ebda9978b2828640a9372388dd74a/torch/_inductor/runtime/cache_dir_utils.py#L18-L23
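A minimal sketch of the suggested fallback. This is illustrative only, not the actual `torch._inductor` implementation; the function name mirrors the report, but the body is a hypothetical fix:

```python
import getpass
import os
import re
import tempfile


def default_cache_dir() -> str:
    # Fall back to a cache dir without a username suffix when the
    # account lookup fails, e.g. for a container uid with no passwd
    # entry. getpwuid() raises KeyError in that case; newer Pythons
    # may surface OSError, so both are caught defensively.
    try:
        sanitized_username = re.sub(r'[\\/:*?"<>|]', "_", getpass.getuser())
        suffix = f"_{sanitized_username}"
    except (KeyError, OSError):
        suffix = ""
    return os.path.join(tempfile.gettempdir(), f"torchinductor{suffix}")
```

The username suffix is kept when available, so the multi-user case still works, while the missing-account case degrades gracefully instead of crashing compilation.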
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,660,367,486 | tensorflow | Issue on inference of converted to tflight Super Resolution model | ### 1. System information
- OS Linux Ubuntu 22.04
- TensorFlow installation from sources
- TensorFlow library version 2.16
### 2. Code
I converted a model from TensorFlow to TFLite. I had to use Flex TF ops, since not all layers could be converted initially,
but the model was finally converted successfully, without errors, with **TF ops**.
On inference I hit the following issue:
```
RuntimeError: tensorflow/lite/kernels/reshape.cc:92 num_input_elements != num_output_elements (0 != 8)Node number 0 (RESHAPE) failed to prepare.Node number 360 (IF) failed to prepare.
```
and cannot use this model. Please help with this issue!
Initial BasicVSR-based model:
[ESWT-12-12_LSR_x4.pth.zip](https://github.com/user-attachments/files/17759174/ESWT-12-12_LSR_x4.pth.zip)
TF model:
[sr.tf.zip](https://github.com/user-attachments/files/17758924/sr.tf.zip)
TFLite model:
[sr_12-12.tflight.zip](https://github.com/user-attachments/files/17759164/sr_12-12.tflight.zip)
This model originally comes from the Fried-Rice-Lab Super Resolution model based on BasicVSR:
[Fried-Rice-Lab](https://github.com/Fried-Rice-Lab/FriedRiceLab?tab=readme-ov-file).
I downloaded ESWT-12-12_LSR_x4.pth from their page: [Google Drive](https://1drv.ms/u/s!AqKlMh-sml1mw362MfEjdr7orzds?e=budrUU)
This model was converted via the scheme pth -> onnx -> tf -> tflite
Conversion script
```python
import numpy as np
import torch
from basicsr.models import build_model
from .utils import get_config
import onnx
import torchvision
import onnx_tf
import tensorflow as tf
from onnx import helper

def __init__(self, model_config_path, task_config_path, checkpoint_path):
    self.opt = get_config(model_config_path, task_config_path, checkpoint_path)
    self.device = torch.device('cpu')
    self.model = build_model(self.opt).net_g.to(self.device).to(torch.float32).eval()
    self.saveModel(self.model)

def saveModel(self, model):
    modelName = "sr"
    input_shape = (1, 3, 256, 256)
    torch.onnx.export(model, torch.randn(input_shape), modelName + '-new.onnx', opset_version=12, input_names=['input'], output_names=['output'])
    onnx_model = onnx.load(modelName + '-new.onnx')
    # Convert ONNX model to TensorFlow format
    tf_model = onnx_tf.backend.prepare(onnx_model)
    # Export TensorFlow model
    tf_model.export_graph(modelName + '.tf')
    converter = tf.lite.TFLiteConverter.from_saved_model(modelName + '.tf')
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS
    ]
    tflite_model = converter.convert()
    open(modelName + '.tflite', 'wb').write(tflite_model)
```
The attached sr.tf model works fine and generates an adequate super-resolution result (inference script below),
but sr_12-12.tflite inference crashes with the issue above.
### 3. Failure after conversion
If the conversion is successful, but the generated model is wrong, then state what is wrong:
- Model inference crashes unexpectedly.
```
packages/tensorflow/lite/python/interpreter.py", line 941, in invoke
    self._interpreter.Invoke()
RuntimeError: tensorflow/lite/kernels/reshape.cc:92 num_input_elements != num_output_elements (0 != 8)Node number 0
(RESHAPE) failed to prepare.Node number 360 (IF) failed to prepare.
```
### 4. (optional) RNN conversion support
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.
### 5. (optional) Any other info / logs
Error log
```
2024-11-15 00:07:48.908511: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:gpu:0 with 3539 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4050 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.9
Traceback (most recent call last):
  File "/home/me/infer/tflightRun.py", line 40, in <module>
    res = model.predict(tensorflow_tensor)[0]
  File "/home/me/infer/tflightRun.py", line 23, in predict
    self.interpreter.invoke()
  File "/home/me/.local/lib/python3.10/site-packages/tensorflow/lite/python/interpreter.py", line 941, in invoke
    self._interpreter.Invoke()
RuntimeError: tensorflow/lite/kernels/reshape.cc:92 num_input_elements != num_output_elements (0 != 8)Node number 0 (RESHAPE) failed to prepare.Node number 360 (IF) failed to prepare.
```
Script for running TF inference

```python
import tensorflow as tf
import numpy as np
from PIL import Image
import PIL
import torch
import torchvision
import torchvision.transforms as T

def swapChannelsInput(input_tensor):
    input_tensor = input_tensor[tf.newaxis, ...]
    out = input_tensor.numpy()
    torchTensor = torch.from_numpy(out)
    torchTensor = torchTensor.permute(0, 3, 1, 2)
    np_arr = torchTensor.detach().cpu().numpy()
    tensorflow_tensor = tf.constant(np_arr)
    return tensorflow_tensor

def showOutput(res):
    res = tf.squeeze(res)
    res = res.numpy()
    torchTensorRes = torch.from_numpy(res)
    torchTensorRes = torchTensorRes.permute(1, 2, 0)
    resFinal = torchTensorRes.detach().cpu().numpy()
    return PIL.Image.fromarray(resFinal.astype(np.uint8))

extraction_path = "sr.tf/"
test_image_path = "frame0.jpg"
model = tf.saved_model.load(extraction_path)
infer = model.signatures["serving_default"]
image_np = np.array(Image.open(test_image_path))
input_tensor = tf.convert_to_tensor(image_np, tf.float32)
input_tensor = swapChannelsInput(input_tensor)
res = infer(tf.constant(input_tensor))['output']
showOutput(res).show()
```
Script for launching TFLite inference

```python
import tensorflow as tf
import numpy as np
import cv2
from PIL import Image
import PIL
import torch
import torchvision

class TFLiteModel:
    def __init__(self, model_path: str):
        self.interpreter = tf.lite.Interpreter(model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def predict(self, *data_args):
        assert len(data_args) == len(self.input_details)
        for data, details in zip(data_args, self.input_details):
            self.interpreter.set_tensor(details["index"], data)
        self.interpreter.invoke()
        return self.interpreter.get_tensor(self.output_details[0]["index"])

model = TFLiteModel("sr_12-12.tflite")
test_image_path = "frame0.jpg"
image_np = np.array(Image.open(test_image_path))
input_tensor = tf.convert_to_tensor(image_np, tf.float32)
input_tensor = input_tensor[tf.newaxis, ...]
out = input_tensor.numpy()
torchTensor = torch.from_numpy(out)
torchTensor = torchTensor.permute(0, 3, 1, 2)
np_arr = torchTensor.detach().cpu().numpy()
tensorflow_tensor = tf.constant(np_arr)
res = model.predict(tensorflow_tensor)[0]
```
| stat:awaiting response,type:bug,stale,comp:lite,TFLiteConverter,TF 2.16 | low | Critical |
2,660,383,638 | storybook | [Bug]: bare import path resolution doesn't work under Yarn PNPM linker | ### Describe the bug
SB attempts to strip all leading `node_modules/` components off of the path of an import to get Vite to bundle correctly:
https://github.com/storybookjs/storybook/blob/33e439766251689d3b30be4f532d44294a023c16/code/core/src/common/utils/strip-abs-node-modules-path.ts#L9-L17
This doesn't work correctly under the Yarn [PNPM linker](https://yarnpkg.com/features/linkers#nodelinker-pnpm), which constructs a `node_modules` tree via symlinks into a `node_modules/.store` directory. So, all packages are stored under `/node_modules/.store/<packagename>-virtual-<hash>/package/`, and `require.resolve` will return absolute paths into here. Thus, `stripAbsNodeModulesPath` is called since `node_modules` is in the path, and ti will return a path like `.store/...`, which is not a valid import path. Vite then throws errors about these paths:
```
3:37:05 PM [vite] Pre-transform error: Failed to resolve import ".store/@storybook-react-virtual-d2ec5c3452/package/dist/entry-preview.mjs" from "/virtual:/@storybook/builder-vite/vite-app.js". Does the file exist?
3:37:05 PM [vite] Internal server error: Failed to resolve import ".store/@storybook-react-virtual-d2ec5c3452/package/dist/entry-preview.mjs" from "/virtual:/@storybook/builder-vite/vite-app.js". Does the file exist?
Plugin: vite:import-analysis
File: /virtual:/@storybook/builder-vite/vite-app.js:7:81
5 |
6 | const getProjectAnnotations = async (hmrPreviewAnnotationModules = []) => {
7 | const configs = await Promise.all([hmrPreviewAnnotationModules[0] ?? import('.store/@storybook-react-virtual-d2ec5c3452/package/dist/entry-preview.mjs'),
| ^
8 | hmrPreviewAnnotationModules[1] ?? import('.store/@storybook-react-virtual-d2ec5c3452/package/dist/entry-preview-docs.mjs'),
```
### Reproduction link
https://github.com/ethanwu10/sb-yarn-pnpm-vite-repro
### Reproduction steps
1. Clone repro
2. Run `yarn storybook`
3. Observe errors from Vite + storybook iframe does not load
### System
```bash
System:
OS: macOS 14.7
CPU: (10) arm64 Apple M2 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 23.1.0 - /private/var/folders/44/y4hzbjt52nv8ddgrvy_p85g00000gn/T/xfs-d59be415/node
Yarn: 4.5.1 - /private/var/folders/44/y4hzbjt52nv8ddgrvy_p85g00000gn/T/xfs-d59be415/yarn <----- active
npm: 10.9.0 - /opt/homebrew/bin/npm
Browsers:
Safari: 18.0
npmPackages:
@storybook/addon-essentials: ^8.4.4 => 8.4.4
@storybook/addon-interactions: ^8.4.4 => 8.4.4
@storybook/addon-onboarding: ^8.4.4 => 8.4.4
@storybook/blocks: ^8.4.4 => 8.4.4
@storybook/react: ^8.4.4 => 8.4.4
@storybook/react-vite: ^8.4.4 => 8.4.4
@storybook/test: ^8.4.4 => 8.4.4
eslint-plugin-storybook: ^0.11.0 => 0.11.0
storybook: ^8.4.4 => 8.4.4
```
### Additional context
_No response_ | bug,help wanted,core,pnpm,yarn | low | Critical |
2,660,390,722 | PowerToys | Bug: Inconsistent Window Focus Switching with Win + PageUp/Down in FancyZones | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
FancyZones
### Steps to reproduce
1. Create 1 layout with 2 or more zones.
2. Open two or more windows in the same zone.
3. Minimize one of the windows.
4. Click on the window that remains visible in the zone.
5. Use the shortcuts **Win + PageUp** and **Win + PageDown** to see the focus bug.
If this is not enough to reproduce the issue, try:
1. Open Discord in one of the zones.
2. Minimize or close Discord.
3. Use the shortcuts **Win + PageUp** and **Win + PageDown**.
### ✔️ Expected Behavior
- Pressing **Win + PageUp** and **Win + PageDown** should cycle through the windows in each zone smoothly.
- Windows should gain focus and be brought to the front as they are selected.
### ❌ Actual Behavior
- Inconsistent behavior between zones:
- Sometimes, windows lose focus and are not brought to the front.
- This issue occurs randomly in any of the zones.
- Repeating the command multiple times sometimes brings the window to the front, but not consistently.
| Issue-Bug,Needs-Triage | low | Critical |
2,660,418,112 | react-native | [0.76] `KeyboardAvoidingView` animation issues since 0.76+ upgrade | ### Description
I've updated my project from 0.75.2 to 0.76.1 this morning, and I noticed that the `KeyboardAvoidingView`, which was previously animating properly, is no longer animating. It now jumps and doesn't follow the keyboard animation.
I'm not sure why it doesn't work in my project, but while creating the reproduction repo I noticed the animation behaviour has changed and no longer looks natively smooth.
### Steps to reproduce
Any new project with KeyboardAvoidingView
### React Native Version
0.76.1
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.2
CPU: (10) arm64 Apple M1 Pro
Memory: 157.52 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.11.0
path: ~/.nvm/versions/node/v22.11.0/bin/node
Yarn:
version: 1.22.17
path: ~/.npm-global/bin/yarn
npm:
version: 10.9.0
path: ~/.nvm/versions/node/v22.11.0/bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/Alexis/.rbenv/shims/pod
```
### Stacktrace or Logs
```text
No crash
```
### Reproducer
https://github.com/alexmngn/reproducer-react-native-keyboard-avoiding
### Screenshots and Videos
### On my project
**Before update, 0.75.2**
https://github.com/user-attachments/assets/5089cd4d-fb2f-4bde-9a9b-a7790b989c23
**After update, 0.76.1**
https://github.com/user-attachments/assets/60bc6d95-c444-4492-887b-cdb7d4e562d1
### On the reproducer repo
**Before update, 0.75.2**
https://github.com/user-attachments/assets/83df6cab-b4c4-4698-a458-291ab37f982c
**After update, 0.76.1**
https://github.com/user-attachments/assets/8fa6d39c-88af-4527-b41c-d7ae2bc8e7a2
| Issue: Author Provided Repro,Impact: Regression,Component: KeyboardAvoidingView,API: Keyboard,Newer Patch Available,Needs: Attention | low | Critical |
2,660,440,542 | go | cmd/compile: TestScript/script_test_basics failures | ```
#!watchflakes
default <- pkg == "cmd/compile" && test == "TestScript/script_test_basics"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731270051656900545)):
=== RUN TestScript/script_test_basics
=== PAUSE TestScript/script_test_basics
=== CONT TestScript/script_test_basics
run.go:223: 2024-11-14T20:19:02Z
run.go:225: $WORK=/home/swarming/.swarming/w/ir/x/t/TestScriptscript_test_basics3024012314/001
run.go:232:
BOTO_CONFIG=/home/swarming/.swarming/w/ir/x/a/gsutil-bbagent/.boto
CIPD_ARCHITECTURE=arm64
CIPD_CACHE_DIR=/home/swarming/.swarming/w/ir/cache/cipd_cache
CIPD_PROTOCOL=v2
...
WORK=/home/swarming/.swarming/w/ir/x/t/TestScriptscript_test_basics3024012314/001
TMPDIR=/home/swarming/.swarming/w/ir/x/t/TestScriptscript_test_basics3024012314/001/tmp
# Test of the linker's script test harness. (21.847s)
> go build
> [!cgo] skip
[condition not met]
> cc -c testdata/mumble.c
run.go:232: FAIL: testdata/script/script_test_basics.txt:6: cc -c testdata/mumble.c: exec: WaitDelay expired before I/O complete
--- FAIL: TestScript/script_test_basics (21.87s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,OS-NetBSD,NeedsInvestigation,arch-arm64,compiler/runtime | low | Critical |
2,660,440,631 | go | net: TestUDPServer/0 failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestUDPServer/0"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731256705775492193)):
=== RUN TestUDPServer/0
server_test.go:258: udp :0<-127.0.0.1
server_test.go:315: client: read udp 127.0.0.1:51784: i/o timeout
server_test.go:322: server: read udp [::]:62172: i/o timeout
--- FAIL: TestUDPServer/0 (3600.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,660,442,015 | ui | [bug]: npx shadcn@latest url stuck at "Installing dependencies" | ### Describe the bug
When I run `npx shadcn@latest add <url>`, the process gets stuck at the "Installing dependencies" step and does not proceed
### Affected component/components
Cannot install any component using the cli
### How to reproduce
Run npx shadcn@latest add "https://v0.dev/chat/b/b_wslkIiIMLhs?token=eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..pt0lLMnobX0YxVIb._Fh59fncD8EAU8BlmQgAaC25DK0KqCgq4R86zw28bkXG-Hrg83lQmkIynis.I42dFW87PMfK4Fq5j_Ht5g"
Observe that the process stalls at "Installing dependencies."
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Checking registry.
Installing dependencies
```
### System Info
```bash
Ubuntu 24.04
node v22.11.0
bun 1.1.34
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,660,453,289 | ui | [bug]: Dropdown Menu using wrong icons | ### Describe the bug
It looks like the new york style was recently updated to use lucide icons, but DropDownMenu is trying to use radix icons when installed with cli.
I changed
`import { CheckIcon, ChevronRightIcon, DotFilledIcon } from "@radix-ui/react-icons"`
to
`import { CheckIcon, ChevronRightIcon, CircleIcon } from "lucide-react"`
and changed the single usage of `DotFilledIcon` to `CircleIcon`.
### Affected component/components
Dropdown Menu
### How to reproduce
**EDIT: This repro is incorrect, see next issue comment**
- init shadcn using new york style
- `npx shadcn@latest add dropdown-menu`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11
Powershell 7
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,660,458,516 | rust | Add a run-make test that checks our `core::ffi::c_*` types against Clang | It would be good to have a test that our C interop types are always compatible with C (`c_char`, `c_long`, `c_longlong` all vary to some extent). This will more or less be what @taiki-e did at https://github.com/rust-lang/rust/issues/129945, just rolled into a run-make test rather than a shell script.
The test will basically need to do the following:
1. Loop through each target string in the output of `rustc --print target-list`
2. Map the Rust target to an LLVM target if they aren't the same
3. Loop through a list of each `c_*` type available in `core::ffi`
4. Query `clang -E -dM -x c /dev/null -target LLVM_TARGET` to get a list of builtin definitions. Of note will be something like the following (comments added by me)
```c
#define __CHAR_BIT__ 8 // c_char
#define __CHAR_UNSIGNED__ 1 // char signedness
#define __SIZEOF_DOUBLE__ 8 // c_double
#define __SIZEOF_FLOAT__ 4 // c_float
#define __SIZEOF_INT__ 4 // c_int
#define __SIZEOF_LONG_DOUBLE__ 16 // c_longdouble
#define __SIZEOF_LONG_LONG__ 8 // c_longlong
#define __SIZEOF_LONG__ 4 // c_long
#define __SIZEOF_POINTER__ 4 // *const c_void
#define __SIZEOF_PTRDIFF_T__ 4 // c_ptrdiff_t
#define __SIZEOF_SHORT__ 2 // c_short
#define __SIZEOF_SIZE_T__ 4 // c_size_t
#define __BOOL_WIDTH__ 8 // bool
#define __INTPTR_WIDTH__ 32 // isize
#define __INT_WIDTH__ 32 // c_int
#define __LLONG_WIDTH__ 64 // c_longlong
#define __LONG_WIDTH__ 32 // c_long
#define __POINTER_WIDTH__ 32 // *const c_void
#define __PTRDIFF_WIDTH__ 32 // c_ptrdiff_t
#define __SHRT_WIDTH__ 16 // c_short
#define __SIZE_WIDTH__ 32 // c_size_t
#define __UINTPTR_WIDTH__ 32 // usize
```
5. Use the above to construct a simple Rust program that can verify the sizes and (when applicable) signedness line up at compile time. Probably do an assignment that will catch mismatched types plus a const assert for anything that doesn't have literals.
```rust
// input to check `c_short`
const C_SHORT: u{c_sizeof_short} = 0;
const RUST_SHORT: core::ffi::c_short = C_SHORT;
```

```rust
// input to check pointer width
const _: () = assert!(size_of::<*const core::ffi::c_void>() * 8 == {c_pointer_width});
```
6. Run `rustc -Z no-codegen` and check success/failure against a list of xfail targets.
`run_make_support` has [`clang`](https://doc.rust-lang.org/nightly/nightly-rustc/run_make_support/external_deps/clang/index.html) available, example usage: https://github.com/rust-lang/rust/blob/e84902d35a4d3039c794e139eb12fba3624c5ff1/tests/run-make/cross-lang-lto-clang/rmake.rs. There is probably some way we could check this on MSVC targets too.
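Steps 4 and 5 could be prototyped roughly like this (a Python sketch with made-up helper names, not the eventual `rmake.rs`; the real test would drive `clang` through `run_make_support` rather than parse a canned string):

```python
import re

def parse_clang_defines(dm_output: str) -> dict[str, int]:
    """Parse `clang -E -dM` output into {macro_name: int_value}."""
    defines = {}
    for m in re.finditer(r"#define (__\w+__) (\d+)", dm_output):
        defines[m.group(1)] = int(m.group(2))
    return defines

def generate_rust_checks(defines: dict[str, int]) -> str:
    """Emit Rust source with compile-time size checks (step 5)."""
    lines = []
    # Const assert that c_short matches __SIZEOF_SHORT__ from Clang.
    short_bits = defines["__SIZEOF_SHORT__"] * 8
    lines.append(
        f"const _: () = assert!(size_of::<core::ffi::c_short>() * 8 == {short_bits});")
    # Const assert that pointer width matches __POINTER_WIDTH__.
    ptr_bits = defines["__POINTER_WIDTH__"]
    lines.append(
        f"const _: () = assert!(size_of::<*const core::ffi::c_void>() * 8 == {ptr_bits});")
    return "\n".join(lines)

# Two of the macros from the sample dump above, for a 32-bit target.
sample = """\
#define __SIZEOF_SHORT__ 2
#define __POINTER_WIDTH__ 32
"""
print(generate_rust_checks(parse_clang_defines(sample)))
```

The generated source would then be fed to `rustc -Z no-codegen` per step 6; a mismatched size fails the const assert at compile time.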
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"ricci009"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-testsuite,E-hard,A-FFI,E-needs-test,T-libs,E-needs-design | low | Critical |
2,660,459,634 | PowerToys | Always On Top does not work for some applications | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Always on Top
### Steps to reproduce
Always On Top feature does not work for some applications
1. MobaXTerm
2. VMWare Workstation
3. Netcam Studio X
Is it because of the MDI window implementation? Is there any workaround, by the way? This drives me nuts because it's nearly perfect otherwise.
### ✔️ Expected Behavior
Support for these apps described above
### ❌ Actual Behavior
-
### Other Software
1. MobaXTerm
2. VMWare Workstation
3. Netcam Studio X | Issue-Bug,Needs-Triage | low | Minor |
2,660,459,770 | yt-dlp | [gem.cbc.ca] Some episodes have broken audio formats | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Canada and possibly USA
### Provide a description that is worded well enough to be understood
This is the first time I have attempted to download videos from gem.cbc.ca.
I had the same problem on these two shows:
[Heartland | Season 18 | CBC Gem](https://gem.cbc.ca/heartland/s18)
8 items
```
yt-dlp --version
2024.11.04
```
```
u=https://gem.cbc.ca/heartland/s18
yt-dlp -u $user -p $pw -F "$u"
hls-audio_0-English__Descriptive_ mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English (Descriptive)
hls-audio_1-English__Descriptive_ mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English (Descriptive)
hls-audio_2-English__Descriptive_ mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English (Descriptive)
hls-audio_0-English mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English
hls-audio_1-English mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English
hls-audio_2-English mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English
hls-321 mp4 416x234 10 │ ~101.46MiB 321k m3u8 │ avc1.42C00C 321k video only
hls-431 mp4 416x234 15 │ ~136.22MiB 431k m3u8 │ avc1.42C00C 431k video only
hls-651 mp4 416x234 30 │ ~205.72MiB 651k m3u8 │ avc1.42C00D 651k video only
hls-871 mp4 640x360 30 │ ~275.22MiB 871k m3u8 │ avc1.42C01E 871k video only
hls-1311 mp4 640x360 30 │ ~414.22MiB 1311k m3u8 │ avc1.42C01E 1311k video only
hls-2191 mp4 960x540 30 │ ~692.20MiB 2191k m3u8 │ avc1.4D401F 2191k video only
hls-2961 mp4 1280x720 30 │ ~935.43MiB 2961k m3u8 │ avc1.4D401F 2961k video only
hls-3951 mp4 1280x720 30 │ ~ 1.22GiB 3951k m3u8 │ avc1.4D401F 3951k video only
hls-6811 mp4 1920x1080 30 │ ~ 2.10GiB 6811k m3u8 │ avc1.640028 6811k video only
```
[Downton Abbey | Season 1 | CBC Gem](https://gem.cbc.ca/downton-abbey/s01)
8 items
```
u=https://gem.cbc.ca/downton-abbey/s01
yt-dlp -u $user -p $pw -F "$u"
hls-audio_0-English__Descriptive_ mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English (Descriptive)
hls-audio_1-English__Descriptive_ mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English (Descriptive)
hls-audio_2-English__Descriptive_ mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English (Descriptive)
hls-audio_0-English mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English
hls-audio_1-English mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English
hls-audio_2-English mp4 audio only │ m3u8 │ audio only mp4a.40.2 [eng] English
hls-321-0 mp4 416x234 10 │ ~150.35MiB 321k m3u8 │ avc1.42C00C 321k video only
hls-321-1 mp4 416x234 10 │ ~150.35MiB 321k m3u8 │ avc1.42C00C 321k video only
hls-431-0 mp4 416x234 15 │ ~201.86MiB 431k m3u8 │ avc1.42C00C 431k video only
hls-431-1 mp4 416x234 15 │ ~201.87MiB 431k m3u8 │ avc1.42C00C 431k video only
hls-651 mp4 416x234 30 │ ~304.87MiB 651k m3u8 │ avc1.42C00D 651k video only
hls-871 mp4 640x360 30 │ ~407.86MiB 871k m3u8 │ avc1.42C01E 871k video only
hls-1311 mp4 640x360 30 │ ~613.87MiB 1311k m3u8 │ avc1.42C01E 1311k video only
hls-2191 mp4 960x540 30 │ ~ 1.00GiB 2191k m3u8 │ avc1.4D401F 2191k video only
hls-2961 mp4 1280x720 30 │ ~ 1.35GiB 2961k m3u8 │ avc1.4D401F 2961k video only
hls-3951 mp4 1280x720 30 │ ~ 1.81GiB 3952k m3u8 │ avc1.4D401F 3952k video only
hls-6811 mp4 1920x1080 30 │ ~ 3.11GiB 6812k m3u8 │ avc1.640028 6812k video only
```
Attempted to download playlist item 3:
```
u=https://gem.cbc.ca/downton-abbey/s01
yt-dlp -vU --embed-subs --no-warnings -u $user -p $pw --concurrent-fragments 20 --playlist-items 3 -o "%(playlist_index-1)02d %(title)s - %(id)s - %(resolution)s %(format_id)s.%(ext)s" $u
```

I hadn't run the command with the `-vU` flag before, but the debug output doesn't help me in any case. As you can see, the default bestaudio format chosen is `hls-audio_2-English`. Herein lies the problem. Note that both the video and the audio files downloaded without a problem and the intermediary file was created successfully.
Video file:
Downton Abbey/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-6812.mp4
49 min 41 s
Audio file:
Downton Abbey/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.temp.mp4
35 min 35 s
Intermediary file:
Downton Abbey/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-audio_2-English.mp4
49 min 46 s
So the problem must be "Error muxing a packet", due to a mismatch in the duration of each file.
When I specified the format `-f hls-6811+hls-audio_1-English`, the second best audio track, the files merged successfully because the duration was the same as the video file.
I don't know why the best audio track would have a shorter duration like that.
Is there a way to automatically test for this situation before selecting the file to download or would it be difficult to implement?
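For what it's worth, the kind of pre-merge sanity check being asked about could look roughly like this (a standalone Python sketch, not yt-dlp code; `durations_match` and `probe_duration` are invented names, and ffprobe can only measure files after download, so doing this before download would need the manifest durations instead):

```python
import json
import subprocess

def durations_match(video_s: float, audio_s: float, tolerance_s: float = 2.0) -> bool:
    """Treat a video/audio pair as mergeable if their durations differ
    by less than tolerance_s seconds."""
    return abs(video_s - audio_s) < tolerance_s

def probe_duration(path: str) -> float:
    """Ask ffprobe for a file's container duration in seconds
    (requires ffmpeg/ffprobe on PATH)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(json.loads(out)["format"]["duration"])

# The broken pair from this report: 49:41 of video vs 35:35 of audio.
print(durations_match(49 * 60 + 41, 35 * 60 + 35))  # prints False
```

A `False` here would be the signal to fall back to another audio track before invoking the merger.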
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--embed-subs', '--no-warnings', '-u', 'PRIVATE', '-p', 'PRIVATE', '--concurrent-fragments', '20', '--playlist-items', '3', '-o', 'Downton Abbey/test/%(playlist_index-1)02d %(title)s - %(id)s - %(resolution)s %(format_id)s.%(ext)s', 'https://gem.cbc.ca/downton-abbey/s01']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.11.12.232900 from yt-dlp/yt-dlp-nightly-builds [f2a4983df] (darwin_exe)
[debug] Python 3.12.7 (CPython x86_64 64bit) - macOS-13.6.5-x86_64-i386-64bit (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1-tessus (setts), ffprobe 7.1-tessus, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.11.12.232900 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.11.12.232900 from yt-dlp/yt-dlp-nightly-builds)
[gem.cbc.ca:playlist] Extracting URL: https://gem.cbc.ca/downton-abbey/s01
[gem.cbc.ca:playlist] downton-abbey/s01: Downloading JSON metadata
[download] Downloading playlist: Season 1
[gem.cbc.ca:playlist] Playlist Season 1: Downloading 1 items of 8
[download] Downloading item 1 of 1
[debug] Using fake IP 99.229.56.236 (CA) as X-Forwarded-For
[debug] Loading cbcgem.claims_token from cache
[gem.cbc.ca] Extracting URL: https://gem.cbc.ca/media/downton-abbey/s01e02
[gem.cbc.ca] downton-abbey/s01e02: Downloading JSON metadata
[gem.cbc.ca] downton-abbey/s01e02: Downloading JSON metadata
[gem.cbc.ca] downton-abbey/s01e02: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] downton-abbey/s01e02: Downloading 1 format(s): hls-6812+hls-audio_2-English
[info] There are no subtitles for the requested languages
[debug] Invoking hlsnative downloader on "https://cbcrcott-aws-gem.akamaized.net/hdntl=exp=1731718281~acl=%2f*~data=hdntl~hmac=dd35076ac014891b2dc485a3a730f3e8570274413896b24b2c2a2ecc0038a8b1/out/v1/bbff1b20e0c04d71b9dca94fcf62e9d1/026bf27581e640c4b4fb78ae5aaa5021/4992c413374a4be4af23a7b5453f28df/8bfaea6bd68d41f997b3e25192a3b4a4/6baf985d549245c48f39e42930e10a5e/index-aes_1.m3u8?aka_me_session_id=AAAAAAAAAACJ7DdnAAAAAPjEDEC4ICuh1eVFmZezPWo6n0FVJkSkC0lV1MK5TdIHoIz3hKJlKDDiIld7JePYnKl4c58cKqq3&aka_media_format_type=hls&pckgrp=bf5b9c2800b7e86d48330ceb5add54a4"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 993
[download] Destination: Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-6812.mp4
[download] 100% of 2.15GiB in 00:02:10 at 16.85MiB/s
[debug] Invoking hlsnative downloader on "https://cbcrcott-aws-gem.akamaized.net/hdntl=exp=1731718281~acl=%2f*~data=hdntl~hmac=dd35076ac014891b2dc485a3a730f3e8570274413896b24b2c2a2ecc0038a8b1/out/v1/bbff1b20e0c04d71b9dca94fcf62e9d1/026bf27581e640c4b4fb78ae5aaa5021/4992c413374a4be4af23a7b5453f28df/8bfaea6bd68d41f997b3e25192a3b4a4/6baf985d549245c48f39e42930e10a5e/index-aes_16_0.m3u8?aka_me_session_id=AAAAAAAAAACJ7DdnAAAAAPjEDEC4ICuh1eVFmZezPWo6n0FVJkSkC0lV1MK5TdIHoIz3hKJlKDDiIld7JePYnKl4c58cKqq3&aka_media_format_type=hls&pckgrp=bf5b9c2800b7e86d48330ceb5add54a4"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 993
[download] Destination: Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-audio_2-English.mp4
[download] 100% of 45.57MiB in 00:00:14 at 3.20MiB/s
[debug] ffmpeg command line: ffprobe -show_streams 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-audio_2-English.mp4'
[Merger] Merging formats into "Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-6812.mp4' -i 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-audio_2-English.mp4' -c copy -map 0:v:0 -map 1:a:0 -movflags +faststart 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.temp.mp4'
[debug] ffmpeg version 7.1-tessus https://evermeet.cx/ffmpeg/ Copyright (c) 2000-2024 the FFmpeg developers
built with Apple clang version 16.0.0 (clang-1600.0.26.3)
configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libharfbuzz --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.100 / 61. 19.100
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
Input #0, mpegts, from 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-6812.mp4':
Duration: 00:49:41.98, start: 2.095867, bitrate: 6189 kb/s
Program 1
Stream #0:0[0x1e1]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn
[eac3 @ 0x7fab0e904b40] Estimating duration from bitrate, this may be inaccurate
Input #1, eac3, from 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.fhls-audio_2-English.mp4':
Duration: 00:49:46.51, start: 0.000000, bitrate: 128 kb/s
Stream #1:0: Audio: eac3, 48000 Hz, stereo, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (copy)
[mp4 @ 0x7fab0ea04c80] track 1: codec frame size is not set
Output #0, mp4, to 'file:Downton Abbey/test/02 Episode 2 - downton-abbey⧸s01e02 - 1920x1080 hls-6812+hls-audio_2-English.temp.mp4':
Metadata:
encoder : Lavf61.7.100
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 29.97 fps, 29.97 tbr, 90k tbn
Stream #0:1: Audio: eac3 (ec-3 / 0x332D6365), 48000 Hz, stereo, fltp, 128 kb/s
Press [q] to stop, [?] for help
frame= 2584 fps=0.0 q=-1.0 size= 64256KiB time=00:01:26.31 bitrate=6098.1kbits/s speed= 171x
frame= 5088 fps=5036 q=-1.0 size= 126976KiB time=00:02:49.85 bitrate=6123.9kbits/s speed= 168x
frame= 6316 fps=4169 q=-1.0 size= 157440KiB time=00:03:30.74 bitrate=6120.0kbits/s speed= 139x
frame= 7382 fps=3657 q=-1.0 size= 184064KiB time=00:04:06.44 bitrate=6118.4kbits/s speed= 122x
frame= 8040 fps=3187 q=-1.0 size= 200448KiB time=00:04:28.36 bitrate=6118.7kbits/s speed= 106x
frame= 9717 fps=3209 q=-1.0 size= 242432KiB time=00:05:24.29 bitrate=6124.1kbits/s speed= 107x
frame=11073 fps=3135 q=-1.0 size= 276224KiB time=00:06:09.53 bitrate=6123.4kbits/s speed= 105x
frame=13745 fps=3406 q=-1.0 size= 342784KiB time=00:07:38.72 bitrate=6121.5kbits/s speed= 114x
frame=14802 fps=3258 q=-1.0 size= 369408KiB time=00:08:13.99 bitrate=6126.0kbits/s speed= 109x
frame=15498 fps=3071 q=-1.0 size= 386816KiB time=00:08:37.21 bitrate=6126.6kbits/s speed= 103x
frame=16902 fps=3045 q=-1.0 size= 421632KiB time=00:09:24.03 bitrate=6123.8kbits/s speed= 102x
frame=18374 fps=3034 q=-1.0 size= 458496KiB time=00:10:13.21 bitrate=6125.1kbits/s speed= 101x
frame=20299 fps=3090 q=-1.0 size= 506624KiB time=00:11:17.37 bitrate=6127.0kbits/s speed= 103x
frame=21266 fps=3006 q=-1.0 size= 530688KiB time=00:11:49.57 bitrate=6126.8kbits/s speed= 100x
frame=23340 fps=3080 q=-1.0 size= 582400KiB time=00:12:58.84 bitrate=6125.8kbits/s speed= 103x
frame=24902 fps=3078 q=-1.0 size= 621312KiB time=00:13:50.99 bitrate=6124.9kbits/s speed= 103x
frame=24902 fps=2897 q=-1.0 size= 621312KiB time=00:13:50.99 bitrate=6124.9kbits/s speed=96.7x
frame=25886 fps=2844 q=-1.0 size= 645888KiB time=00:14:23.82 bitrate=6125.2kbits/s speed=94.9x
frame=28757 fps=2994 q=-1.0 size= 717568KiB time=00:15:59.62 bitrate=6125.6kbits/s speed=99.9x
frame=29332 fps=2901 q=-1.0 size= 731904KiB time=00:16:18.77 bitrate=6125.8kbits/s speed=96.8x
frame=29617 fps=2788 q=-1.0 size= 739072KiB time=00:16:28.28 bitrate=6126.2kbits/s speed= 93x
frame=31621 fps=2842 q=-1.0 size= 789248KiB time=00:17:35.18 bitrate=6127.4kbits/s speed=94.8x
frame=33068 fps=2843 q=-1.0 size= 825344KiB time=00:18:23.36 bitrate=6127.8kbits/s speed=94.9x
frame=33269 fps=2741 q=-1.0 size= 830208KiB time=00:18:30.17 bitrate=6126.1kbits/s speed=91.5x
frame=34419 fps=2721 q=-1.0 size= 858880KiB time=00:19:08.51 bitrate=6126.1kbits/s speed=90.8x
frame=36577 fps=2780 q=-1.0 size= 912896KiB time=00:20:20.41 bitrate=6127.8kbits/s speed=92.8x
frame=38644 fps=2829 q=-1.0 size= 964352KiB time=00:21:29.52 bitrate=6126.3kbits/s speed=94.4x
frame=39588 fps=2795 q=-1.0 size= 987904KiB time=00:22:01.01 bitrate=6126.3kbits/s speed=93.3x
frame=40591 fps=2767 q=-1.0 size= 1012992KiB time=00:22:34.48 bitrate=6126.6kbits/s speed=92.3x
frame=42191 fps=2780 q=-1.0 size= 1052928KiB time=00:23:27.87 bitrate=6126.7kbits/s speed=92.8x
frame=44326 fps=2827 q=-1.0 size= 1106176KiB time=00:24:39.11 bitrate=6126.5kbits/s speed=94.3x
frame=46130 fps=2851 q=-1.0 size= 1151232KiB time=00:25:39.30 bitrate=6126.7kbits/s speed=95.1x
frame=48022 fps=2878 q=-1.0 size= 1198592KiB time=00:26:42.36 bitrate=6127.7kbits/s speed= 96x
frame=49861 fps=2900 q=-1.0 size= 1244416KiB time=00:27:43.79 bitrate=6127.1kbits/s speed=96.8x
frame=52116 fps=2944 q=-1.0 size= 1300736KiB time=00:28:59.00 bitrate=6127.4kbits/s speed=98.2x
frame=53721 fps=2950 q=-1.0 size= 1340672KiB time=00:29:52.59 bitrate=6126.8kbits/s speed=98.4x
frame=55521 fps=2967 q=-1.0 size= 1385728KiB time=00:30:52.61 bitrate=6127.5kbits/s speed= 99x
frame=57944 fps=3015 q=-1.0 size= 1446144KiB time=00:32:13.49 bitrate=6127.1kbits/s speed= 101x
frame=60321 fps=3058 q=-1.0 size= 1505536KiB time=00:33:32.81 bitrate=6127.4kbits/s speed= 102x
frame=62085 fps=3069 q=-1.0 size= 1549568KiB time=00:34:31.63 bitrate=6127.6kbits/s speed= 102x
[vost#0:0/copy @ 0x7fab0ea0d780] Error submitting a packet to the muxer: Invalid data found when processing input
[vost#0:0/copy @ 0x7fab0ea0d780] Error submitting a packet to the muxer: Invalid data found when processing input
[out#0/mp4 @ 0x7fab0ea04bc0] Error muxing a packet
[out#0/mp4 @ 0x7fab0ea04bc0] Task finished with error code: -1094995529 (Invalid data found when processing input)
[out#0/mp4 @ 0x7fab0ea04bc0] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[mp4 @ 0x7fab0ea04c80] Starting second pass: moving the moov atom to the beginning of the file
[out#0/mp4 @ 0x7fab0ea04bc0] Error writing trailer: Invalid data found when processing input
[out#0/mp4 @ 0x7fab0ea04bc0] video:1564089KiB audio:33415KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: 0.083344%
frame=63994 fps=1400 q=-1.0 Lsize= 1598835KiB time=00:35:35.29 bitrate=6133.9kbits/s speed=46.7x
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "yt_dlp/YoutubeDL.py", line 3557, in process_info
File "yt_dlp/YoutubeDL.py", line 3741, in post_process
File "yt_dlp/YoutubeDL.py", line 3723, in run_all_pps
File "yt_dlp/YoutubeDL.py", line 3701, in run_pp
File "yt_dlp/postprocessor/common.py", line 23, in run
File "yt_dlp/postprocessor/common.py", line 128, in wrapper
File "yt_dlp/postprocessor/ffmpeg.py", line 840, in run
File "yt_dlp/postprocessor/ffmpeg.py", line 330, in run_ffmpeg_multiple_files
File "yt_dlp/postprocessor/ffmpeg.py", line 368, in real_run_ffmpeg
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
[download] Finished downloading playlist: Season 1
```
| account-needed,geo-blocked,site-bug,triage | low | Critical |
2,660,460,601 | godot | Engine supports multiple translation objects per locale but `TranslationServer.get_translation_object` is only capable of fetching one | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads)
### Issue description
The engine supports adding multiple translation objects per locale, but `TranslationServer.get_translation_object` is only capable of fetching one of them. For example, from the project settings, you can add one file for Spanish UI translations, and another one for Spanish dialog translations. Both of these translation resources will work, and provide translations properly in the game. However, if you call `TranslationServer.get_translation_object("es")`, you will only get back one of them. The other will not be accessible at all.
If the intended usage is that you should only have one translation object per locale, then the project settings UI should reflect and enforce this. If it is intended that multiple are allowed, then the API for `TranslationServer` should support fetching *all* translation objects for a locale, or at least fetch the entire list of translation objects.
### Steps to reproduce
See above.
### Minimal reproduction project (MRP)
N/A. See above. | discussion,topic:gui | low | Minor |
2,660,507,593 | flutter | Is `--run-skipped` intended to work on flutter/flutter? | Both `dart test` and `flutter test` support a `--run-skipped`, which is intended to mean "run tests otherwise that would be skipped". Ideally it would be used to temporarily (either locally or on a `bringup: true` task) run tests that are flaking or failing for reasons that we don't want to affect the mainline tree or development experience.
For example:
```sh
cd flutter/packages/flutter_tools
dart test test/general.shard --run-skipped
```
However, sometimes we use skip to mean the test _can't_ run in this configuration, such as:
```dart
test('...', () {
// ...
}, skip: !Platform.isWindows ? 'Test only runs on windows' : false);
```
In that case, `--run-skipped` not only can _never_ work, but now the feature cannot be used across the codebase.
For Dart (`dart test`-instrumented) tests, we could use `@TestOn` instead:
```dart
@TestOn('windows')
library;
// ...
void main() {
test('...', () {
});
}
```
However, my understanding is this feature does not work in `flutter test`, so it would not work everywhere/we would diverge.
It would be awesome to have some instruction, outside of commenting code in/out, how we could run skipped tests but also not run tests that are never intended to run in a given configuration or platform. Ideally (to me) that would mean using `--run-skipped` and perhaps using/supporting `@TestOn` broadly, but that is obviously more work. | a: tests,team,c: proposal,P3,c: tech-debt,team-tool,triaged-tool | low | Minor |
2,660,513,533 | yt-dlp | Site Support Request for Fansly.com (NSFW) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
United States
### Example URLs
- Single video: https://fansly.com/post/713619348626874370
- Live stream, username version: https://fansly.com/live/YuukoVT
- Live stream, user ID version: https://fansly.com/live/419672342336118784
### Provide a description that is worded well enough to be understood
Fansly is a livestream platform for content creators. There is a combination of SFW and NSFW content. I would like to be able to record free livestreams of interest using yt-dlp, but it fails at the task.
Fansly also has VODs of past streams. It would be helpful to be able to record these as well.
For testing yt-dlp, discovering currently live streams on fansly is a challenge because there is no global list of everyone who is live. However, the webapp will suggest live channels you are following. This means that it's best to follow many channels so at any given time you have a good chance of having one of your followed channels being live. I have an account following hundreds of channels which I would be willing to share.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://fansly.com/live/YuukoVT']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.07.25 from yt-dlp/yt-dlp [f0993391e] (zip)
[debug] Python 3.12.6 (CPython x86_64 64bit) - Linux-6.10.13-3-MANJARO-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, rtmpdump 2.4
[debug] Optional libraries: certifi-2024.08.30, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1829 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.11.04/SHA2-256SUMS
Current version: stable@2024.07.25 from yt-dlp/yt-dlp
Latest version: stable@2024.11.04 from yt-dlp/yt-dlp
Current Build Hash: 72a16e7a277643eb9e42046737e263e63ec75c6e4532914c918883ebe64db527
Updating to stable@2024.11.04 from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp from https://github.com/yt-dlp/yt-dlp/releases/download/2024.11.04/yt-dlp
Updated yt-dlp to stable@2024.11.04 from yt-dlp/yt-dlp
[debug] Restarting: python3 /home/cj/.local/bin/yt-dlp -vU https://fansly.com/live/YuukoVT
[debug] Command-line config: ['-vU', 'https://fansly.com/live/YuukoVT']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.11.04 from yt-dlp/yt-dlp [197d0b03b] (zip)
[debug] Python 3.12.6 (CPython x86_64 64bit) - Linux-6.10.13-3-MANJARO-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, rtmpdump 2.4
[debug] Optional libraries: certifi-2024.08.30, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.11.04 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.11.04 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://fansly.com/live/YuukoVT
[generic] YuukoVT: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] YuukoVT: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://fansly.com/live/YuukoVT
Traceback (most recent call last):
File "/home/cj/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cj/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1760, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/home/cj/.local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cj/.local/bin/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://fansly.com/live/YuukoVT
```
| site-request,NSFW,account-needed,triage,can-share-account | low | Critical |
2,660,515,189 | transformers | Translating attention.md to Chinese | Hi!
I have translated attention.md into Chinese, and here is my PR: #34716.
I would appreciate it if someone could review and comment on it. Thanks!
| Documentation,WIP | low | Minor |
2,660,527,709 | godot | Allowing PinJoint2D attached RigidBody2D to become perfectly still causes them to ghost in the direction of velocity. | ### Tested versions
- Reproducible in 4.0.stable, 4.1.stable, 4.2.stable, 4.3.stable, 4.4.dev4
### System information
Godot v4.3.stable.mono - Windows 10.0.22621 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz (4 Threads)
### Issue description
When a PinJoint2D attached RigidBody2D becomes perfectly still, the RigidBody2D will begin to ghost forward in the direction of its velocity when it begins to move again.
### Steps to reproduce
Drive the player vehicle slowly into either side of the platform, when it is lodged in the corner it will become perfectly still.
Leave it for a few seconds, after which the wheels will begin to ghost in front of the vehicle towards the direction it is moving when you begin to drive it again.
### Minimal reproduction project (MRP)
[bugtest.zip](https://github.com/user-attachments/files/17760596/bugtest.zip)
| bug,topic:physics,topic:2d | low | Critical |
2,660,530,355 | pytorch | Floating point exception (core dumped) in `torch.sparse.sampled_addmm` | ### 🐛 Describe the bug
Under specific inputs, torch.sparse.sampled_addmm triggered a crash.
```
https://colab.research.google.com/drive/1FTI99hk9H25wz_ZvrqJQGMlxadeLyQ3V?usp=sharing
```
output:
```
ed1e28.py:5: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at ../aten/src/ATen/SparseCsrTensorImpl.cpp:53.)
mask = torch.sparse_coo_tensor(torch.stack([torch.arange(N), torch.arange(N)], dim=0), torch.ones(N)).cuda().to_sparse_csr()
Floating point exception (core dumped)
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_
tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x
2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanc
ed tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni
avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_ep
p hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm m
d_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged | low | Critical |
2,660,540,502 | ui | [bug]: Sheet Component Using Wrong Icon | ### Describe the bug
The CLI still uses `import { Cross2Icon } from "@radix-ui/react-icons"`, but it should be `import { X } from "lucide-react"`; likewise `<Cross2Icon className="h-4 w-4" />` should be `<X className="h-4 w-4" />`
### Affected component/components
Sheet
### How to reproduce
- Init Shadcn with new york style
- `npx shadcn@latest add sheet`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS Ventura
npm 10.8.3
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,660,647,346 | ui | [feat]: add year through dropdown in date picker instead of scrolling | ### Feature description
Clicking through the navigation buttons is a lot of effort for a user who wishes to choose a date a decade from now.
One possible fix is to allow entering dates via the keyboard, or to add a dropdown that lists all the years for direct selection.

### Affected component/components
date picker, calendar, popover
### Additional Context
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,660,752,646 | deno | Add Support for Nested Deno Workspaces with Domain/Subdomain | ### 💡 Feature Request: Nested Deno Workspaces
**Description**
Add support for nested workspace configuration in Deno projects to better organize code by domains and subdomains, similar to how pnpm/yarn workspaces work for Node.js.
**Current Situation**
- Currently, Deno projects are typically structured as single workspaces
- No built-in support for organizing multiple related packages in a monorepo structure
- Challenging to maintain domain-driven design with current workspace limitations
**Proposed Solution**
Support nested workspace configuration through:
1. Extended `deno.json` workspace configuration:
```json
{
"workspace": {
"domains": {
"core": "./domains/core",
"auth": "./domains/auth",
"api": {
"public": "./domains/api/public",
"internal": "./domains/api/internal"
}
}
}
}
```

Basically, I want to have separate domain and subdomain workspaces within the same repository. | suggestion,needs info,workspaces | low | Major |
2,660,776,713 | angular | Documentation should explain the secondary endpoints for library creation | ### Describe the problem that you experienced
Currently the Angular documentation doesn't explain secondary entry points for library creation: https://github.com/ng-packagr/ng-packagr/blob/main/docs/secondary-entrypoints.md . I struggled with this for 3 days without this piece of information.
### Enter the URL of the topic with the problem
https://angular.dev/tools/libraries/creating-libraries
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | area: docs | low | Critical |
2,660,826,500 | angular | HttpClient fails to capture error when an JSON parse error occurs due to string size limit and instead sets response to null and completes without error. | ### Which @angular/* package(s) are the source of the bug?
common
### Is this a regression?
No
### Background
Chromium based browsers have a max string size of 512 MiB.
We are trying to fetch a large amount of JSON data to the browser.
The request is a POST request which is streamed from a Spring Boot REST API with Spring Boot's _StreamingResponseBody_
### Bug
When trying to fetch more than 512 MiB of JSON data, HttpClient does not cope with it. Instead of producing an error, the request completes as if it were a success, but the response body is null.
```
//fetch 600mb of JSON data
this.salesRowService.getSalesRows().subscribe(
{
next: salesResponse => {
console.log(salesResponse); // null
},
error:(error) => {
//this error callback is never invoked either
}
}
);
```
When doing the same request with jQuery, the error callback is executed as expected, with:
`statusText: parseerror`
### Expectation
I expect the error() callback to be executed instead of the next() with a null response
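As a stopgap on the application side, the silent null body can be turned into an explicit error before use (hypothetical helper, not part of Angular's API):

```javascript
// Hypothetical guard: a 2xx JSON response whose parsed body is null most
// likely hit a parse failure (e.g. Chromium's ~512 MiB string limit),
// so surface it as an error instead of passing null to next().
function assertParsedBody(body) {
  if (body === null) {
    throw new Error("JSON body is null - possible parse failure on an oversized response");
  }
  return body;
}
```

In the subscription above, `next` could pass `salesResponse` through `assertParsedBody` before using the data.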
### Notes
This only happens on Chromium. Firefox has a larger max string size.
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 18.2.5
Node: 22.9.0
Package Manager: npm 10.8.3
OS: win32 x64
Angular: 18.2.5
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, language-service, localize, platform-browser
... platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.5
@angular-devkit/build-angular 18.2.5
@angular-devkit/core 18.2.5
@angular-devkit/schematics 18.2.5
@schematics/angular 18.2.5
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
``` | breaking changes,area: common/http | low | Critical |
2,660,844,486 | pytorch | Apple Silicon Unified Memory usage | ### 🚀 The feature, motivation and pitch
Since Macs with Apple silicon have unified memory, why does PyTorch still need to copy tensors between CPU and MPS? This doubles the memory usage. Could we use MTLStorageMode.shared, which is explained in the Metal documentation at https://developer.apple.com/documentation/metal/resource_fundamentals/choosing_a_resource_storage_mode_for_apple_gpus
### Alternatives
_No response_
### Additional context
_No response_
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,enhancement,module: mps | low | Minor |
2,660,920,543 | ollama | The fine tuned codegemma model exhibits abnormal performance | ### What is the issue?
I downloaded the codegemma and codellama models from Hugging Face and fine-tuned them using llama factory. After importing the fine-tuned models into Ollama, codellama works normally, while the codegemma model seems not to have learned the knowledge from the fine-tuning dataset. However, the fine-tuned codegemma model works normally when loaded back into llama factory. I have made multiple modifications to the Modelfile when creating the codegemma model, but it has not helped. What could be the reason, and how can I resolve it? Thank you.
ollama:0.4.1
llama factory:0.8.3
codegemma:https://huggingface.co/google/codegemma-7b
codellama:https://huggingface.co/codellama/CodeLlama-7b-hf
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1 | bug,needs more info | low | Major |
2,660,949,059 | vscode | Have a way to interact with things in the account menu via the Command Palette | In case the Account menu is hidden, we should have an alternative way to interact with accounts | feature-request,authentication | low | Minor |
2,660,950,757 | deno | fmt: support formatting css inside styled-tagged template literal | Input script:
```js
const Bar = styled.div`
margin: 1px;
padding: 2px;
`;
```
Prettier output:
```js
const Bar = styled.div`
margin: 1px;
padding: 2px;
`;
```
`deno fmt` output (no change):
```js
const Bar = styled.div`
margin: 1px;
padding: 2px;
`;
```
Deno now has a builtin CSS formatter. I think `deno fmt` should format the CSS code inside tagged template literals. (Prettier seems to detect the embedded language from the tag expression.)
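The tag-based detection Prettier appears to use could look roughly like this (hypothetical sketch, not Prettier's or Deno's actual code):

```javascript
// Hypothetical sketch: infer the embedded language of a tagged template
// literal from its tag expression, similar to what Prettier appears to do.
function embeddedLanguageForTag(tag) {
  // `styled.div`, `styled(Button)`, `css` -> format the template body as CSS
  if (/^styled([.(]|$)/.test(tag) || tag === "css") return "css";
  return null; // unknown tag: leave the template body untouched
}
```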
---
Note: This issue comes from the user interview with some enterprise tech team. This is the blocker for them to use `deno fmt` for their frontend code base. | upstream,suggestion,deno fmt | low | Minor |
2,660,972,116 | go | net: TestUDPServer/10 failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestUDPServer/10"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731230418610381825)):
=== RUN TestUDPServer/10
server_test.go:258: udp [::ffff:0.0.0.0]:0<-127.0.0.1
server_test.go:322: server: read udp [::]:64467: i/o timeout
server_test.go:315: client: read udp4 127.0.0.1:52418: i/o timeout
--- FAIL: TestUDPServer/10 (3600.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,661,026,779 | godot | Can't have a null resource in import defaults of EditorImportPlugin | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Windows 10, Vulkan forward +, Nvidia 3070
### Issue description
I'm trying to add an optional resource that can be specified in an importer as a preset to get around the shortcomings of the current preset system for imports mentioned here: https://github.com/godotengine/godot-proposals/issues/8350
Unfortunately, I cannot add an optional preset resource, as it seems impossible to have a resource type in the importer that can be set to null.

In this example, "preset_not_null" behaves as a resource and the editor allows users to specify a different resource or create a new one as normal. "preset_null", however just displays "<null>" and clicking on the box does nothing, even though the hints are set up exactly the same.
If you right click and clear the preset_not_null, it will simply revert back to the default value.
I've also tried setting a "default_value" of Resource, which appears to work, until the .import file is read again, at which point there is an error parsing it and the import options are completely gone:

The errors are
* Class 'GDScriptNativeClass' or its base class cannot be instantiated.
* core\io\config_file.cpp:304 - ConfigFile parse error at res://levels/test_bspx.bsp.import:14: Can't instantiate Object() of type: GDScriptNativeClass.
Not setting the default_value isn't an option, as that errors out.
### Steps to reproduce
Create an importer with a resource type in the importer options defaults, ex:
```gdscript
func _get_import_options(_path : String, preset_index : int):
match preset_index:
Presets.DEFAULT:
return [
{
"name" : "preset_not_null",
"default_value" : preload("res://addons/whatever_import/preset_example.tres"),
"property_hint" : PROPERTY_HINT_RESOURCE_TYPE,
"hint_string" : "WhateverImportPreset"
},
{
"name" : "preset_null",
"default_value" : null,
"property_hint" : PROPERTY_HINT_RESOURCE_TYPE,
"hint_string" : "WhateverImportPreset"
},
]
_:
return []
```
Note that if the default value is set to null, the property cannot be used as a resource. If a resource is added by default, setting the resource to null will cause the resource to go back to the default value.
### Minimal reproduction project (MRP)
[import_resource_null.zip](https://github.com/user-attachments/files/17770827/import_resource_null.zip)
| bug,topic:editor,topic:import | low | Critical |
2,661,032,289 | vscode | Vscode doesn't respect the login shell on MacOS |
Type: <b>Bug</b>
1. install and config a miniconda.
2. `brew install xonsh`
3. add `/opt/homebrew/bin/xonsh` to `/etc/shells`
4. on conda base env, run `pip install xonsh`
5. add `/opt/homebrew/Caskroom/miniconda/base/bin/xonsh` to `/etc/shells`
6. `chsh -s /opt/homebrew/Caskroom/miniconda/base/bin/xonsh` to change the login shell
7. reboot
8. open iterm2. the shell is `/opt/homebrew/bin/xonsh`, which shows that the login shell is changed.
9. open vscode. open integrated terminal. the terminal looks like a "xonsh", but actually it is `/opt/homebrew/bin/xonsh` instead of the login shell `/opt/homebrew/Caskroom/miniconda/base/bin/xonsh`
And some tries:
1. I try to `chsh -s /bin/bash`, reboot, and `chsh -s /opt/homebrew/Caskroom/miniconda/base/bin/xonsh`. no help.
2. I try to `brew uninstall xonsh`. Then I open vscode terminal. It exits quickly. But the second time I open vscode terminal, It works.
3. I try to remove `/opt/homebrew/bin/xonsh` from `/etc/shells`, reboot. It works.
It seems like VS Code relies heavily on /etc/shells to determine which shell to start.
By the way, on macOS, when I change the login shell, VS Code requires a reboot to notice that the login shell has changed. This bothers me only a little because I can accept a reboot. But this time it is a real bug.
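The /etc/shells dependency can be sketched as a simple membership check (hypothetical, not VS Code's actual implementation):

```javascript
// Hypothetical sketch of validating a shell against /etc/shells:
// only paths listed in the file (ignoring blanks and comments) are accepted.
function isListedShell(shellPath, etcShellsContent) {
  return etcShellsContent
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith("#"))
    .includes(shellPath);
}
```

If the real lookup matched by trailing binary name rather than the full path, having two `xonsh` entries in /etc/shells would explain the wrong binary being picked, which fits the behavior above.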
---
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Darwin arm64 24.0.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|118, 39, 15|
|Memory (System)|32.00GB (19.20GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (28)</summary>
Extension|Author (truncated)|Version
---|---|---
better-comments|aar|3.0.2
gitlens|eam|16.0.0
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
shell-format|fox|7.2.5
copilot|Git|1.245.0
copilot-chat|Git|0.22.2
vsc-python-indent|Kev|1.18.0
vscode-color-identifiers-mode|Mat|1.3.0
vscode-language-pack-zh-hans|MS-|1.95.2024103009
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.12.0
flake8|ms-|2023.10.0
isort|ms-|2023.13.13171013
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.2
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.115.1
hexeditor|ms-|1.11.1
remote-explorer|ms-|0.4.3
material-icon-theme|PKi|5.14.1
datetime|rid|2.2.2
code-spell-checker|str|3.0.1
sort-lines|Tyr|1.12.0
</details>
<!-- generated by issue reporter --> | bug,macos,linux,confirmation-pending,terminal-profiles | low | Critical |
2,661,044,776 | godot | Disable V-Sync mode, memory leak occurs when running the project. | ### Tested versions
- Reprotducible in Godot v4.3.stable.mono/Godot v4.3.stable
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 32.0.15.6590) - 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 Threads)
### Issue description
Version: Godot v4.3.stable.mono / Godot v4.3.stable
Issue Description: When I set Project -> Project Settings -> Display -> Window -> V-Sync -> Disabled, a memory leak occurs after running the project, with a rate of approximately 2MB every 5 seconds. This bug also occurs even when there is only a single Node2D in the scene. Please refer to the demonstration video for more details.
[video](https://youtu.be/cLDg1aUg20g)
### Steps to reproduce
Steps to reproduce the issue:
1. Go to `Project -> Project Settings -> Display -> Window -> V-Sync -> Disabled`
2. Run any project
3. Observe the memory usage of the process
### Minimal reproduction project (MRP)
"N/A" | bug,topic:rendering | low | Critical |
2,661,060,478 | next.js | Unexpected CSS Module Ordering in Dev/Prod When Using Tree-Shaking | ### Link to the code that reproduces this issue
https://github.com/jantimon/reproduction-webpack-css-order
### To Reproduce
Clone the repository and checkout the `turbo` branch
pnpm install
pnpm run dev
see that the button is blue (but should be orange)
### Current vs. Expected behavior
While analyzing a CSS ordering problem in our monorepo, I traced it down to an interesting combination of module graph building and tree-shaking. The core of the issue appears to be in how the module graph handles CSS imports when `sideEffects: false` is set (or `sideEffects: ["*.css"]`
Looking at webpack's buildChunkGraph.js (https://github.com/webpack/webpack/blob/5e21745e98eb90a029e1f5374d4e4ac338fbe7c7/lib/buildChunkGraph.js#L683-L708), I found that the module traversal order changes once webpack is able to remove a barrel file.
That’s quite a bad DX for most developers because it means that the CSS order changes can be caused by JavaScript refactoring that seems completely unrelated to styles
Here's a concrete example from the reproduction - changing from:
```ts
import { CarouselButton } from '@segments/carousel';
```
to:
```ts
import { CarouselButton } from '@segments/carousel/buttons';
```
can unexpectedly reorder CSS across the entire application. This means that code cleanup like splitting up barrel files or moving components between packages can silently break styles in seemingly unrelated components.
I've done some testing across different bundlers to understand how they handle this scenario:
| Bundler | Consistent CSS Order | CSS Treeshaking | CSS Output |
|---------|-----------------|-----------------|------------|
| webpack | ❌ Order depends on barrel files & sideEffects | ✅ Excludes unused.module.css | [main.css](https://github.com/jantimon/reproduction-webpack-css-order/blob/side-effect/%40applications/base/dist/main.css) |
| vite | ✅ Button → Teaser → TeaserButton | ✅ Excludes unused.module.css | [index.css](https://github.com/jantimon/reproduction-webpack-css-order/blob/vite-with-side-effect/%40applications/base/dist/index.css) |
| parcel | ✅ Button → Teaser → TeaserButton | ✅ Excludes unused.module.css | [index.5ff2b6c6.css](https://github.com/jantimon/reproduction-webpack-css-order/blob/parcel-with-side-effect/%40applications/base/dist/index.5ff2b6c6.css) |
| turbopack | ❌ Order depends on barrel files & sideEffects | N/A (no production build tested) | N/A |
What's interesting is that both Vite and Parcel manage to maintain consistent CSS ordering while still being able to tree-shake. So we might be able to find a middle ground that keeps the benefits of tree-shaking and allows a consistent CSS order
To better understand the issue, I've created a minimal reproduction: https://github.com/jantimon/reproduction-webpack-css-order
The tricky part is that this only manifests when several conditions align:
```ts
// @libraries/teaser/src/teaser.ts
import { CarouselButton } from '@segments/carousel'; // via barrel file
import styles from './teaser.module.css';
```
When building with `sideEffects: false`, the CSS order becomes unpredictable. Here's the output:
```css
.hDE5PT5V3QGAPX9o9iZl { ... }
.yqrxTjAG22vkATE1VjR9 { background-color: orange; } /* Should be last */
.R_y25aX9lTSLQtlxA1c9 { ... }
```
Here are the module graphs for the 3 scenarios.
The postOrder is the index which is used for the css order:
[`sideEffects: true` example](https://github.com/jantimon/reproduction-webpack-css-order/tree/turbo-side-effects-true/%40applications/base)
```mermaid
graph TD
subgraph "sideEffects: true ✅"
A2["@applications/base/src/index.ts preOrder: 0, postOrder: 8"]
B2["@libraries/teaser/src/index.ts preOrder: 1, postOrder: 7"]
C2["@libraries/teaser/src/teaser.ts preOrder: 2, postOrder: 6"]
D2["@segments/carousel/src/index.ts preOrder: 3, postOrder: 3"]
E2["@segments/carousel/src/buttons.ts preOrder: 4, postOrder: 2"]
F2["@segments/carousel/src/button.module.css preOrder: 5, postOrder: 1"]
G2["button.module.css|0|||}} preOrder: 6, postOrder: 0"]
H2["@libraries/teaser/src/teaser.module.css preOrder: 7, postOrder: 5"]
I2["teaser.module.css|0|||}} preOrder: 8, postOrder: 4"]
A2 --> B2
B2 --> C2
C2 --> D2
D2 --> E2
E2 --> F2
F2 --> G2
C2 --> H2
H2 --> I2
style A2 fill:#0a0a4a,stroke:#333
style F2 fill:#294b51,stroke:#333
style G2 fill:#294b51,stroke:#333
style H2 fill:#294b51,stroke:#333
style I2 fill:#294b51,stroke:#333
end
```
[no barrel example](https://github.com/jantimon/reproduction-webpack-css-order/tree/turbo-no-barrel/%40applications/base)
```mermaid
graph TD
subgraph "No Barrel ✅"
A3["@applications/base/src/index.ts preOrder: 0, postOrder: 6"]
B3["@libraries/teaser/src/teaser.ts preOrder: 1, postOrder: 5"]
E3["@segments/carousel/src/buttons.ts preOrder: 2, postOrder: 2"]
F3["@segments/carousel/src/button.module.css preOrder: 3, postOrder: 1"]
G3["button.module.css|0|||}} preOrder: 4, postOrder: 0"]
H3["@libraries/teaser/src/teaser.module.css preOrder: 5, postOrder: 4"]
I3["teaser.module.css|0|||}} preOrder: 6, postOrder: 3"]
A3 --> B3
B3 --> E3
E3 --> F3
F3 --> G3
B3 --> H3
H3 --> I3
style A3 fill:#0a0a4a,stroke:#333
style F3 fill:#294b51,stroke:#333
style G3 fill:#294b51,stroke:#333
style H3 fill:#294b51,stroke:#333
style I3 fill:#294b51,stroke:#333
end
```
[`sideEffects:false` example](https://github.com/jantimon/reproduction-webpack-css-order/tree/turbo/%40applications/base)
```mermaid
graph TD
subgraph "sideEffects: false ❌"
A1["@applications/base/src/index.ts preOrder: 0, postOrder: 6"]
B1["@libraries/teaser/src/teaser.ts preOrder: 1, postOrder: 5"]
C1["@libraries/teaser/src/teaser.module.css preOrder: 2, postOrder: 1"]
D1["teaser.module.css|0|||}} preOrder: 3, postOrder: 0"]
E1["@segments/carousel/src/buttons.ts preOrder: 4, postOrder: 4"]
F1["@segments/carousel/src/button.module.css preOrder: 5, postOrder: 3"]
G1["button.module.css|0|||}} preOrder: 6, postOrder: 2"]
A1 --> B1
B1 --> C1
C1 --> D1
B1 --> E1
E1 --> F1
F1 --> G1
style A1 fill:#0a0a4a,stroke:#333
style C1 fill:#294b51,stroke:#333
style D1 fill:#294b51,stroke:#333
style F1 fill:#294b51,stroke:#333
style G1 fill:#294b51,stroke:#333
end
```
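The pre/postOrder numbers shown in the graphs come from a depth-first traversal; a simplified sketch (not webpack's actual implementation) of how such numbering is assigned:

```javascript
// Simplified sketch: assign DFS pre-order and post-order indices to a
// module graph; the post-order index is what drives CSS emission order.
function orderModules(graph, entry) {
  const pre = new Map(), post = new Map();
  let preIdx = 0, postIdx = 0;
  const visit = (mod) => {
    if (pre.has(mod)) return; // already visited
    pre.set(mod, preIdx++);
    for (const dep of graph[mod] ?? []) visit(dep);
    post.set(mod, postIdx++);
  };
  visit(entry);
  return { pre, post };
}
```

Removing a barrel module from `graph` changes which branch is descended first, which reshuffles the downstream post-order indices; this is consistent with the reordering shown in the graphs above.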
For me common suggestions like "just use Tailwind" or "increase specificity" miss the point - vanilla CSS with simple, understandable selectors should be an option. The unpredictable ordering creates harder to read code where developers need to constantly guard against CSS specificity bugs using `&&&` or `!important`.
The reproduction repo includes branches for different scenarios and bundlers, making it easy to verify the behavior:
- webpack with barrel files: https://github.com/jantimon/reproduction-webpack-css-order/tree/main
- webpack without barrel files: https://github.com/jantimon/reproduction-webpack-css-order/tree/no-barell
- For comparison vite & parcel branches
### Provide environment information
```bash
any
```
### Which area(s) are affected? (Select all that apply)
Turbopack, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
Related webpack issue:
https://github.com/webpack/webpack/issues/18961 | linear: next,linear: turbopack,CSS | low | Critical |
2,661,082,158 | excalidraw | Support converting line to arrow | I have made some lines, rotated and aligned them nicely, but now I realize I wanted them to be arrows instead. Could an option to directly convert the line to an arrow be added, to avoid having to redraw? Maybe the arrow heads option could be added to lines as well instead of a dedicated option to convert?
(Tried looking for this everywhere, but couldn't figure it out, apologies if this is already supported!) | enhancement,UX/UI | low | Minor |
2,661,152,296 | tauri | [feat] Are there plans to support hot updates? | ### Describe the problem
Hot updates are a very common way to update apps on mobile devices. It is often unwise to ask users to re-download the entire app to fix a minor issue. Hot updates have many advantages over [full updates](https://v2.tauri.app/plugin/updater).
### Describe the solution you'd like
This functionality is similar to [eas-update](https://docs.expo.dev/eas-update/introduction) or [code-push](https://learn.microsoft.com/en-us/appcenter/distribution/codepush).
### Alternatives considered
_No response_
### Additional context
Almost all technology stacks that use `js|html|css` for application development on mobile devices support this function. | type: feature request | low | Minor |
2,661,250,366 | kubernetes | The kube-apiserver (with 3 etcd endpoints via --etcd-servers) still connects to the unhealthy etcd member when we shut down one master node (which has one etcd static pod) | ### What happened?
1. We configure the kube-apiserver to connect to etcd with three members:
```
"etcd-servers": [
"https://10.255.69.14:2379",
"https://10.255.69.15:2379",
"https://10.255.69.16:2379",
"https://localhost:2379"
],
```
2. Shut down the master3 node(10.255.69.16), which has both etcd and apiserver static pods
3. We found that the master1 (10.255.69.14) and master2 (10.255.69.15) apiservers still connect to the unhealthy 10.255.69.16 etcd endpoint
### What did you expect to happen?
When we shut down one etcd member, the kube-apiserver should quickly switch to a healthy etcd endpoint.
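The expected failover behavior can be sketched as follows (illustrative Python only; `first_healthy` and the `probe` callback are hypothetical stand-ins, not the apiserver's actual gRPC balancer logic):

```python
# Sketch of the failover we expect: when an etcd endpoint stops responding,
# the client should skip it and serve requests from a healthy member.
# The probe callback is a hypothetical stand-in for a real health check.

ENDPOINTS = [
    "10.255.69.14:2379",
    "10.255.69.15:2379",
    "10.255.69.16:2379",  # member on the node that was shut down
]

def first_healthy(endpoints, probe):
    """Return the first endpoint whose probe succeeds, skipping dead ones."""
    for ep in endpoints:
        if probe(ep):
            return ep
    raise RuntimeError("no healthy etcd endpoint available")

# Simulate the scenario from this report: 10.255.69.16 is down.
down = {"10.255.69.16:2379"}
print(first_healthy(ENDPOINTS, lambda ep: ep not in down))
```

In this report, the apiserver instead keeps retrying the dead 10.255.69.16 endpoint for several seconds (see the logs below).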
### How can we reproduce it (as minimally and precisely as possible)?
1. Configure the apiserver to connect to etcd via `--etcd-servers` with 3 members
2. Shut down one master node (power off, not reboot)
### Anything else we need to know?
- kube-apiserver logs show as follows:
```
apiserver
2024-11-15T16:08:24.340356305+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:24.340Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc01402e000/10.255.69.14:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
2024-11-15T16:08:24.340460558+08:00 stderr F I1115 08:08:24.340378 20 healthz.go:257] etcd check failed: readyz
2024-11-15T16:08:24.340460558+08:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded
2024-11-15T16:08:24.340541843+08:00 stderr F E1115 08:08:24.340479 20 timeout.go:141] post-timeout activity - time-elapsed: 1.001696115s, GET "/readyz" result: <nil>
2024-11-15T16:08:27.369028487+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:27.368Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00f856000/10.255.69.14:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
2024-11-15T16:08:27.369028487+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:27.368Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00f856000/10.255.69.14:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
2024-11-15T16:08:28.614097370+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:28.613Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00
20b5880/10.255.69.14:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
2024-11-15T16:08:28.614097370+08:00 stderr F I1115 08:08:28.613845 20 trace.go:205] Trace[523486837]: "GuaranteedUpdate etcd3" audit-id:dbab0d30-2e4f-4604-b367-c84f000f1f86,key:/configmaps/kube-system/kube-controller-manag
er,type:*core.ConfigMap (15-Nov-2024 08:08:23.614) (total time: 4999ms):
2024-11-15T16:08:28.614097370+08:00 stderr F Trace[523486837]: ---"Txn call finished" err:context deadline exceeded 4998ms (08:08:28.613)
2024-11-15T16:08:28.614097370+08:00 stderr F Trace[523486837]: [4.999504982s] [4.999504982s] END
2024-11-15T16:08:28.616421774+08:00 stderr F I1115 08:08:28.616312 20 trace.go:205] Trace[1123415286]: "Update" url:/api/v1/namespaces/kube-system/configmaps/kube-controller-manager,user-agent:kube-controller-manager/v1.25.8 (linux/amd64) kubernetes/594da2b/leader-election,audit-id:dbab0d30-2e4f-4604-b367-c84f000f1f86,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Nov-2024 08:08:23.614) (total time: 5002ms):
2024-11-15T16:08:28.616421774+08:00 stderr F Trace[1123415286]: ---"Write to database call finished" len:535,err:Timeout: request did not complete within requested timeout - context deadline exceeded 4999ms (08:08:28.613)
2024-11-15T16:08:28.616421774+08:00 stderr F Trace[1123415286]: [5.002050748s] [5.002050748s] END
2024-11-15T16:08:28.616890678+08:00 stderr F E1115 08:08:28.616631 20 timeout.go:141] post-timeout activity - time-elapsed: 2.794599ms, PUT "/api/v1/namespaces/kube-system/configmaps/kube-controller-manager" result: <nil>
2024-11-15T16:08:30.058932458+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:30.058Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc010bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
2024-11-15T16:08:32.011392068+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.011Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00bd06fc0/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:50894->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012062393+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.011Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0a4efcc40/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:44404->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012211954+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.011Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc010bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012211954+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.012Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc010bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012226796+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.012Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc010bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012226796+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.012Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc010bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012226796+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:32.012Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc010bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.012226796+08:00 stderr F I1115 08:08:32.012185 20 trace.go:205] Trace[1958868955]: "GuaranteedUpdate etcd3" audit-id:28031557-60b2-4f4f-ad50-aa00b6244ed5,key:/leases/gtm/cell.gtm.io,type:*coordination.Lease (15-Nov-2024 08:08:25.595) (total time: 6416ms):
2024-11-15T16:08:32.012513501+08:00 stderr F Trace[891349870]: ---"Txn call finished" err:rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out 9758ms (08:08:32.012)
2024-11-15T16:08:32.012513501+08:00 stderr F Trace[891349870]: [9.75949209s] [9.75949209s] END
2024-11-15T16:08:32.012513501+08:00 stderr F E1115 08:08:32.012404 20 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc0bc1a7f98)}: rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out
2024-11-15T16:08:32.012736479+08:00 stderr F I1115 08:08:32.012514 20 trace.go:205] Trace[859168237]: "GuaranteedUpdate etcd3" audit-id:71717ed4-0c00-458a-8897-0d9df2388bfb,key:/leases/machine-config-operator/machine-config,type:*coordination.Lease (15-Nov-2024 08:08:23.523) (total time: 8489ms):
2024-11-15T16:08:32.012736479+08:00 stderr F Trace[859168237]: ---"Txn call finished" err:rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed ou
t 8488ms (08:08:32.012)
2024-11-15T16:08:32.012736479+08:00 stderr F Trace[859168237]: [8.48927931s] [8.48927931s] END
0bd0e00/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:32.013502285+08:00 stderr F I1115 08:08:32.012231 20 trace.go:205] Trace[2108509803]: "GuaranteedUpdate etcd3" audit-id:3635e940-38e3-41ca-987a-926cc584384a,key:/leases/envoy-gateway-system/5b9825d2.gateway.envoyproxy.io,type:*coordination.Lease (15-Nov-2024 08:08:30.258) (total time: 1753ms):
2024-11-15T16:08:32.013502285+08:00 stderr F Trace[2108509803]: ---"Txn call finished" err:rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:45274->10.255.69.16:2379: read: connection timed out 1752ms (08:08:32.012)
2024-11-15T16:08:32.013502285+08:00 stderr F Trace[2108509803]: [1.753630806s] [1.753630806s] END
2024-11-15T16:08:34.699497756+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:34.699Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00f16d880/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:50918->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:34.699497756+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:34.699Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc007d40000/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.255.69.15:47940->10.255.69.16:2379: read: connection timed out"}
2024-11-15T16:08:35.070903277+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:35.070Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0a2a90700/10.255.69.14:2379","attempt":0,"error":"rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout"}
2024-11-15T16:08:35.082401449+08:00 stderr F W1115 08:08:35.082223 20 logging.go:59] [core] [Channel #254 SubChannel #257] grpc: addrConn.createTransport failed to connect to {
2024-11-15T16:08:35.082401449+08:00 stderr F "Addr": "10.255.69.16:2379",
2024-11-15T16:08:35.082401449+08:00 stderr F "ServerName": "10.255.69.16",
2024-11-15T16:08:35.082401449+08:00 stderr F "Attributes": null,
2024-11-15T16:08:35.082401449+08:00 stderr F "BalancerAttributes": null,
2024-11-15T16:08:35.082401449+08:00 stderr F "Type": 0,
2024-11-15T16:08:35.082401449+08:00 stderr F "Metadata": null
2024-11-15T16:08:35.082401449+08:00 stderr F }. Err: connection error: desc = "transport: Error while dialing dial tcp 10.255.69.16:2379: connect: connection timed out"
2024-11-15T16:08:35.082401449+08:00 stderr F W1115 08:08:35.082242 20 logging.go:59] [core] [Channel #977 SubChannel #980] grpc: addrConn.createTransport failed to connect to {
2024-11-15T16:08:35.082401449+08:00 stderr F "Addr": "10.255.69.16:2379",
2024-11-15T16:08:35.082401449+08:00 stderr F "ServerName": "10.255.69.16",
2024-11-15T16:08:35.082401449+08:00 stderr F "Attributes": null,
2024-11-15T16:08:35.082401449+08:00 stderr F "BalancerAttributes": null,
2024-11-15T16:08:35.082401449+08:00 stderr F "Type": 0,
2024-11-15T16:08:35.082401449+08:00 stderr F "Metadata": null
2024-11-15T16:08:35.082401449+08:00 stderr F }. Err: connection error: desc = "transport: Error while dialing dial tcp 10.255.69.16:2379: connect: connection timed out"
```
- The etcd logs show as follows:
```
etcd
2024-11-15T16:00:03.089125997+08:00 stderr F {"level":"info","ts":"2024-11-15T08:00:03.088971Z","caller":"fileutil/purge.go:85","msg":"purged","path":"/var/lib/etcd/member/wal/00000000000014a6-0000000005fdfcae.wal"}
2024-11-15T16:01:04.261555789+08:00 stderr F {"level":"info","ts":"2024-11-15T08:01:04.261443Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":91749871}
2024-11-15T16:01:05.254012681+08:00 stderr F {"level":"info","ts":"2024-11-15T08:01:05.253911Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":91749871,"took":"966.045196ms","hash":576483144}
2024-11-15T16:01:05.254012681+08:00 stderr F {"level":"info","ts":"2024-11-15T08:01:05.253986Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":576483144,"revision":91749871,"compact-revision":91733961}
2024-11-15T16:04:09.827516612+08:00 stderr F {"level":"info","ts":"2024-11-15T08:04:09.827399Z","caller":"wal/wal.go:785","msg":"created a new WAL segment","path":"/var/lib/etcd/member/wal/00000000000014ac-0000000005ff8643.wal"
}
2024-11-15T16:04:33.101219829+08:00 stderr F {"level":"info","ts":"2024-11-15T08:04:33.101107Z","caller":"fileutil/purge.go:85","msg":"purged","path":"/var/lib/etcd/member/wal/00000000000014a7-0000000005fe3968.wal"}
2024-11-15T16:06:04.278242322+08:00 stderr F {"level":"info","ts":"2024-11-15T08:06:04.278114Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":91765934}
2024-11-15T16:06:05.109317576+08:00 stderr F {"level":"info","ts":"2024-11-15T08:06:05.109193Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":91765934,"took":"803.717455ms","h
ash":163059895}
2024-11-15T16:06:05.109317576+08:00 stderr F {"level":"info","ts":"2024-11-15T08:06:05.109259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":163059895,"revision":91765934,"compact-revision":91749871}
2024-11-15T16:08:15.432491578+08:00 stderr F 2024/11/15 08:08:15 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2024-11-15T16:08:28.472483552+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:28.472346Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f7585d3c6edd2214","rtt":"8.629741ms","error":"dial tcp 10.255.69.16:2380: i/o timeout"}
2024-11-15T16:08:33.472610748+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:33.472502Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f7585d3c6edd2214","rtt":"8.629741ms","error":"dial tcp 10.255.69.16:2380: i/o timeout"}
2024-11-15T16:08:38.473043376+08:00 stderr F {"level":"warn","ts":"2024-11-15T08:08:38.472799Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f7585d3c6edd2214","rtt":"8.629741ms","error":"dial tcp 10.255.69.16:2380: i/o timeout"}
```
1. As we can see, 10.255.69.16 is reported as unhealthy at 2024-11-15T08:08:28.472346Z.
2. But the apiserver still tries to connect to this unhealthy member at 2024-11-15T16:08:32.
### Kubernetes version
<details>
1.25.8
</details>
### Cloud provider
<details>
no
</details>
### OS version
<details>
Linux kernel 5.15.131
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Critical |
2,661,252,783 | rust | ICE: `is not a pointer or reference type` | <!--
[31mICE[0m: Rustc ./a.rs '-Zvalidate-mir -Zinline-mir -Zinline-mir-threshold=300 -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: compiler/rustc_middle/src/mir/tcx.rs:294:41: Type std::slice::Iter<'{erased}, u8> is not a pointer or reference type', 'error: internal compiler error: compiler/rustc_middle/src/mir/tcx.rs:294:41: Type std::slice::Iter<'{erased}, u8> is not a pointer or reference type'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Zvalidate-mir -Zinline-mir -Zinline-mir-threshold=300
trait Foo: Sized {
fn foo(self) {}
}
trait Bar: Sized {
fn bar(self) {}
}
struct S;
impl<'l> Foo for &'l S {}
impl<T: Foo> Bar for T {
fn bar() {
let _ = "Hello".bytes().nth(3);
}
}
fn main() {
let s = S;
s.foo();
s.bar();
}
````
original:
````rust
//@ run-pass
trait Foo: Sized {
fn foo(self) {}
}
trait Bar: Sized {
fn bar(self) {}
}
struct S;
impl<'l> Foo for &'l S {}
impl<T: Foo> Bar for T {
//! See <https://rust-lang.github.io/rust-clippy/master/index.html>
#[expect(clippy::almost_swapped)]
fn foo() {
let mut a = 0;
let mut b = 9;
a = b;
b = a;
}
#[expect(clippy::bytes_nth)]
fn bar() {
let _ = "Hello".bytes().nth(3);
}
#[expect(clippy::if_same_then_else)]
fn baz() {
let _ = if true {
42
} else {
42
};
}
#[expect(clippy::logic_bug)]
fn burger() {
let a = false;
let b = true;
if a && b || a {}
}
}
fn main() {
let s = S;
s.foo();
(&s).bar();
s.bar();
}
````
Version information
````
rustc 1.84.0-nightly (251dc8ad8 2024-11-15)
binary: rustc
commit-hash: 251dc8ad84492c792a7600d8c5fef2ec868a36a7
commit-date: 2024-11-15
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/251dc8ad84492c792a7600d8c5fef2ec868a36a7/compiler/rustc_middle/src/mir/tcx.rs#L288-L300
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zvalidate-mir -Zinline-mir -Zinline-mir-threshold=300`
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0186]: method `bar` has a `self` declaration in the trait, but not in the impl
--> /tmp/icemaker_global_tempdir.WRkiEaQifFT5/rustc_testrunner_tmpdir_reporting.09fRBzC5BHPB/mvce.rs:14:5
|
6 | fn bar(self) {}
| ------------ `self` used in trait
...
14 | fn bar() {
| ^^^^^^^^ expected `self` in impl
error: internal compiler error: compiler/rustc_middle/src/mir/tcx.rs:294:41: Type std::slice::Iter<'{erased}, u8> is not a pointer or reference type
thread 'rustc' panicked at compiler/rustc_middle/src/mir/tcx.rs:294:41:
Box<dyn Any>
stack backtrace:
0: 0x792f57a5a3ba - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h3e8890c320687803
1: 0x792f5820414a - core::fmt::write::h8e02e323e721d5d3
2: 0x792f5962cc51 - std::io::Write::write_fmt::hfa5fc2d5ad51eab4
3: 0x792f57a5a212 - std::sys::backtrace::BacktraceLock::print::hef9ddff43c45c466
4: 0x792f57a5c716 - std::panicking::default_hook::{{closure}}::he4ae1ef11715c038
5: 0x792f57a5c560 - std::panicking::default_hook::h4571154760051e3a
6: 0x792f56ae4281 - std[575dabc3fc23637d]::panicking::update_hook::<alloc[488fcebc54bee2fb]::boxed::Box<rustc_driver_impl[530468506af41d6d]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x792f57a5ce28 - std::panicking::rust_panic_with_hook::h99e29fee3fbc2974
8: 0x792f56b1e5d1 - std[575dabc3fc23637d]::panicking::begin_panic::<rustc_errors[6fe58dffe56bd7e6]::ExplicitBug>::{closure#0}
9: 0x792f56b115a6 - std[575dabc3fc23637d]::sys::backtrace::__rust_end_short_backtrace::<std[575dabc3fc23637d]::panicking::begin_panic<rustc_errors[6fe58dffe56bd7e6]::ExplicitBug>::{closure#0}, !>
10: 0x792f56b0ce1d - std[575dabc3fc23637d]::panicking::begin_panic::<rustc_errors[6fe58dffe56bd7e6]::ExplicitBug>
11: 0x792f56b282e1 - <rustc_errors[6fe58dffe56bd7e6]::diagnostic::BugAbort as rustc_errors[6fe58dffe56bd7e6]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x792f571a86d3 - rustc_middle[c94631e3827a547d]::util::bug::opt_span_bug_fmt::<rustc_span[9778d555244491c7]::span_encoding::Span>::{closure#0}
13: 0x792f5718ef1a - rustc_middle[c94631e3827a547d]::ty::context::tls::with_opt::<rustc_middle[c94631e3827a547d]::util::bug::opt_span_bug_fmt<rustc_span[9778d555244491c7]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x792f5718edab - rustc_middle[c94631e3827a547d]::ty::context::tls::with_context_opt::<rustc_middle[c94631e3827a547d]::ty::context::tls::with_opt<rustc_middle[c94631e3827a547d]::util::bug::opt_span_bug_fmt<rustc_span[9778d555244491c7]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x792f5555b280 - rustc_middle[c94631e3827a547d]::util::bug::bug_fmt
16: 0x792f59c85e9a - <rustc_middle[c94631e3827a547d]::ty::Ty>::pointee_metadata_ty_or_projection.cold
17: 0x792f54f3ac6a - rustc_mir_transform[b6a0096f646ee25e]::validate::validate_types
18: 0x792f59238c1e - <rustc_mir_transform[b6a0096f646ee25e]::validate::Validator as rustc_mir_transform[b6a0096f646ee25e]::pass_manager::MirPass>::run_pass
19: 0x792f563d176f - rustc_mir_transform[b6a0096f646ee25e]::pass_manager::validate_body
20: 0x792f58206585 - rustc_mir_transform[b6a0096f646ee25e]::pass_manager::run_passes_inner
21: 0x792f5873b1c0 - rustc_mir_transform[b6a0096f646ee25e]::optimized_mir
22: 0x792f58739a9d - rustc_query_impl[bf1c9f3baad9eabf]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[bf1c9f3baad9eabf]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 8usize]>>
23: 0x792f585f936a - rustc_query_system[d5846f7a9cb2c23f]::query::plumbing::try_execute_query::<rustc_query_impl[bf1c9f3baad9eabf]::DynamicConfig<rustc_query_system[d5846f7a9cb2c23f]::query::caches::DefIdCache<rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[bf1c9f3baad9eabf]::plumbing::QueryCtxt, false>
24: 0x792f585f891f - rustc_query_impl[bf1c9f3baad9eabf]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
25: 0x792f553aa542 - <rustc_middle[c94631e3827a547d]::ty::context::TyCtxt>::instance_mir
26: 0x792f58746ce4 - rustc_interface[c18496298df4cf52]::passes::run_required_analyses
27: 0x792f5902961e - rustc_interface[c18496298df4cf52]::passes::analysis
28: 0x792f590295ef - rustc_query_impl[bf1c9f3baad9eabf]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[bf1c9f3baad9eabf]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 1usize]>>
29: 0x792f591dee6e - rustc_query_system[d5846f7a9cb2c23f]::query::plumbing::try_execute_query::<rustc_query_impl[bf1c9f3baad9eabf]::DynamicConfig<rustc_query_system[d5846f7a9cb2c23f]::query::caches::SingleCache<rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[bf1c9f3baad9eabf]::plumbing::QueryCtxt, false>
30: 0x792f591deb4e - rustc_query_impl[bf1c9f3baad9eabf]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
31: 0x792f590d963a - rustc_interface[c18496298df4cf52]::interface::run_compiler::<core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>, rustc_driver_impl[530468506af41d6d]::run_compiler::{closure#0}>::{closure#1}
32: 0x792f59132e50 - std[575dabc3fc23637d]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[c18496298df4cf52]::util::run_in_thread_with_globals<rustc_interface[c18496298df4cf52]::util::run_in_thread_pool_with_globals<rustc_interface[c18496298df4cf52]::interface::run_compiler<core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>, rustc_driver_impl[530468506af41d6d]::run_compiler::{closure#0}>::{closure#1}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>
33: 0x792f5913326b - <<std[575dabc3fc23637d]::thread::Builder>::spawn_unchecked_<rustc_interface[c18496298df4cf52]::util::run_in_thread_with_globals<rustc_interface[c18496298df4cf52]::util::run_in_thread_pool_with_globals<rustc_interface[c18496298df4cf52]::interface::run_compiler<core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>, rustc_driver_impl[530468506af41d6d]::run_compiler::{closure#0}>::{closure#1}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#1} as core[678332cb0ee15b78]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
34: 0x792f59133d39 - std::sys::pal::unix::thread::Thread::new::thread_start::h1176f996a4a1b888
35: 0x792f5a9e639d - <unknown>
36: 0x792f5aa6b49c - <unknown>
37: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (251dc8ad8 2024-11-15) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z validate-mir -Z inline-mir -Z inline-mir-threshold=300 -Z dump-mir-dir=dir
query stack during panic:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0186`.
```
</p>
</details>
<!--
query stack:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
-->
| I-ICE,E-needs-test,T-compiler,C-bug,A-mir-opt,-Zvalidate-mir,S-has-mcve | low | Critical |
2,661,271,537 | rust | ICE: `did not expect inference variables here` | <!--
[31mICE[0m: Rustc ./a.rs '' 'error: internal compiler error: compiler/rustc_middle/src/mir/interpret/queries.rs:105:13: did not expect inference variables here', 'error: internal compiler error: compiler/rustc_middle/src/mir/interpret/queries.rs:105:13: did not expect inference variables here'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
trait Owner {
const C<const N: u32>: u32;
}
impl Owner for () {
;
}
fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
fn main() {
take0::<f32, >(());
}
````
original:
````rust
trait Owner {
const C<const N: u32>: u32;
}
impl Owner for () {
const C<const N: u32>: u32 = N;
}
fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
fn main() {
take0::<f32, {Dimension}>(());
}
````
Version information
````
rustc 1.84.0-nightly (251dc8ad8 2024-11-15)
binary: rustc
commit-hash: 251dc8ad84492c792a7600d8c5fef2ec868a36a7
commit-date: 2024-11-15
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/251dc8ad84492c792a7600d8c5fef2ec868a36a7/compiler/rustc_middle/src/mir/interpret/queries.rs#L99-L111
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error: non-item in item list
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:6:5
|
5 | impl Owner for () {
| - item list starts here
6 | ;
| ^ non-item starts here
7 | }
| - item list ends here
error[E0658]: associated const equality is incomplete
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:9:38
|
9 | fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
| ^^^^^^^^^^^^
|
= note: see issue #92827 <https://github.com/rust-lang/rust/issues/92827> for more information
= help: add `#![feature(associated_const_equality)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-15; consider upgrading it if it is out of date
error[E0658]: generic const items are experimental
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:2:12
|
2 | const C<const N: u32>: u32;
| ^^^^^^^^^^^^^^
|
= note: see issue #113521 <https://github.com/rust-lang/rust/issues/113521> for more information
= help: add `#![feature(generic_const_items)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-15; consider upgrading it if it is out of date
error[E0046]: not all trait items implemented, missing: `C`
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:5:1
|
2 | const C<const N: u32>: u32;
| -------------------------- `C` from trait
...
5 | impl Owner for () {
| ^^^^^^^^^^^^^^^^^ missing `C` in implementation
error: the constant `N` is not of type `u32`
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:9:38
|
9 | fn take0<const N: u64>(_: impl Owner<C<N> = { N }>) {}
| ^^^^^^^^^^^^ expected `u32`, found `u64`
|
note: required by a const generic parameter in `Owner::C`
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:2:13
|
2 | const C<const N: u32>: u32;
| ^^^^^^^^^^^^ required by this const generic parameter in `Owner::C`
error[E0747]: type provided when a constant was expected
--> /tmp/icemaker_global_tempdir.LZacjKgHN0Cn/rustc_testrunner_tmpdir_reporting.fVFwgFbJn0St/mvce.rs:12:13
|
12 | take0::<f32, >(());
| ^^^
|
help: if this generic argument was intended as a const parameter, surround it with braces
|
12 | take0::<{ f32 }, >(());
| + +
error: internal compiler error: compiler/rustc_middle/src/mir/interpret/queries.rs:105:13: did not expect inference variables here
thread 'rustc' panicked at compiler/rustc_middle/src/mir/interpret/queries.rs:105:13:
Box<dyn Any>
stack backtrace:
0: 0x767693e5a3ba - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h3e8890c320687803
1: 0x76769460414a - core::fmt::write::h8e02e323e721d5d3
2: 0x767695a2cc51 - std::io::Write::write_fmt::hfa5fc2d5ad51eab4
3: 0x767693e5a212 - std::sys::backtrace::BacktraceLock::print::hef9ddff43c45c466
4: 0x767693e5c716 - std::panicking::default_hook::{{closure}}::he4ae1ef11715c038
5: 0x767693e5c560 - std::panicking::default_hook::h4571154760051e3a
6: 0x767692ee4281 - std[575dabc3fc23637d]::panicking::update_hook::<alloc[488fcebc54bee2fb]::boxed::Box<rustc_driver_impl[530468506af41d6d]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x767693e5ce28 - std::panicking::rust_panic_with_hook::h99e29fee3fbc2974
8: 0x767692f1e5d1 - std[575dabc3fc23637d]::panicking::begin_panic::<rustc_errors[6fe58dffe56bd7e6]::ExplicitBug>::{closure#0}
9: 0x767692f115a6 - std[575dabc3fc23637d]::sys::backtrace::__rust_end_short_backtrace::<std[575dabc3fc23637d]::panicking::begin_panic<rustc_errors[6fe58dffe56bd7e6]::ExplicitBug>::{closure#0}, !>
10: 0x767692f0ce1d - std[575dabc3fc23637d]::panicking::begin_panic::<rustc_errors[6fe58dffe56bd7e6]::ExplicitBug>
11: 0x767692f282e1 - <rustc_errors[6fe58dffe56bd7e6]::diagnostic::BugAbort as rustc_errors[6fe58dffe56bd7e6]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7676935a86d3 - rustc_middle[c94631e3827a547d]::util::bug::opt_span_bug_fmt::<rustc_span[9778d555244491c7]::span_encoding::Span>::{closure#0}
13: 0x76769358ef1a - rustc_middle[c94631e3827a547d]::ty::context::tls::with_opt::<rustc_middle[c94631e3827a547d]::util::bug::opt_span_bug_fmt<rustc_span[9778d555244491c7]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x76769358edab - rustc_middle[c94631e3827a547d]::ty::context::tls::with_context_opt::<rustc_middle[c94631e3827a547d]::ty::context::tls::with_opt<rustc_middle[c94631e3827a547d]::util::bug::opt_span_bug_fmt<rustc_span[9778d555244491c7]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x76769195b280 - rustc_middle[c94631e3827a547d]::util::bug::bug_fmt
16: 0x7676960ab035 - <rustc_middle[c94631e3827a547d]::ty::context::TyCtxt>::const_eval_resolve_for_typeck.cold
17: 0x76769503e581 - rustc_trait_selection[ae5754708a8b3a46]::traits::try_evaluate_const
18: 0x767694f92383 - <rustc_trait_selection[ae5754708a8b3a46]::traits::normalize::AssocTypeNormalizer as rustc_type_ir[83e8cfe6e15c3590]::fold::TypeFolder<rustc_middle[c94631e3827a547d]::ty::context::TyCtxt>>::fold_const
19: 0x7676928336de - <rustc_trait_selection[ae5754708a8b3a46]::traits::normalize::AssocTypeNormalizer>::fold::<rustc_middle[c94631e3827a547d]::ty::Term>
20: 0x767693d1a9bd - rustc_trait_selection[ae5754708a8b3a46]::traits::normalize::normalize_with_depth_to::<rustc_middle[c94631e3827a547d]::ty::Term>
21: 0x767693d0f792 - <rustc_trait_selection[ae5754708a8b3a46]::error_reporting::TypeErrCtxt>::report_fulfillment_error
22: 0x767693cdc889 - <rustc_trait_selection[ae5754708a8b3a46]::error_reporting::TypeErrCtxt>::report_fulfillment_errors
23: 0x767690d25711 - <rustc_hir_typeck[37bfb0076054673b]::fn_ctxt::FnCtxt>::confirm_builtin_call
24: 0x76769536b6d8 - <rustc_hir_typeck[37bfb0076054673b]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
25: 0x767695365c35 - <rustc_hir_typeck[37bfb0076054673b]::fn_ctxt::FnCtxt>::check_expr_block
26: 0x76769536bfba - <rustc_hir_typeck[37bfb0076054673b]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
27: 0x76769485789c - rustc_hir_typeck[37bfb0076054673b]::check::check_fn
28: 0x76769484d2ec - rustc_hir_typeck[37bfb0076054673b]::typeck
29: 0x76769484cc93 - rustc_query_impl[bf1c9f3baad9eabf]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[bf1c9f3baad9eabf]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 8usize]>>
30: 0x767694d12001 - rustc_query_system[d5846f7a9cb2c23f]::query::plumbing::try_execute_query::<rustc_query_impl[bf1c9f3baad9eabf]::DynamicConfig<rustc_query_system[d5846f7a9cb2c23f]::query::caches::VecCache<rustc_span[9778d555244491c7]::def_id::LocalDefId, rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[bf1c9f3baad9eabf]::plumbing::QueryCtxt, false>
31: 0x767694d1048d - rustc_query_impl[bf1c9f3baad9eabf]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
32: 0x767694d10107 - <rustc_middle[c94631e3827a547d]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[19f2f9b7f430108f]::check_crate::{closure#4}>::{closure#0}
33: 0x767694d0e0d9 - rustc_hir_analysis[19f2f9b7f430108f]::check_crate
34: 0x767694b44fca - rustc_interface[c18496298df4cf52]::passes::run_required_analyses
35: 0x76769542961e - rustc_interface[c18496298df4cf52]::passes::analysis
36: 0x7676954295ef - rustc_query_impl[bf1c9f3baad9eabf]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[bf1c9f3baad9eabf]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 1usize]>>
37: 0x7676955dee6e - rustc_query_system[d5846f7a9cb2c23f]::query::plumbing::try_execute_query::<rustc_query_impl[bf1c9f3baad9eabf]::DynamicConfig<rustc_query_system[d5846f7a9cb2c23f]::query::caches::SingleCache<rustc_middle[c94631e3827a547d]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[bf1c9f3baad9eabf]::plumbing::QueryCtxt, false>
38: 0x7676955deb4e - rustc_query_impl[bf1c9f3baad9eabf]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
39: 0x7676954d963a - rustc_interface[c18496298df4cf52]::interface::run_compiler::<core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>, rustc_driver_impl[530468506af41d6d]::run_compiler::{closure#0}>::{closure#1}
40: 0x767695532e50 - std[575dabc3fc23637d]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[c18496298df4cf52]::util::run_in_thread_with_globals<rustc_interface[c18496298df4cf52]::util::run_in_thread_pool_with_globals<rustc_interface[c18496298df4cf52]::interface::run_compiler<core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>, rustc_driver_impl[530468506af41d6d]::run_compiler::{closure#0}>::{closure#1}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>
41: 0x76769553326b - <<std[575dabc3fc23637d]::thread::Builder>::spawn_unchecked_<rustc_interface[c18496298df4cf52]::util::run_in_thread_with_globals<rustc_interface[c18496298df4cf52]::util::run_in_thread_pool_with_globals<rustc_interface[c18496298df4cf52]::interface::run_compiler<core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>, rustc_driver_impl[530468506af41d6d]::run_compiler::{closure#0}>::{closure#1}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[678332cb0ee15b78]::result::Result<(), rustc_span[9778d555244491c7]::ErrorGuaranteed>>::{closure#1} as core[678332cb0ee15b78]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
42: 0x767695533d39 - std::sys::pal::unix::thread::Thread::new::thread_start::h1176f996a4a1b888
43: 0x767696d5f39d - <unknown>
44: 0x767696de449c - <unknown>
45: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (251dc8ad8 2024-11-15) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [typeck] type-checking `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 7 previous errors
Some errors have detailed explanations: E0046, E0658, E0747.
For more information about an error, try `rustc --explain E0046`.
```
</p>
</details>
<!--
query stack:
#0 [typeck] type-checking `main`
#1 [analysis] running analysis passes on this crate
-->
| I-ICE,T-compiler,C-bug,A-const-generics,E-needs-mcve,S-bug-has-test,S-has-bisection | low | Critical |
2,661,290,681 | angular | Signal boolean check without getter inside template causes silent failure | ### Which @angular/* package(s) are relevant/related to the feature request?
language-service
### Description
When using Angular signals in template conditions, failing to call the getter (missing parentheses) results in a silent failure with no type checking or linting errors.
```typescript
export class SomeComponent {
showHeadline = signal<boolean>(false);
headline = signal<string>('Hello World!');
}
```
```html
@if (showHeadline) { // missing (): the signal reference itself is always truthy
<h1>{{ headline() }} </h1>
}
```
This can lead to subtle bugs where conditional rendering always evaluates to true, regardless of the signal's actual value.
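To see why the missing parentheses go unnoticed at runtime, here is a minimal framework-free stand-in (plain JavaScript, not Angular's actual implementation): a signal is essentially a getter function, so the bare reference is always truthy.

```javascript
// Minimal stand-in for a signal: just a getter function over a value.
function signal(initial) {
  const value = initial;
  return () => value;
}

const showHeadline = signal(false);

// Forgetting the () checks the function object, which is always truthy:
const wrong = showHeadline ? "rendered" : "hidden";   // always "rendered"
// Calling the getter reads the actual boolean value:
const right = showHeadline() ? "rendered" : "hidden"; // respects the value

console.log(wrong, right); // rendered hidden
```

Notably, the TypeScript compiler flags this pattern in `.ts` files ("This condition will always return true since this function is always defined"), which is exactly the kind of diagnostic the template type checker could emit.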
### Proposed solution
The Angular Language Service should:
- Show a type error or warning when a signal is used without its getter in conditional statements
### Alternatives considered
- Suggest adding parentheses to access the signal's value | area: compiler,compiler: extended diagnostics,cross-cutting: signals | low | Critical |
2,661,291,513 | bitcoin | Discover() will not run if listening on any address with an explicit bind=0.0.0.0 | ### Current behaviour
https://github.com/bitcoin/bitcoin/blob/85bcfeea23568053ea09013fb8263fa1511d7123/src/init.cpp#L1890-L1892
`Discover()` will run only if we are listening on all addresses (`bind_on_any` is `true`). However, if `-bind=0.0.0.0:port` is explicitly given, then `bind_on_any` ends up being `false`, and thus `Discover()` will not run when it should.
### Expected behaviour
Discover own addresses even if `-bind=0.0.0.0:port` is given.
### Steps to reproduce
Use `-bind=0.0.0.0:port`.
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
master@85bcfeea23568053ea09013fb8263fa1511d7123
### Operating system and version
Windows 3.11
### Background
See https://github.com/bitcoin/bitcoin/issues/31133#issuecomment-2477231557 | P2P,good first issue | low | Major |
2,661,299,318 | PowerToys | Failed to load WinUI3Apps/PowerToys.<various>.dll | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Installer
### Steps to reproduce
I performed an upgrade from within PowerToys, from a recent version, which then failed with many PowerToys errors for multiple DLLs; one of these is captured below to illustrate.
### ✔️ Expected Behavior
The installer should seamlessly install and upgrade to a fully working PowerToys app.
### ❌ Actual Behavior
Installer ran and then failed with multiple errors.
### Other Software

| Issue-Bug,Needs-Triage | low | Critical |
2,661,304,717 | angular | Add `outputFromSignal` function or one-way model | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
Note: this was suggested before in #56923 but I want to argue the case again.
In the current Angular signal ecosystem, the lack of an `outputFromSignal` function (or something similar) feels very unnatural.
The argument against it is mostly that "signals are for state and outputs are for events".
However this distinction is not so strictly enforced in the rest of the ecosystem:
- We get two-way binding using `model`.
- One can convert a signal to an Observable with `toObservable` and create an output from an Observable with `outputFromObservable`. You could argue that this is needed because Observables can both be used for events and state, but that in itself is an argument against being strict here.
Another way to look at it is:
- We have one way binding into a child component using `input`.
- We have two way binding with a child component using `model`.
- We don't have a signal way of one way binding out of the child component.
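To make the desired semantics concrete, here is a toy, self-contained sketch (the `signal` and `outputFromSignal` below are simplified stand-ins, not Angular's APIs): an output derived from a signal is just an event stream that fires on each value change.

```javascript
// Toy writable signal: a getter function with a set() method and subscribers.
function signal(initial) {
  let value = initial;
  const subs = [];
  const s = () => value;
  s.set = (v) => { value = v; subs.forEach((f) => f(v)); };
  s.subs = subs;
  return s;
}

// Stand-in for the requested outputFromSignal: emits whenever the signal changes.
function outputFromSignal(sig) {
  return { subscribe: (fn) => sig.subs.push(fn) };
}

const count = signal(0);
const seen = [];
outputFromSignal(count).subscribe((v) => seen.push(v));
count.set(1);
count.set(2);
console.log(seen); // [ 1, 2 ]
```

Today the same effect is achievable by composing the existing interop helpers, `outputFromObservable(toObservable(sig))`, which is part of why a direct `outputFromSignal` feels like a natural addition.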
### Proposed solution
Implement `outputFromSignal` or something similar.
### Alternatives considered
Other patterns (effect, service etc). But for this simple case they are unnecessarily unwieldy and complicated. | area: core,core: reactivity,cross-cutting: signals | medium | Major |
2,661,309,244 | pytorch | Memory usage increase post compilation for torch.compile ViT-H-14-quickgelu_dfn5b | ### 🐛 Describe the bug
I'm running inference with torch.compile for the OpenCLIP model ViT-H-14-quickgelu_dfn5b, with half precision, on the "cuda" device, in "max-autotune" mode. My CPU memory usage has increased by 900 MB-1 GB compared to the same model without torch.compile. The increase persists during runtime after compilation and doesn't seem to decrease; see the plot below. Our hypothesis is that the increase is due to the compilation metadata and alternate graphs.
I have a resource-constrained setup; how do we solve this? What compilation settings can we use, or what steps can we add post-compile, to release the built-up CPU memory? Please refer to the code below and the logs attached.
Code:
```
self.model = torch.load(model_weight_path)
self.model.to(device)
if use_torch_compile and device == "cuda":
try:
self.model = torch.compile(self.model, mode="max-autotune")
except Exception as e:
logging.warning(
f"Failed to use torch compile: {e}, "
"falling back to no torch compile"
)
if fp16:
self.model.half()
self.model.eval()
```
I have already tried the following, with no luck:
1. `self.model = torch.compile(self.model, mode="reduce-overhead", fullgraph=True)`
2. `self.model = torch.compile(self.model, fullgraph=True)`
3. Post-compile cleanup using:
```
# Cleanup compilation artifacts
import gc
import torch._dynamo
torch._dynamo.reset()
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
```
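To quantify this kind of growth independently of the model, a small framework-free sketch (pure Python, no torch) that snapshots peak resident memory around a step can help; here a plain 50 MiB allocation stands in for the compile step.

```python
# Snapshot peak RSS before/after a step to measure resident-memory growth.
# Note: resource.ru_maxrss is kilobytes on Linux but bytes on macOS.
import resource
import sys

def rss_kib() -> int:
    v = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return v // 1024 if sys.platform == "darwin" else v

before = rss_kib()
blob = bytearray(50 * 1024 * 1024)  # stand-in for compile allocating ~50 MiB
after = rss_kib()
print(f"RSS grew by ~{(after - before) / 1024:.1f} MiB")
```

Wrapping the actual `torch.compile(...)` call (and the first compiled forward pass, which is when most allocation happens) with the same two snapshots gives a concrete number to report.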
<img width="1316" alt="image" src="https://github.com/user-attachments/assets/06c5ca4a-7cc7-44cc-b008-10cd8a1ee83d">
### Error logs
```
nvert: [INFO] Step 1: torchdynamo start tracing forward
[2024-11-15 09:01:05,575] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2024-11-15 09:01:05,596] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2024-11-15 09:01:09,743] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 0
[2024-11-15 09:01:09,810] torch._inductor.utils: [WARNING] not enough cuda cores to use max_autotune mode
[2024-11-15 09:01:09,815] torch._inductor.graph: [INFO] Creating implicit fallback for:
target: aten._scaled_dot_product_efficient_attention.default
args[0]: TensorBox(
View(
PermuteView(data=View(
View(
SliceView(
StorageBox(
Pointwise(
'cuda',
torch.float16,
tmp0 = load(buf13, i3 + 1280 * i0 + 3840 * i1)
return tmp0
,
ranges=[3, 257, 1, 1280],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
)
),
size=[1, 257, 1, 1280],
reindex=lambda i0, i1, i2, i3: [i0, i1, i2, i3],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, view_3, arg1_1, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, select, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
),
size=(257, 1, 1280),
reindex=lambda i0, i1, i2: [0, i0, 0, i2],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, view_3, arg1_1, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, select, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
),
size=(257, 16, 80),
reindex=lambda i0, i1, i2: [i0, 0, 80*i1 + i2],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, view_5, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, select, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
), dims=[1, 0, 2]),
size=(1, 16, 257, 80),
reindex=lambda i0, i1, i2, i3: [i1, i2, i3],
origins={permute_4, convert_element_type_1, addmm, mul_2, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, view_5, view_8, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, select, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
)
)
args[1]: TensorBox(
View(
PermuteView(data=View(
View(
SliceView(
StorageBox(
Pointwise(
'cuda',
torch.float16,
tmp0 = load(buf13, i3 + 1280 * i0 + 3840 * i1)
return tmp0
,
ranges=[3, 257, 1, 1280],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
)
),
size=[1, 257, 1, 1280],
reindex=lambda i0, i1, i2, i3: [i0 + 1, i1, i2, i3],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, view_3, arg1_1, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, select_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
),
size=(257, 1, 1280),
reindex=lambda i0, i1, i2: [0, i0, 0, i2],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, view_3, arg1_1, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, select_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
),
size=(257, 16, 80),
reindex=lambda i0, i1, i2: [i0, 0, 80*i1 + i2],
origins={convert_element_type_1, view_6, addmm, mul_2, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, select_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
), dims=[1, 0, 2]),
size=(1, 16, 257, 80),
reindex=lambda i0, i1, i2, i3: [i1, i2, i3],
origins={convert_element_type_1, view_6, addmm, mul_2, convert_element_type_6, permute_5, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, view_9, arg3_1, convert_element_type_3, arg2_1, select_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
)
)
args[2]: TensorBox(
View(
PermuteView(data=View(
View(
SliceView(
StorageBox(
Pointwise(
'cuda',
torch.float16,
tmp0 = load(buf13, i3 + 1280 * i0 + 3840 * i1)
return tmp0
,
ranges=[3, 257, 1, 1280],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, squeeze}
)
),
size=[1, 257, 1, 1280],
reindex=lambda i0, i1, i2, i3: [i0 + 2, i1, i2, i3],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, view_3, arg1_1, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, select_2, squeeze}
),
size=(257, 1, 1280),
reindex=lambda i0, i1, i2: [0, i0, 0, i2],
origins={convert_element_type_1, addmm, mul_2, convert_element_type_6, add, view_3, arg1_1, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, select_2, squeeze}
),
size=(257, 16, 80),
reindex=lambda i0, i1, i2: [i0, 0, 80*i1 + i2],
origins={convert_element_type_1, addmm, mul_2, view_7, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, select_2, squeeze}
), dims=[1, 0, 2]),
size=(1, 16, 257, 80),
reindex=lambda i0, i1, i2, i3: [i1, i2, i3],
origins={convert_element_type_1, view_10, addmm, mul_2, view_7, convert_element_type_6, add, arg1_1, view_3, view_2, convert_element_type_7, permute_3, rsqrt_1, arg5_1, add_3, convert_element_type_2, sub_1, var_mean, arg4_1, convert_element_type_5, arg3_1, convert_element_type_3, arg2_1, permute_1, permute_6, convert_element_type_4, var_mean_1, view_4, clone, mul_1, add_2, unsqueeze, arg136_1, convert_element_type_8, rsqrt, permute_2, add_4, arg137_1, mul_3, cat, sub, mul, add_1, select_2, squeeze}
)
)
args[3]: False
[2024-11-15 09:01:09,820] torch._inductor.graph: [INFO] Using FallbackKernel: torch.ops.aten._scaled_dot_product_efficient_attention.default
[2024-11-15 09:01:16,529] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2024-11-15 09:01:16,530] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
```
### Versions
```
python3 collect_env.py
Collecting environment information...
PyTorch version: 2.0.0a0+gitc263bd4
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.30.2
Libc version: glibc-2.31
Python version: 3.10.0 (default, Nov 1 2024, 22:46:53) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A2000 12GB
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700
Stepping: 1
CPU MHz: 1786.068
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 4224.00
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] mypy==1.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] onnx==1.16.0
[pip3] onnxruntime-gpu==1.17.1
[pip3] open-clip-torch==2.24.0
[pip3] torch==2.0.0a0+gitc263bd4
[pip3] torchvision==0.15.1a0+42759b1
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,661,316,038 | node | readFile will not read files larger than 2 GiB even if buffers can be larger | ### Version
v22.11.0
### Platform
```text
Darwin LAMS0127 23.6.0 Darwin Kernel Version 23.6.0: Thu Sep 12 23:36:23 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6031 arm64 arm Darwin
```
### Subsystem
_No response_
### What steps will reproduce the bug?
```javascript
const fs = require("fs/promises");
const FILE = "test.bin";
async function main() {
const buffer1 = Buffer.alloc(3 * 1024 * 1024 * 1024);
await fs.writeFile(FILE, buffer1);
const buffer2 = await fs.readFile(FILE);
// does not reach here
console.log(buffer2.length);
}
main();
```
### How often does it reproduce? Is there a required condition?
It is deterministic.
### What is the expected behavior? Why is that the expected behavior?
readFile should allow files as large as the maximum buffer size, per the documentation:
> ERR_FS_FILE_TOO_LARGE
An attempt has been made to read a file whose size is larger than the maximum allowed size for a Buffer.
https://nodejs.org/api/errors.html#err_fs_file_too_large
In newer Node.js versions, the maximum buffer size has increased, but the maximum file size for readFile is still capped at 2 GiB.
In older versions (v18), the maximum buffer size on 64-bit platforms was 4 GiB, but files that large could not be read either.
### What do you see instead?
`readFile` will throw the error
```
RangeError [ERR_FS_FILE_TOO_LARGE]: File size (3221225472) is greater than 2 GiB
```
### Additional information
_No response_ | fs,good first issue | low | Critical |
2,661,437,520 | go | x/sys/windows: signal.Notify doesn't handle windows.Signal well | ### Go version
go version go1.22.3 windows/amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\qian\AppData\Local\go-build
set GOENV=C:\Users\qian\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\qian\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\qian\go
set GOPRIVATE=
set GOROOT=D:\Environments\go
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=D:\Environments\go\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.22.3
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=0
set GOMOD=NUL
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\qian\AppData\Local\Temp\go-build3528663502=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
When developing a server on Windows, we want to capture the SIGINT signal for extension purposes.
https://go.dev/play/p/wiNlThXDUPT
### What did you see happen?
signal.Notify(c chan<- os.Signal, sig ...os.Signal) cannot correctly capture windows.Signal.
### What did you expect to see?
signal.Notify can correctly handle windows.Signal.
This means that when pressing Ctrl+C, the logic can be correctly triggered. | Thinking,OS-Windows,NeedsInvestigation,compiler/runtime | low | Minor |
2,661,443,401 | react | [DevTools Bug]: Components tab freezes after inspecting | ### Website or app
https://dev.permaplant.net
### Repro steps
1. Login
2. Go to Maps and create or open one
3. Look for the TimelinePicker component in the components tab
4. Click on it to inspect it
After that my components tab freezes and sometimes my RAM fills up endlessly.
(If you need authentication, just contact me)
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | low | Critical |
2,661,458,689 | go | proposal: os: add methods to safely convert between files and roots | ### Proposal Details
I have code which opens both files and directories as files to perform various actions, such as fetching and comparing generations (support I still need to upstream to Linux once I write tests) or reading other metadata with `SyscallConn`.
Trying to use #67002 to make it less conservative around some TOCTOU edge cases proved challenging, since `*os.Root` does not provide the `SyscallConn` needed to fetch the generation of directories; more architecturally, the current codepath opens everything as a `*os.File` and then uses [`statx`](https://man7.org/linux/man-pages/man2/statx.2.html) to decide what to do next.
---
```go
package os
// AsRoot opens the File as a Root if it is a directory; otherwise it errors.
// The Root is returned with a new lifetime, so each needs to be closed independently.
func (*File) AsRoot() (*Root, error)
// AsFile opens the Root as a File.
// The File is returned with a new lifetime, so each needs to be closed independently.
func (*Root) AsFile() (*File, error)
``` | Proposal | low | Critical |
2,661,460,519 | pytorch | inconsistency in `torch.special.xlog1py` on CPU and GPU | ### 🐛 Describe the bug
Inconsistent results of the function `torch.special.xlog1py` between CPU and GPU:
```python
import torch
self = torch.tensor([[1.9609]], dtype=torch.bfloat16)
other = torch.tensor([[41.0]], dtype=torch.bfloat16)
result_cpu = torch.special.xlog1py(self, other)
self_cuda = self.cuda()
other_cuda = other.cuda()
result_gpu = torch.special.xlog1py(self_cuda, other_cuda)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu)
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-02, rtol=1e-03)
print("Inconsistency with atol=1e-02 and rtol=1e-03:", inconsistent)
```
outputs:
```
CPU result:
tensor([[7.3125]], dtype=torch.bfloat16)
GPU result:
tensor([[7.3438]], device='cuda:0', dtype=torch.bfloat16)
Inconsistency with atol=1e-02 and rtol=1e-03: True
```
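For context (a hedged sketch, not part of the original report): the two results differ by exactly one bfloat16 ULP, which is consistent with the CPU and GPU rounding slightly different intermediates down to bfloat16 — the float64 reference value lies between the two reported results:

```python
import math

# Float64 reference for self = 1.9609, other = 41.0.
ref = 1.9609 * math.log1p(41.0)  # ~7.3292

cpu, gpu = 7.3125, 7.34375  # reported values (7.3438 is 7.34375 rounded for display)
# bfloat16 has an 8-bit significand, so the spacing (ULP) in [4, 8) is 2**-5.
ulp = 2.0 ** -5

print(ref)               # the exact value lies between the two results
print(gpu - cpu == ulp)  # True: they differ by exactly one bfloat16 ULP
```

So the `atol=1e-02` used above is tighter than one bfloat16 step in this value range, which is why the comparison flags an inconsistency.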
### Versions
(executed on google colab)
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.0+cu121
[pip3] torchaudio==2.5.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu121
[conda] Could not collect
cc @mruberry @kshitij12345 | triaged,module: special | low | Critical |
2,661,479,491 | godot | Drag & drop a node in visual shader produces an incorrect label and error | ### Tested versions
- Reproducible in: 4.4 (673f396677654220d7e1d5b6fb5ed3b50126b4e6), 4.3
### System information
Windows 11
### Issue description

### Steps to reproduce
- Create a visual shader and try to drag & drop a node from Add Node dialog
### Minimal reproduction project (MRP)
Too easy to reproduce | bug,topic:editor | low | Critical |
2,661,522,151 | angular | HttpInterceptor can trigger effect recomputation | ### Which @angular/* package(s) are the source of the bug?
common
### Is this a regression?
No
### Description
I have a service with a signal that is used inside an interceptor. If an `effect` makes an HTTP call via `HttpClient`, the effect will pick up signal reads that happen inside the interceptor function.
Here is a minimal repro: https://stackblitz.com/edit/stackblitz-starters-o7huam?file=src%2Fmain.ts
My effect has no signal reads of its own, but if I print the effect's producerNode at the end of the effect function, we can see that the effect is tracking something: the signal that is read inside the interceptor. As a result, the HTTP call happens again each time that signal changes.
IMHO, this is undocumented behavior of HttpClient + interceptor + effect that only appears in this combination. A workaround is to wrap the signal read in `untracked`.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-o7huam?file=src%2Fmain.ts
### Please provide the exception or error you saw
```text
When an effect makes an HTTP call via HttpClient, the effect starts tracking signals that are read inside interceptors.
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
Angular CLI: 18.2.7
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 18.2.7
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.7
@angular-devkit/build-angular 18.2.7
@angular-devkit/core 18.2.7
@angular-devkit/schematics 18.2.7
@schematics/angular 18.2.7
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
```
### Anything else?
_No response_ | area: core,area: common/http,bug,core: reactivity,cross-cutting: signals | low | Critical |
2,661,526,897 | ui | [bug]: No Spacing in between Toasts | ### Describe the bug
https://github.com/user-attachments/assets/750fb4e5-5374-4934-a351-7f8d81b01803
No spacing when multiple Toasts are shown at once.
### Affected component/components
Toast
### How to reproduce
1. Just followed the instructions here - https://ui.shadcn.com/docs/components/toast
2. Increased the `TOAST_LIMIT` to 3
3. Saw the issue as per the video
### Codesandbox/StackBlitz link
https://github.com/user-attachments/assets/750fb4e5-5374-4934-a351-7f8d81b01803
### Logs
_No response_
### System Info
```bash
Brave Browser, macOS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,661,533,285 | material-ui | [Autocomplete][material-ui] Missing aria-multiselectable attribute when multiple prop is set | ### Steps to reproduce
https://stackblitz.com/edit/vitejs-vite-kcvsmw?file=src%2FApp.tsx&view=editor
### Current behavior
The `Autocomplete` component has `multiple` set to `true`.
React Testing Library [deselectOptions](https://testing-library.com/docs/user-event/utility#-selectoptions-deselectoptions) fails.
The documentation says:
> Selecting multiple options and/or deselecting options of HTMLSelectElement is only possible if [multiple](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select#attr-multiple) is specified.
I expected it to work.
### Expected behavior
We can use `deselectOptions` on an `Autocomplete` when `multiple` is set to `true`.
### Context
Please see: https://stackblitz.com/edit/vitejs-vite-kcvsmw?file=src%2FApp.tsx&view=editor
This issue is similar to https://github.com/mui/material-ui/issues/38631
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Linux 5.0 undefined
Binaries:
Node: 18.20.3 - /usr/local/bin/node
npm: 10.2.3 - /usr/local/bin/npm
pnpm: 8.15.6 - /usr/local/bin/pnpm
Browsers:
Chrome: Not Found
npmPackages:
@emotion/react: 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/core-downloads-tracker: 6.1.7
@mui/material: ^6.1.7 => 6.1.7
@mui/private-theming: 6.1.7
@mui/styled-engine: 6.1.7
@mui/system: 6.1.7
@mui/types: 7.2.19
@mui/utils: 6.1.7
@types/react: ^18.3.12 => 18.3.12
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ~5.6.2 => 5.6.3
```
</details>
**Search keywords**: autocomplete multiple aria-multiselectable | accessibility,component: autocomplete,ready to take | low | Minor |
2,661,560,829 | transformers | FileNotFoundError when using SentenceTransformerTrainingArguments(load_best_model_at_end=True) and Peft | ### System Info
I used google colab default environment, with last version of transformers and sentence-transformers
### Who can help?
@muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the example code as a gist: [gist](https://gist.github.com/GTimothee/cb3551ba6eb14b04f7a06d63ea4616f9)
Just open the gist in a colab notebook and run it
### Expected behavior
This is a follow-up to another bug found in sentence-transformers. The sentence-transformers library just integrated peft using transformers.integrations. The bug is that when using `SentenceTransformerTrainingArguments(load_best_model_at_end=True)`, a FileNotFoundError is raised because we try to load a classical checkpoint file (`.pth`) although an adapter was saved instead. Looking into the `load_best_model` function, it just uses the function from `transformers.trainer.Trainer`, so the transformers library needs to be modified to solve the problem.
The issue is that a function in transformers checks whether the model is a PeftMixedModel; if not, it is not considered a peft model and the trainer tries to load the model as usual. The problem is that our model is a PeftAdapterMixin, so it is not recognized as a peft model.
See also: https://github.com/UKPLab/sentence-transformers/issues/3056
In my opinion, the check needs two steps: 1) is the model a PeftAdapterMixin, and 2) does it have adapters loaded? This may be only part of the solution, though; a special loading snippet may also be needed directly in transformers.trainer.Trainer._load_best_model.
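As a minimal sketch of that two-step check (the function name and the `_hf_peft_config_loaded` attribute are illustrative guesses at the internals, not the actual transformers code):

```python
def looks_like_peft_model(model) -> bool:
    """Hedged sketch: treat a model as a peft model if it is a full peft
    wrapper, or if it is a PeftAdapterMixin with adapters actually loaded."""
    try:
        from peft import PeftMixedModel, PeftModel
        from transformers.integrations.peft import PeftAdapterMixin
    except ImportError:
        # peft/transformers not installed: nothing can be a peft model.
        return False

    # Existing-style check for full peft model wrappers.
    if isinstance(model, (PeftModel, PeftMixedModel)):
        return True

    # Step 1 and 2: a PeftAdapterMixin counts as a peft model,
    # but only when adapters were actually loaded onto it.
    return isinstance(model, PeftAdapterMixin) and getattr(
        model, "_hf_peft_config_loaded", False
    )
```

A check along these lines would let the trainer take the adapter-loading path for SentenceTransformer models instead of looking for a `.pth` checkpoint.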
2,661,607,307 | TypeScript | TS2823 import attributes error when using node16 module | ### 🔎 Search Terms
TS2823 import attributes node16 module
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about 5.6.3 and dev
### ⏯ Playground Link
https://www.typescriptlang.org/play/?target=99&moduleResolution=99&module=100&ts=5.8.0-dev.20241115#code/JYWwDg9gTgLgBGAhgYwNaIOYFMBSBnCAOzgDMoIQ4ByAOgHok1MsaArAwquAd2BgAs4AbzgwAnmCwAuauyJcAvkA
### 💻 Code
```ts
import packageJson from './package.json' with { type: 'json' }
```
### 🙁 Actual behavior
We get the following error:
```
TS2823: Import attributes are only supported when the --module option is set to esnext, nodenext, or preserve
```
### 🙂 Expected behavior
The [docs](https://www.typescriptlang.org/docs/handbook/modules/theory.html#the-module-output-format) state the following:
> [nodenext](https://www.typescriptlang.org/docs/handbook/modules/reference.html#node16-nodenext): Currently identical to node16, but will be a moving target reflecting the latest Node.js versions as Node.js’s module system evolves.
Either the syntax should be supported on `node16` or the documentation needs to be updated.
### Additional information about the issue
_No response_ | Docs | low | Critical |
2,661,610,420 | rust | Tracking issue for collecting `config.toml`s and analysis of config/profile usage patterns and pain points | Contributors often run into friction when using `config.toml` and profiles, and we would like to better understand the different workflows of different contributors and how contributors utilize `config.toml` and profiles to inform future changes and improvements.
### Steps
- [ ] Identify what we want to know about (e.g. pain points, usage patterns) regarding `config.toml`, profiles and their usage patterns and contributor workflows.
- [ ] Figure out a mechanism to collect such info: survey? metrics initiative?
- [ ] Inform contributors that we would like to collect such info.
- [ ] Analysis of the collected information.
- [ ] Identify possible future steps and improvements/changes. | T-bootstrap,C-tracking-issue,E-needs-investigation,A-bootstrap-config | low | Minor |
2,661,641,701 | flutter | Unable to Serve NOTICES File in Flutter Web App Deployed on IIS | I am encountering an issue when deploying a Flutter web application on IIS. The application fails to load the NOTICES file and shows the following error in the browser console:
Uncaught Error: Unable to load asset: "NOTICES".
The NOTICES file is present in the deployment folder, but it doesn't have any file extension. Since the file lacks an extension, IIS does not serve it correctly.
**Environment**
Flutter Version: 3.24v
Dart Version: 3.5.3
Operating System: Windows Server
IIS Version: 10.0.22621.1
Build Command: flutter clean & flutter build web

**What I’ve Tried**
1. Adding MIME types for files without extensions in IIS (which IIS does not allow)
2. Creating a `web.config` that includes:
```
<staticContent>
<mimeMap fileExtension="" mimeType="text/plain" />
</staticContent>
```
3. Renaming the NOTICES file to include an extension (e.g., NOTICES.txt), which failed because main.dart.js still references the original file name.
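As an untested sketch of a possible workaround (assuming the IIS URL Rewrite module is installed, and that a `NOTICES.txt` copy of the file is deployed next to the original), the extensionless URL could be rewritten server-side so `main.dart.js` can keep requesting `NOTICES`:

```xml
<!-- Hypothetical web.config fragment; rule name and file copy are assumptions. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ServeNotices">
        <match url="^NOTICES$" />
        <action type="Rewrite" url="NOTICES.txt" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```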
**Request**
Please advise on how to handle extensionless files like NOTICES in a Flutter web app deployed on IIS. If this is a limitation of the current Flutter build process, it would be helpful to:
Provide a way to customize the file name or extension of the NOTICES file.
Document any IIS-specific deployment considerations for Flutter web apps.
Thank you for your help!
| a: assets,platform-web,a: release,P2,team-web,triaged-web | low | Critical |
2,661,680,520 | langchain | AzureMLChatOnlineEndpoint not compatible with create_react_agent (NotImplementedError for bind_tool method) | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint, AzureMLEndpointApiType, CustomOpenAIChatContentFormatter
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain import hub
from langgraph.prebuilt import create_react_agent
if __name__ == "__main__":
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
url = os.environ.get("AZUREML_ENDPOINT_URL")
key = os.environ.get("AZUREML_ENDPOINT_KEY")
timeout = 60 * 5 # default = 60 * 5 = 5 minutes
llm = AzureMLChatOnlineEndpoint(
endpoint_url=url,
endpoint_api_type=AzureMLEndpointApiType.serverless,
endpoint_api_key=key,
content_formatter=CustomOpenAIChatContentFormatter(),
timeout=timeout # default = 60 * 5 = 5 minutes
)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
prompt_template = hub.pull("langchain-ai/sql-agent-system-prompt")
system_message = prompt_template.format(dialect="SQLite", top_k=5)
agent_executor = create_react_agent(llm, toolkit.get_tools(), state_modifier=system_message)
example_query = "Which country's customers spent the most?"
events = agent_executor.stream(
{"messages": [("user", example_query)]},
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
```
### Error Message and Stack Trace (if applicable)
```
"C:\Users\...\git\Text2SQL\Text2SQL - ENV\Scripts\python.exe" C:\Users\...\git\Text2SQL\text2sql\main_agent_llama.py
C:\Users\...\git\Text2SQL\Text2SQL - ENV\Lib\site-packages\langsmith\client.py:221: LangSmithMissingAPIKeyWarning: API key must be provided when using hosted LangSmith API
warnings.warn(
Traceback (most recent call last):
File "C:\Users\...\git\Text2SQL\text2sql\main_agent_llama.py", line 571, in <module>
agent_executor = create_react_agent(llm, toolkit.get_tools(), state_modifier=system_message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\Text2SQL\Text2SQL - ENV\Lib\site-packages\langgraph\_api\deprecation.py", line 80, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\Text2SQL\Text2SQL - ENV\Lib\site-packages\langgraph\prebuilt\chat_agent_executor.py", line 512, in create_react_agent
model = cast(BaseChatModel, model).bind_tools(tool_classes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\Text2SQL\Text2SQL - ENV\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1115, in bind_tools
raise NotImplementedError
NotImplementedError
```
### Description
I'm trying to implement a future-ready **SQL Agent** with an open-weights model (Meta Llama) hosted on **Azure AI infrastructure**.
My code is pretty simple; it's just one of the various how-to guides in which I switched from the original OpenAI model to an **AzureMLChatOnlineEndpoint**.
But the execution stops pretty soon, since the flow relies on the BaseModel bind_tools implementation, which just throws a "**NotImplementedError**".
**Is there any chance to make AzureMLChatOnlineEndpoint compatible with this method?**
Consider that the direction in which LangChain is moving is LangGraph, so it makes no sense now to create an agent using create_sql_agent from langchain_community.agent_toolkits
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.6
> langchain_community: 0.3.4
> langsmith: 0.1.138
> langchain_groq: 0.2.1
> langchain_openai: 0.2.4
> langchain_text_splitters: 0.3.1
> langgraph: 0.2.41
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> groq: 0.11.0
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.2
> langgraph-sdk: 0.1.35
> numpy: 1.26.4
> openai: 1.53.0
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
| Ɑ: core | low | Critical |
2,661,752,117 | next.js | Loading with parallel routes only works on root | ### Link to the code that reproduces this issue
https://github.com/bananashell/nextjs-parallel-route-loading-issue
### To Reproduce
1. next dev
2. hard reload localhost:3000 (loading of header is done as expected)
3. hard reload localhost:3000/a
4. loading of header is skipped and entier page results in being sync
### Current vs. Expected behavior
If a loading.tsx is added to the root of a parallel route, its suspense boundary is only used at the root level.
Given this tree
```
/ (root)
@header
page.tsx
default.tsx
loading.tsx
/a
page.tsx
loading.tsx
/b
page.tsx
loading.tsx
```
- navigating to `/` will use the loading.tsx if `@header/page.tsx` is async
- navigating to `/a` directly would not use the loading.tsx for `@header`, preventing streaming from happening
The only solution I've found for this is to manually add a Suspense boundary to `default.tsx`
```tsx
// @header/default.tsx
import HeaderSlot from "./page";
import { Suspense } from "react";
import Loading from "./loading";
export default function DefaultSlot() {
return (
<Suspense fallback={<Loading />}>
<HeaderSlot />
</Suspense>
);
}
```
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: unknown
Available memory (MB): 36864
Available CPU cores: 14
Binaries:
Node: 22.6.0
npm: N/A
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | bug,Parallel & Intercepting Routes | low | Minor |
2,661,784,543 | storybook | [Bug]: HMR events lead to new WS connections for Storybook's Channel | ### Describe the bug
A file change usually triggers a reload event in Storybook's manager. As soon as this happens, Storybook's manager establishes a new WebSocket connection to the backend.
### Reproduction steps
1. Start Storybook
2. Make a change in a story file or component file
3. Take a look at the network tab. Filter for WS. You can see that a new WS connection has been established
### Additional context
_No response_ | bug | low | Critical |
2,661,816,388 | rust | Mismatched new/delete alignment alloc value: 8 dealloc value: default-aligned | When compiling https://github.com/GuillaumeGomez/mdBook/tree/bug-rustc, I got a segfault. When running under valgrind I get:
```
==563011== Thread 3 rustc:
==563011== Mismatched new/delete alignment alloc value: 8 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x1155C5F6: llvm::PassRegistry::registerPass(llvm::PassInfo const&, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x119FB7D2: initializeReachingDefAnalysisPassOnce(llvm::PassRegistry&) [clone .llvm.14788713002572353718] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11A8CBA8: initializeX86ExecutionDomainFixPassOnce(llvm::PassRegistry&) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11B38BC6: LLVMInitializeX86Target (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xA04714E: rustc_llvm::initialize_available_targets (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0xA046A74: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::init (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BB45: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== Address 0x12eabac0 is 0 bytes inside a block of size 1,024 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x1155F814: llvm::DenseMap<void const*, llvm::PassInfo const*, llvm::DenseMapInfo<void const*, void>, llvm::detail::DenseMapPair<void const*, llvm::PassInfo const*> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x1155C5F6: llvm::PassRegistry::registerPass(llvm::PassInfo const&, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11A28606: _GLOBAL__sub_I_Debugify.cpp (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x4004556: call_init (dl-init.c:74)
==563011== by 0x4004556: call_init (dl-init.c:26)
==563011== by 0x400464C: _dl_init (dl-init.c:121)
==563011== by 0x401CEDF: ??? (in /usr/lib64/ld-linux-x86-64.so.2)
==563011== by 0x59: ???
==563011== by 0x1FFEFFF262: ???
==563011== by 0x1FFEFFF2AD: ???
==563011== by 0x1FFEFFF2BA: ???
==563011== by 0x1FFEFFF2C1: ???
==563011==
==563011== Mismatched new/delete alignment alloc value: 8 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x1155C5F6: llvm::PassRegistry::registerPass(llvm::PassInfo const&, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x119FA87B: initializeAArch64BranchTargetsPassOnce(llvm::PassRegistry&) [clone .llvm.4105488490841716021] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11B36414: LLVMInitializeAArch64Target (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xA04718A: rustc_llvm::initialize_available_targets (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0xA046A74: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::init (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BB45: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== Address 0x13105a70 is 0 bytes inside a block of size 2,048 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x1155F814: llvm::DenseMap<void const*, llvm::PassInfo const*, llvm::DenseMapInfo<void const*, void>, llvm::detail::DenseMapPair<void const*, llvm::PassInfo const*> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x1155C5F6: llvm::PassRegistry::registerPass(llvm::PassInfo const&, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x119FB7D2: initializeReachingDefAnalysisPassOnce(llvm::PassRegistry&) [clone .llvm.14788713002572353718] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11A8CBA8: initializeX86ExecutionDomainFixPassOnce(llvm::PassRegistry&) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11B38BC6: LLVMInitializeX86Target (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xA04714E: rustc_llvm::initialize_available_targets (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0xA046A74: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::init (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched new/delete alignment alloc value: 8 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x1155C5F6: llvm::PassRegistry::registerPass(llvm::PassInfo const&, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11A903A4: initializeRISCVCodeGenPreparePassOnce(llvm::PassRegistry&) [clone .llvm.3021935745327874363] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11B382DE: LLVMInitializeRISCVTarget (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xA047298: rustc_llvm::initialize_available_targets (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0xA046A74: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::init (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BB45: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== Address 0x13109c30 is 0 bytes inside a block of size 4,096 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x1155F814: llvm::DenseMap<void const*, llvm::PassInfo const*, llvm::DenseMapInfo<void const*, void>, llvm::detail::DenseMapPair<void const*, llvm::PassInfo const*> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x1155C5F6: llvm::PassRegistry::registerPass(llvm::PassInfo const&, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x119FA87B: initializeAArch64BranchTargetsPassOnce(llvm::PassRegistry&) [clone .llvm.4105488490841716021] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xB3F7B0A: __pthread_once_slow.isra.0 (in /usr/lib64/libc.so.6)
==563011== by 0xB3F7B78: pthread_once@@GLIBC_2.34 (in /usr/lib64/libc.so.6)
==563011== by 0x11B36414: LLVMInitializeAArch64Target (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0xA04718A: rustc_llvm::initialize_available_targets (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0xA046A74: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::init (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BB45: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched free() / delete / delete []
==563011== at 0x4844B83: free (vg_replace_malloc.c:989)
==563011== by 0x9F7CFB5: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== by 0xB3F2796: start_thread (in /usr/lib64/libc.so.6)
==563011== Address 0x1313b060 is 0 bytes inside a block of size 31 alloc'd
==563011== at 0x4841FEC: operator new(unsigned long) (vg_replace_malloc.c:487)
==563011== by 0x114338E4: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::reserve(unsigned long) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x1143382C: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > llvm::detail::join_impl<llvm::StringRef*>(llvm::StringRef*, llvm::StringRef*, llvm::StringRef, std::forward_iterator_tag) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x1143372E: llvm::Triple::normalize[abi:cxx11](llvm::StringRef) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7CF7A: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched new/delete alignment alloc value: 4 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x11986EF1: llvm::X86_MC::initLLVMToSEHAndCVRegMapping(llvm::MCRegisterInfo*) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EB5: createX86MCRegisterInfo(llvm::Triple const&) [clone .llvm.4329284753645362384] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432658: llvm::LLVMTargetMachine::initAsmInfo() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11987DD3: llvm::RegisterTargetMachine<llvm::X86TargetMachine>::Allocator(llvm::Target const&, llvm::Triple const&, llvm::StringRef, llvm::StringRef, llvm::TargetOptions const&, std::optional<llvm::Reloc::Model>, std::optional<llvm::CodeModel::Model>, llvm::CodeGenOptLevel, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D5CD: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== Address 0x1313e2b0 is 0 bytes inside a block of size 512 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x11987193: llvm::DenseMap<llvm::MCRegister, int, llvm::DenseMapInfo<llvm::MCRegister, void>, llvm::detail::DenseMapPair<llvm::MCRegister, int> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EF1: llvm::X86_MC::initLLVMToSEHAndCVRegMapping(llvm::MCRegisterInfo*) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EB5: createX86MCRegisterInfo(llvm::Triple const&) [clone .llvm.4329284753645362384] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432658: llvm::LLVMTargetMachine::initAsmInfo() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11987DD3: llvm::RegisterTargetMachine<llvm::X86TargetMachine>::Allocator(llvm::Target const&, llvm::Triple const&, llvm::StringRef, llvm::StringRef, llvm::TargetOptions const&, std::optional<llvm::Reloc::Model>, std::optional<llvm::CodeModel::Model>, llvm::CodeGenOptLevel, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D5CD: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched new/delete alignment alloc value: 4 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x11986FEE: llvm::X86_MC::initLLVMToSEHAndCVRegMapping(llvm::MCRegisterInfo*) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EB5: createX86MCRegisterInfo(llvm::Triple const&) [clone .llvm.4329284753645362384] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432658: llvm::LLVMTargetMachine::initAsmInfo() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11987DD3: llvm::RegisterTargetMachine<llvm::X86TargetMachine>::Allocator(llvm::Target const&, llvm::Triple const&, llvm::StringRef, llvm::StringRef, llvm::TargetOptions const&, std::optional<llvm::Reloc::Model>, std::optional<llvm::CodeModel::Model>, llvm::CodeGenOptLevel, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D5CD: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== Address 0x131421f0 is 0 bytes inside a block of size 512 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x11987193: llvm::DenseMap<llvm::MCRegister, int, llvm::DenseMapInfo<llvm::MCRegister, void>, llvm::detail::DenseMapPair<llvm::MCRegister, int> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986FEE: llvm::X86_MC::initLLVMToSEHAndCVRegMapping(llvm::MCRegisterInfo*) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EB5: createX86MCRegisterInfo(llvm::Triple const&) [clone .llvm.4329284753645362384] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432658: llvm::LLVMTargetMachine::initAsmInfo() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11987DD3: llvm::RegisterTargetMachine<llvm::X86TargetMachine>::Allocator(llvm::Target const&, llvm::Triple const&, llvm::StringRef, llvm::StringRef, llvm::TargetOptions const&, std::optional<llvm::Reloc::Model>, std::optional<llvm::CodeModel::Model>, llvm::CodeGenOptLevel, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D5CD: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched free() / delete / delete []
==563011== at 0x4844B83: free (vg_replace_malloc.c:989)
==563011== by 0x9F7D5E8: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== by 0xB3F2796: start_thread (in /usr/lib64/libc.so.6)
==563011== Address 0x1313d350 is 0 bytes inside a block of size 25 alloc'd
==563011== at 0x4841FEC: operator new(unsigned long) (vg_replace_malloc.c:487)
==563011== by 0x112A999D: llvm::Twine::str[abi:cxx11]() const (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432C63: llvm::Triple::Triple(llvm::Twine const&) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D595: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched free() / delete / delete []
==563011== at 0x4844B83: free (vg_replace_malloc.c:989)
==563011== by 0x9F7D614: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== by 0xB3F2796: start_thread (in /usr/lib64/libc.so.6)
==563011== Address 0x1313b0c0 is 0 bytes inside a block of size 25 alloc'd
==563011== at 0x4841FEC: operator new(unsigned long) (vg_replace_malloc.c:487)
==563011== by 0x112A9A39: llvm::Twine::str[abi:cxx11]() const (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432C63: llvm::Triple::Triple(llvm::Twine const&) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7CFA1: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched new/delete alignment alloc value: 4 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x11980C89: llvm::TargetMachine::~TargetMachine() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11980DC8: llvm::X86TargetMachine::~X86TargetMachine() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D854: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== by 0xB3F2796: start_thread (in /usr/lib64/libc.so.6)
==563011== by 0xB476593: clone (in /usr/lib64/libc.so.6)
==563011== Address 0x131430b0 is 0 bytes inside a block of size 4,096 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x11987193: llvm::DenseMap<llvm::MCRegister, int, llvm::DenseMapInfo<llvm::MCRegister, void>, llvm::detail::DenseMapPair<llvm::MCRegister, int> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986FEE: llvm::X86_MC::initLLVMToSEHAndCVRegMapping(llvm::MCRegisterInfo*) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EB5: createX86MCRegisterInfo(llvm::Triple const&) [clone .llvm.4329284753645362384] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432658: llvm::LLVMTargetMachine::initAsmInfo() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11987DD3: llvm::RegisterTargetMachine<llvm::X86TargetMachine>::Allocator(llvm::Target const&, llvm::Triple const&, llvm::StringRef, llvm::StringRef, llvm::TargetOptions const&, std::optional<llvm::Reloc::Model>, std::optional<llvm::CodeModel::Model>, llvm::CodeGenOptLevel, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D5CD: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011== Mismatched new/delete alignment alloc value: 4 dealloc value: default-aligned
==563011== at 0x484565F: operator delete(void*) (vg_replace_malloc.c:1131)
==563011== by 0x11980C9A: llvm::TargetMachine::~TargetMachine() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11980DC8: llvm::X86TargetMachine::~X86TargetMachine() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D854: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F2BBC4: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E7395A: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>> (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9E73729: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x49235FA: call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (boxed.rs:2070)
==563011== by 0x49235FA: std::sys::pal::unix::thread::Thread::new::thread_start (thread.rs:108)
==563011== by 0xB3F2796: start_thread (in /usr/lib64/libc.so.6)
==563011== by 0xB476593: clone (in /usr/lib64/libc.so.6)
==563011== Address 0x131401b0 is 0 bytes inside a block of size 8,192 alloc'd
==563011== at 0x4842722: operator new(unsigned long, std::align_val_t) (vg_replace_malloc.c:547)
==563011== by 0x11987193: llvm::DenseMap<llvm::MCRegister, int, llvm::DenseMapInfo<llvm::MCRegister, void>, llvm::detail::DenseMapPair<llvm::MCRegister, int> >::grow(unsigned int) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EF1: llvm::X86_MC::initLLVMToSEHAndCVRegMapping(llvm::MCRegisterInfo*) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11986EB5: createX86MCRegisterInfo(llvm::Triple const&) [clone .llvm.4329284753645362384] (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11432658: llvm::LLVMTargetMachine::initAsmInfo() (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x11987DD3: llvm::RegisterTargetMachine<llvm::X86TargetMachine>::Allocator(llvm::Target const&, llvm::Triple const&, llvm::StringRef, llvm::StringRef, llvm::TargetOptions const&, std::optional<llvm::Reloc::Model>, std::optional<llvm::CodeModel::Model>, llvm::CodeGenOptLevel, bool) (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libLLVM.so.18.1-rust-1.81.0-stable)
==563011== by 0x9F7D5CD: LLVMRustCreateTargetMachine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7CBD6: rustc_codegen_llvm::back::write::target_machine_factory::{closure#0} (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7B578: rustc_codegen_llvm::back::write::create_informational_target_machine (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D764: rustc_codegen_llvm::llvm_util::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F7D71E: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::target_features (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011== by 0x9F52D6F: rustc_interface::util::add_configuration (in /home/imperio/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-3f4ebb066deec3c0.so)
==563011==
==563011==
==563011== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==563011== Bad permissions for mapped region at address 0x4990F68
==563011== at 0x4007DE8: _dl_map_object (dl-load.c:1903)
==563011==
==563011== HEAP SUMMARY:
==563011== in use at exit: 23,891,557 bytes in 104,588 blocks
==563011== total heap usage: 486,459 allocs, 381,871 frees, 72,591,238 bytes allocated
==563011==
==563011== LEAK SUMMARY:
==563011== definitely lost: 67,584 bytes in 1 blocks
==563011== indirectly lost: 0 bytes in 0 blocks
==563011== possibly lost: 2,673,093 bytes in 4,808 blocks
==563011== still reachable: 21,150,384 bytes in 99,777 blocks
==563011== suppressed: 496 bytes in 2 blocks
==563011== Rerun with --leak-check=full to see details of leaked memory
==563011==
==563011== For lists of detected and suppressed errors, rerun with: -s
==563011== ERROR SUMMARY: 27 errors from 10 contexts (suppressed: 0 from 0)
Segmentation fault (core dumped)
```
I tried with `rustc 1.81.0 (eeb90cda1 2024-09-04)` and with `rustc 1.83.0-nightly (3ae715c8c 2024-10-07)`. The bug occurs in both cases. | I-crash,T-compiler,C-bug | low | Critical |
2,661,860,504 | react-native | TouchableOpacity isn't working on IOS device | ### Description
After updating to the latest version, SDK52, my app has stopped working on iOS devices, although it was previously working fine on both iOS and Android. Specifically, the TouchableOpacity button and card components are not functioning correctly on iOS.
### Steps to reproduce
.
### React Native Version
0.76.1
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
.
```
### Stacktrace or Logs
```text
.
```
### Reproducer
https://github.com/halilyildiz384589
### Screenshots and Videos
. | Platform: iOS,Component: TouchableOpacity,Needs: Repro,Newer Patch Available,Needs: Attention | medium | Major |
2,661,872,475 | react-native | ScrollView scrollbar has spacing when parent has paddings | ### Description
When adding padding to the parent of a `ScrollView` and/or adding an element before it (e.g. a custom header), the scroll bar gets extra spacing at both the top and the bottom. This only seems to happen when using just one of the top or bottom safe-area paddings.
This was initially discovered when I added custom padding for SafeAreaView, so it can also be reproduced by wrapping the `ScrollView` with `<SafeAreaView edges={['top', 'left', 'right']}>` or `<SafeAreaView edges={['bottom', 'left', 'right']}>` from [react-native-safe-area-context](https://www.npmjs.com/package/react-native-safe-area-context).
### Steps to reproduce
1. Create `View` with top spacing (either `margin` or `padding`)
2. Create `ScrollView` inside
3. Make `ScrollView` scrollable (e.g. by adding children)
### React Native Version
0.76.1
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0.1
CPU: (8) x64 Apple M1 Pro
Memory: 48.64 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 14.21.3
path: ~/.nvm/versions/node/v14.21.3/bin/node
Yarn:
version: 3.6.4
path: /opt/homebrew/bin/yarn
npm:
version: 6.14.18
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.11.04.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: Not Found
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 21.0.1
path: /opt/homebrew/opt/openjdk/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: 15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
-
```
### Reproducer
https://snack.expo.dev/@denissdubinin/scrollview-with-parent-spacing?platform=ios
### Screenshots and Videos
<img width="372" alt="Screenshot 2024-11-15 at 2 44 27 PM" src="https://github.com/user-attachments/assets/842d308b-123d-4dcc-8d96-ddd72db55444">
### Component
```
import { Text, View, ScrollView } from 'react-native';

const ITERATIONS = 50;

export default function App() {
  return (
    <View style={{ marginTop: 59, backgroundColor: 'blue' }}>
      <View style={{ height: 100, backgroundColor: 'green' }}>
        <Text>This is header</Text>
      </View>
      <ScrollView>
        <View style={{ backgroundColor: 'red' }}>
          {/* map over the index; the spread array's elements are all undefined */}
          {[...Array(ITERATIONS)].map((_, i) => (
            <Text key={i}>Iteration {i}</Text>
          ))}
        </View>
      </ScrollView>
    </View>
  );
}
```
| Component: ScrollView,Needs: Triage :mag:,Newer Patch Available | low | Major |
2,661,882,688 | pytorch | Cannot use mask and slice assignment together | ### 🐛 Describe the bug
I don't know whether this is a bug or just an unpleasant limitation of the underlying indexing rules. Here's a minimal example of the bug:
```python
x = torch.zeros(2, 3, 4, 6)
mask = torch.tensor([[ True, True, False], [True, False, True]])
y = torch.rand(2, 3, 4, 3)
x[mask, :, :3] = y[mask]
```
In this example I hope that:
- In dimensions 0 and 1, only the 4×6 / 4×3 slices of `x`/`y` whose corresponding element in `mask` is `True` take part in the assignment.
- In dimension 3, only the first 3 elements of `x` are assigned, from the corresponding 3-element slices of `y`.

However, I got the error below:
```
RuntimeError: shape mismatch: value tensor of shape [4, 4, 3] cannot be broadcast to indexing result of shape [4, 3, 6]
```
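As a possible workaround for the intended selective write, the assignment can be split into two steps: basic slicing first (which yields a writable view), then the boolean mask. The sketch below uses NumPy to show the pattern; assuming PyTorch's view semantics for basic slicing, the same `x[..., :3][mask] = y[mask]` form should carry over to tensors (an assumption, not verified against the reported version):

```python
import numpy as np

x = np.zeros((2, 3, 4, 6))
mask = np.array([[True, True, False],
                 [True, False, True]])
y = np.random.rand(2, 3, 4, 3)

# Basic slicing returns a view, so the masked assignment writes through to x.
x[..., :3][mask] = y[mask]

assert np.array_equal(x[0, 0, :, :3], y[0, 0])  # masked slice assigned
assert not x[0, 0, :, 3:].any()                 # columns 3..5 untouched
assert not x[1, 1].any()                        # masked-out (i, j) untouched
```

This sidesteps the mixed mask-plus-slice indexing that triggers the shape-mismatch error above.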
### Versions
```
Collecting environment information...
PyTorch version: 2.1.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: N/A
Python version: 3.9.18 (main, Sep 11 2023, 14:09:26) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
----------------------
Name: Intel(R) Xeon(R) Platinum 8374B CPU @ 2.70GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2700
MaxClockSpeed: 2700
L2CacheSize: 48640
L2CacheSpeed: None
Revision: 27142
----------------------
Name: Intel(R) Xeon(R) Platinum 8374B CPU @ 2.70GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU1
CurrentClockSpeed: 2700
MaxClockSpeed: 2700
L2CacheSize: 48640
L2CacheSpeed: None
Revision: 27142
Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] torch==2.1.1
[pip3] torchaudio==2.1.1
[pip3] torchvision==0.16.1
[conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cudart-dev 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-libraries-dev 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvrtc-dev 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.101 0 nvidia
[conda] cuda-opencl-dev 12.3.101 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcublas-dev 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcufft-dev 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.101 0 nvidia
[conda] libcurand-dev 10.3.4.101 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusolver-dev 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libcusparse-dev 12.0.2.55 0 nvidia
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libnvjitlink-dev 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h6b88ed4_46357 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py39h2bbff1b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.8 py39h2bbff1b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.4 py39h59b6b97_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy 1.26.0 py39h055cbcc_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy-base 1.26.0 py39h65a83cf_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] pytorch 2.1.1 py3.9_cuda12.1_cudnn8_0 pytorch
[conda] pytorch-cuda 12.1 hde6ce7c_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.1.1 pypi_0 pypi
[conda] torchvision 0.16.1 pypi_0 pypi
``` | triaged,module: advanced indexing | low | Critical |
2,661,991,071 | go | os: File sporadic waits upon non-blocking raw connect | ### Go version
go version go1.23.3 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/george/.cache/go-build'
GOENV='/home/george/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/george/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/george/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/go/go-1.23.3'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/go/go-1.23.3/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/george/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build215866673=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I'm working with `os.File` for raw non-blocking socket communication.
Occasionally I experience connect hangs on my client sockets.
I made a small example with a client that uses `os.File` to wrap a non-blocking TCP socket.
The example can be fetched here: [https://github.com/georgeyanev/go-raw-osfile-connect](https://github.com/georgeyanev/go-raw-osfile-connect)
This client connects to the remote side (the server) in a loop and writes a message upon successful connection. Then it closes the connection.
For connecting I use modified code from `netFD.connect` in Go's net package.
The original `connect` code calls `fd.pd.waitWrite` directly, which I cannot do because I have no access to the poll descriptor. In the provided example, in order to reach `fd.pd.waitWrite`, I use `rawConn.Write`, passing it a dummy function.
The difference from the original code is that here, before calling `fd.pd.waitWrite`, `rawConn.Write` calls `fd.writeLock()` and `fd.pd.prepareWrite()`. I wonder if calling these two functions could cause the problem; if so, then there is no reliable way to call `fd.pd.waitWrite` upon connect.
Actually I can run a few hundred, even a few thousand, successful connects before it hangs. That's why the loop runs 100,000 times.
When using standard TCP client code (`net.Dial`, `net.Conn`, etc.) there is no such issue.
Is this behaviour expected, or is it an issue that should be fixed?
This issue is tested on:
- Linux 6.10.11-linuxkit #1 SMP aarch64 GNU/Linux
- Linux 6.8.0-47-generic #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC x86_64 GNU/Linux
using go versions: 1.22.6 and 1.23.3
### What did you see happen?
I saw connect hanging after a few hundred or a few thousand requests.
In the following `strace -fTtt` of the client output I see that the `epoll` event
for writing (`EPOLLOUT`) is received by a PID different from the one that called `connect`, and
then a new `epoll_pwait` call is made by the PID that called `connect`, this time waiting forever:
```
[pid 17322] 12:29:04.771233 socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_TCP) = 3 <0.000042>
[pid 17322] 12:29:04.771447 connect(3, {sa_family=AF_INET, sin_port=htons(33334), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000048>
[pid 17322] 12:29:04.771665 fcntl(3, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK) <0.000067>
[pid 17322] 12:29:04.771799 epoll_ctl(4, EPOLL_CTL_ADD, 3, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=2144862493, u64=18446583856394404125}} <unfinished ...>
[pid 17323] 12:29:04.771844 <... nanosleep resumed>NULL) = 0 <0.010640>
[pid 17322] 12:29:04.771864 <... epoll_ctl resumed>) = 0 <0.000050>
[pid 17323] 12:29:04.771880 epoll_pwait(4, <unfinished ...>
[pid 17323] 12:29:04.771939 <... epoll_pwait resumed>[{events=EPOLLOUT, data={u32=2144862493, u64=18446583856394404125}}], 128, 0, NULL, 0) = 1 <0.000051>
[pid 17323] 12:29:04.772010 nanosleep({tv_sec=0, tv_nsec=10000000}, <unfinished ...>
[pid 17322] 12:29:04.772159 epoll_pwait(4, [], 128, 0, NULL, 0) = 0 <0.000031>
[pid 17322] 12:29:04.772245 epoll_pwait(4, <unfinished ...>
```
And the program hangs from now on.
### What did you expect to see?
I expect all 100_000 connect and write cycles to pass successfully.
I expect to be able to use non-blocking connect with `os.File` reliably.
Please suggest if there is some other proper way of doing this.
2,662,064,596 | godot | Cannot use keywords inside dictionary with Lua-style syntax | ### Tested versions
Godot v4.4.dev4
### System information
Godot v4.4.dev4 - macOS 15.1.0 - Multi-window, 2 monitors - Metal (Mobile) - integrated Apple M1 Max (Apple7) - Apple M1 Max (10 threads)
### Issue description
Writing this
```gdscript
{class="xyz"}
```
Results in this parse error:
```
Expected expression as dictionary key.
Expected ":" or "=" after dictionary key.
Expected ":" after dictionary key.
Expected expression as dictionary value.
Expected closing "}" after dictionary elements.
Expected end of statement after expression, found "class" instead.
Expected statement, found "class" instead.
```
[Docs](https://docs.godotengine.org/en/stable/classes/class_dictionary.html#dictionary) say:
```gdscript
# Alternative Lua-style syntax.
# Doesn't require quotes around keys, but only string constants can be used as key names.
# Additionally, key names must start with a letter or an underscore.
# Here, `some_key` is a string literal, not a variable!
another_dict = {
some_key = 42,
}
```
In this case, I think `class` (and other keywords) should not be treated as keywords, but as string literals.
I know I could write `{"class"="xyz"}` or `{"class": "xyz"}`, but I think that hurts the developer experience.
For reference, Ruby allows using keywords as Hash keys (Hash being Ruby's equivalent of Dictionary), because in that position it treats them as Symbols (similar to Godot's StringName).
<img width="296" alt="Screenshot 2024-11-15 at 10 53 31" src="https://github.com/user-attachments/assets/cec9f558-69d9-465d-987b-9ddc9826a5b2">
### Steps to reproduce
Write `{class="xyz"}` in a script file.
### Minimal reproduction project (MRP)
N/A | topic:gdscript,documentation | low | Critical |
2,662,078,429 | kubernetes | LimitRange and ResourceQuota Accept Values Without Units, Leading to Pod Scheduling and Runtime Failures | ### What happened?
When creating a LimitRange or ResourceQuota without specifying units for memory and storage (e.g., "2" instead of "2Gi"), Kubernetes accepts the resource creation. However, pods fail to be scheduled, resulting in the following error:
`Error creating: pods "...": [maximum memory usage per Pod is 2, but limit is 1Gi, maximum memory usage per Container is 2, but limit is 1Gi]`
Upon adjusting pod resource requests and limits to omit units (e.g., memory: 1), pods schedule but remain stuck in the ContainerCreating phase with containerd errors:
`Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed...`
### What did you expect to happen?
1. Kubernetes should reject LimitRange or ResourceQuota configurations without specified units for memory and storage.
2. Alternatively, Kubernetes should normalize the values to a default unit to ensure compatibility.
### How can we reproduce it (as minimally and precisely as possible)?
1. Create a LimitRange without units for memory:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limitrange
  namespace: test-namespace
spec:
  limits:
  - type: Pod
    max:
      memory: "2"
    min:
      memory: "500m"
  - type: Container
    max:
      memory: "2"
    min:
      memory: "250m"
```

2. Create a ResourceQuota without units:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-resourcequota
  namespace: test-namespace
spec:
  hard:
    limits.memory: "4"
    requests.memory: "4"
```

3. Deploy a pod with the following resource configuration:

```yaml
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1024Mi
```

4. Observe pod creation failure with a FailedCreate event.
5. Update the pod memory to omit units:

```yaml
resources:
  requests:
    cpu: 500m
    memory: 1
  limits:
    cpu: 500m
    memory: 1
```

6. Observe the pod stuck in the ContainerCreating state with containerd errors.
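For comparison, the same LimitRange with explicit binary suffixes is accepted and enforced as intended. Without a suffix, a Kubernetes memory quantity is interpreted as plain bytes, so `memory: "2"` means 2 bytes, which is why the scheduler rejects any realistic pod. The min/max values below are illustrative replacements, not values from the report:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limitrange
  namespace: test-namespace
spec:
  limits:
  - type: Pod
    max:
      memory: "2Gi"
    min:
      memory: "512Ki"
  - type: Container
    max:
      memory: "2Gi"
    min:
      memory: "256Ki"
```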
### Anything else we need to know?
This issue occurs consistently across multiple clusters (EKS, k3s, ...).
Errors in the ContainerCreating phase point to systemd and containerd issues when using unnormalized values.
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.6-eks-7f9249a
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Critical |
2,662,136,183 | godot | [4.3] Crash when exporting in headless mode on linux with godot 4.3 | ### Tested versions
Tested this on 4.3 stable
### System information
Windows 11
### Issue description
When exporting in headless mode via a GitHub Action, I get a crash:
```
savepack: end
Unloading addon: res://addons/Todo_Manager/plugin.cfg
Unloading addon: res://addons/gdUnit4/plugin.cfg
Unloading addon: res://addons/godot-playfab/plugin.cfg
Unloading addon: res://addons/godot-sqlite/plugin.cfg
Unloading addon: res://addons/gut/plugin.cfg
Unloading addon: res://addons/runtime_debug_tools/plugin.cfg
Unloading addon: res://addons/script-ide/plugin.cfg
ERROR: Parameter "m" is null.
Unloading addon: res://addons/Todo_Manager/plugin.cfg
at: mesh_get_surface_count (servers/rendering/dummy/storage/mesh_storage.h:120)
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
ERROR: FATAL: Index p_index = 1 is out of bounds (size() = 0).
at: get (./core/templates/cowdata.h:205)
Illegal instruction (core dumped)
```
The export exe is still created in this instance however the process exits with code 132 (SIGILL) and this causes the rest of the CI pipeline to fail. I was doing this as part of a CI pipeline that exports the project on push to master.
Here are the commands used for the CI export
`GODOT_VERSION=4.3`
`EXPORT_DIR=build`
`EXPORT_NAME=test-project`
```
- name: Setup Godot
  run: |
    mkdir -v -p ~/.local/share/godot/export_templates/
    mkdir -v -p ~/.config/
    mv /root/.config/godot ~/.config/godot
    mv /root/.local/share/godot/export_templates/${GODOT_VERSION}.stable ~/.local/share/godot/export_templates/${GODOT_VERSION}.stable
- name: Windows Export
  run: |
    mkdir -v -p build/windows
    EXPORT_DIR="$(readlink -f build)"
    cd $PROJECT_PATH
    godot --headless --verbose --import --export-release "windows" "$EXPORT_DIR/windows/$EXPORT_NAME.exe"
```
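Until the crash itself is fixed, one CI stopgap (a sketch with a made-up helper, not Godot tooling) is to gate the step on the exported artifact existing rather than on the exporter's exit code, since the exe is still produced before the SIGILL:

```python
# Stopgap sketch (hypothetical helper): treat the export step as successful
# if the artifact was written, even when the exporter process itself dies
# with SIGILL (exit code 132) afterwards.
import os
import subprocess

def export_tolerant(cmd, artifact):
    status = subprocess.call(cmd)
    if os.path.isfile(artifact):
        print(f"artifact present, ignoring exporter exit status {status}")
        return True
    print(f"export failed: {artifact} missing (exit {status})")
    return False

# In the pipeline this would wrap the export command, e.g.:
# export_tolerant(["godot", "--headless", "--export-release", "windows", exe], exe)
```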
### Steps to reproduce
This looks like a recurrence of https://github.com/godotengine/godot/issues/89674
### Minimal reproduction project (MRP)
N/A | bug,needs testing,crash,topic:export | low | Critical |
2,662,137,178 | react | [Compiler Bug]: null reference exception if you assume a value is not null in a callback (which can be valid) | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAMygOzgFwJYSYAEAKgmDgBTBEC2AhgJ4BGCAclADacA0RAvgEoiwADrEicQhUnMiAXiJQwCAML1uzenADWlSsPkA+EeKLmiaTNSIAPZHSasO3AHS2BggNxmBfANoMLOxcvAC63uK+MAg4sMRBzqFEAPxEADxgAA70xAD0Jg7pACZ4AG5EhKqceLrywHDM-EQFPpj8IPxAA
### Repro steps
Similar to https://github.com/facebook/react/issues/31550, but with useCallback and no external deps.
You seem to assume that accessing `x.y` is safe in the render method if it is safe in a callback.
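A stripped-down sketch of the pattern (Python standing in for the playground's JSX; the names are made up): reading `x.y` inside a callback is valid even when `x` is null at render time, because the read only happens when the callback later fires; hoisting it into render, as the compiled output effectively does, turns a valid program into a null dereference.

```python
# Illustrative stand-in for the repro, not React code.
class Box:
    def __init__(self, y):
        self.y = y

def component(x):
    def on_click():
        return x.y  # evaluated lazily, after x is known to be non-None
    return on_click

handler = component(None)   # fine: nothing dereferences x during "render"
print(component(Box(42))()) # 42
```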
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
19.0.0-beta-a7bf2bd-20241110 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,662,144,111 | rust | compiletest: Locally running clang-based tests assumes a hard-coded path to clang | When trying to run clang-based run-make tests locally, it doesn't seem to be possible to easily change or specify the path to a custom clang. For example, if I run:
```
RUSTBUILD_FORCE_CLANG_BASED_TESTS=1 ./x test tests/run-make/pointer-auth-link-with-c-lto-clang
```
Then the rmake build will be invoked with
```
CLANG="~/rust/rust/build/aarch64-unknown-linux-gnu/llvm/bin/clang"
LLVM_BIN_DIR="~/llvm/rust-llvm/build/bin"
```
LLVM_BIN_DIR is set from config.toml, and that's correct. CLANG, on the other hand, is hard-coded by bootstrap and overwritten to the path where CI puts it. Setting the env var manually does not work, and the test keeps crashing unless I symlink the clang binary to where it points or modify the bootstrap source.
Compiletest has an option to specify the clang path, so the test _could_ be invoked as follows:
```
RUSTBUILD_FORCE_CLANG_BASED_TESTS=1 ./x test tests/run-make/pointer-auth-link-with-c-lto-clang -- --run-clang-based-tests-with ~/llvm/rust-llvm/build/bin/clang
```
But bootstrap already passes that flag to it so no luck, still complains:
```
Testing stage1 compiletest suite=run-make mode=run-make (aarch64-unknown-linux-gnu)
thread 'main' panicked at src/tools/compiletest/src/lib.rs:198:19:
OptionDuplicated("run-clang-based-tests-with")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:00:01
```
I think the easiest solution here would be to just make it so that compiletest lets you override this option by just taking whichever one comes last. A cleaner solution would be removing the assumption on the path to the clang binary and making it configurable.
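The last-wins behavior could look something like this (a hypothetical sketch in Python for illustration, not compiletest's Rust parser): when a flag is passed twice, keep the last value instead of raising OptionDuplicated, so a user-supplied flag can override the one bootstrap injects.

```python
# Hypothetical last-wins resolution for a repeated value-taking flag.
def last_wins(argv, flag):
    value = None
    it = iter(argv)
    for arg in it:
        if arg == flag:
            value = next(it, None)  # later occurrences overwrite earlier ones
    return value

argv = ["--run-clang-based-tests-with", "/build/llvm/bin/clang",
        "--run-clang-based-tests-with", "/home/me/llvm/bin/clang"]
print(last_wins(argv, "--run-clang-based-tests-with"))
# → /home/me/llvm/bin/clang
```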
cc @jieyouxu | T-bootstrap,C-bug,A-compiletest | low | Critical |
2,662,148,918 | vscode | vscode window is larger than it seems. transparent unclickable area around all windows. |
Type: <b>Bug</b>
Unable to click anything under the window: there is a transparent area of approximately 50px all around it. Since there is a shadow around every VS Code window, every window, issue reporter included, has this issue. I'm guessing it has something to do with the shadow.
Basically the window is larger than it seems, and when clicking at the edge of VS Code I'm unable to switch to another app that is underneath it. This is represented by the white rectangle, an approximation of the area around the window.

VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Linux x64 6.11.7-300.fc41.x86_64
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 5800X3D 8-Core Processor (16 x 3553)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 0, 0|
|Memory (System)|62.71GB (55.33GB free)|
|Process Argv|/home/tomi/Documents --crash-reporter-id ac1fc1ed-1df5-4e41-a000-f6055199a477|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|gnome|
|XDG_CURRENT_DESKTOP|GNOME|
|XDG_SESSION_DESKTOP|gnome|
|XDG_SESSION_TYPE|wayland|
</details><details><summary>Extensions (5)</summary>
Extension|Author (truncated)|Version
---|---|---
xml|Dot|2.5.1
vscode-docker|ms-|1.29.3
default-keys-windows|smc|0.0.10
php-debug|xde|1.35.0
php-intellisense|zob|1.3.3
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
vscrpc:30673769
962ge761:30959799
9b8hh234:30694863
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
impr_priority:31102340
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31181875
```
</details>
<!-- generated by issue reporter -->
| bug,ux,linux,workbench-os-integration | low | Critical |
2,662,214,339 | pytorch | [AOTI] Test case `test_aot_inductor.py::test_multi_device_cpu` is Running GPU AOT Inductor in CPU container runner. | ### 🐛 Describe the bug
The test case [`test_multi_device_cpu`](https://github.com/pytorch/pytorch/blob/1b95ca904f5020ad8649677cbef683fac9d8e768/test/inductor/test_aot_inductor.py#L304C1-L314C50) actually runs in the `AOTIModelContainerRunnerCpu`, but `torch.compile` sees the device as GPU, here "cuda". So the generated AOT Inductor C++ code is all built with USE_CUDA=1, while the container runner is the CPU one, which passes the stream as `nullptr`.
This case happened to pass on a CUDA machine because the CUDA API recognizes a `nullptr` as the current stream.
But for XPU, which I'm implementing, the SYCL API crashes with a `nullptr` stream. So I think we need to refine this test case.
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git8a80cee
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.35
Python version: 3.9.20 | packaged by conda-forge | (main, Sep 30 2024, 17:49:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @ezyang | oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,662,245,100 | PowerToys | Color picker: add color viewer mode | ### Description of the new feature / enhancement
Hi,
In color viewer mode, all color formats would be editable, and an additional panel on the right would show a preview of the input color.
### Scenario when this would be used?
I often need to visualize colors whose format is not common, and there are not many tools to do it. | Needs-Triage,Needs-Team-Response | low | Minor |
2,662,257,726 | rust | `--nocapture` doesn't follow common CLI conventions, making it a stumbling block to people debugging failures | By convention, users would expect to type in `--no-capture`. The fact that the argument is `--nocapture` trips people up, especially as they have to wait for their test to compile before they see the failure. Without spelling suggestions, they then need to consult the help to remember it's spelled without the middle `-`. Unless someone is doing this all the time to build up muscle memory to counteract intuition, this will trip people up each time.
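One possible fix, sketched hypothetically (not libtest's actual argument parser), is simply to accept both spellings and normalize before matching, so muscle memory and convention both work:

```python
# Hypothetical alias table: map the conventional spelling onto the one
# the parser already understands.
def normalize_flag(flag: str) -> str:
    aliases = {"--no-capture": "--nocapture"}
    return aliases.get(flag, flag)

print(normalize_flag("--no-capture"))  # --nocapture
print(normalize_flag("--nocapture"))   # --nocapture
```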
See also
- #24451
- https://hachyderm.io/@wezm@mastodon.decentralised.social/113485301109871075
- https://rust-lang.zulipchat.com/#narrow/channel/404371-t-testing-devex/topic/--nocapture/near/482538793 | A-libtest,C-bug,disposition-merge,finished-final-comment-period,T-testing-devex | medium | Critical |
2,662,258,480 | stable-diffusion-webui | RuntimeError: Could not infer dtype of NoneType | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The error occurs when I move the slider to close the lips in the Live Portrait extension; before I didn't get that error, but now I do.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-605-g05b01da0
Commit hash: 05b01da01f76c97011358a415820360546d284fb
Installing sd-webui-live-portrait requirement: changing imageio-ffmpeg version from None to 0.5.1
Installing sd-webui-live-portrait requirement: pykalman
Installing sd-webui-live-portrait requirement: onnxruntime-gpu==1.18 --extra-index-url "https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/"
Found existing installation: onnxruntime-gpu 1.18.0
Uninstalling onnxruntime-gpu-1.18.0:
Successfully uninstalled onnxruntime-gpu-1.18.0
Looking in indexes: https://pypi.org/simple, https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
Collecting onnxruntime-gpu==1.17.1
Downloading https://aiinfra.pkgs.visualstudio.com/2692857e-05ef-43b4-ba9c-ccf1c22c437c/_packaging/9387c3aa-d9ad-4513-968c-383f6f7f53b8/pypi/download/onnxruntime-gpu/1.17.1/onnxruntime_gpu-1.17.1-cp310-cp310-win_amd64.whl (149.1 MB)
-------------------------------------- 149.1/149.1 MB 2.4 MB/s eta 0:00:00
Requirement already satisfied: coloredlogs in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (15.0.1)
Requirement already satisfied: flatbuffers in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (24.3.25)
Requirement already satisfied: numpy>=1.21.6 in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (1.26.2)
Requirement already satisfied: packaging in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (23.2)
Requirement already satisfied: protobuf in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (3.20.0)
Requirement already satisfied: sympy in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (1.12)
Requirement already satisfied: humanfriendly>=9.1 in c:\ia(forge)\system\python\lib\site-packages (from coloredlogs->onnxruntime-gpu==1.17.1) (10.0)
Requirement already satisfied: mpmath>=0.19 in c:\ia(forge)\system\python\lib\site-packages (from sympy->onnxruntime-gpu==1.17.1) (1.3.0)
Requirement already satisfied: pyreadline3 in c:\ia(forge)\system\python\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime-gpu==1.17.1) (3.4.1)
Installing collected packages: onnxruntime-gpu
Successfully installed onnxruntime-gpu-1.17.1
CUDA 12.1
+---------------------------------+
--- PLEASE, RESTART the Server! ---
+---------------------------------+
Launching Web UI with arguments: --xformers --skip-torch-cuda-test --precision full --no-half --no-half-vae
Total VRAM 6144 MB, total RAM 15834 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 SUPER : native
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
C:\ia(forge)\system\python\lib\site-packages\transformers\utils\hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: C:\ia(forge)\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.9.0, num models: 10
10:01:00 - ReActor - STATUS - Running v0.7.1-b2 on Device: CUDA
Loading additional modules ... done.
2024-11-15 10:01:24,738 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\ia(forge)\webui\models\Stable-diffusion\realisticVisionV60B1_v51VAE-inpainting.safetensors', 'hash': 'b7aa5c67'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860/
To create a public link, set share=True in launch().
Startup time: 217.9s (prepare environment: 165.2s, launcher: 0.7s, import torch: 12.4s, initialize shared: 0.3s, other imports: 0.6s, load scripts: 5.7s, initialize google blockly: 21.9s, create ui: 7.5s, gradio launch: 3.3s, app_started_callback: 0.1s).
Environment vars changed: {'stream': False, 'inference_memory': 4687.0, 'pin_shared_memory': False}
[GPU Setting] You will use 23.70% GPU memory (1456.00 MB) to load weights, and use 76.30% GPU memory (4687.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 83.33% GPU memory (5119.00 MB) to load weights, and use 16.67% GPU memory (1024.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 4687.0, 'pin_shared_memory': False}
[GPU Setting] You will use 23.70% GPU memory (1456.00 MB) to load weights, and use 76.30% GPU memory (4687.00 MB) to do matrix computation.
[10:01:52] Load appearance_feature_extractor from C:\ia(forge)\webui\models\liveportrait\base_models\appearance_feature_extractor.safetensors done. (live_portrait_wrapper.py:46)
Load motion_extractor from C:\ia(forge)\webui\models\liveportrait\base_models\motion_extractor.safetensors done. (live_portrait_wrapper.py:49)
[10:01:53] Load warping_module from C:\ia(forge)\webui\models\liveportrait\base_models\warping_module.safetensors done. (live_portrait_wrapper.py:52)
[10:01:54] Load spade_generator from C:\ia(forge)\webui\models\liveportrait\base_models\spade_generator.safetensors done. (live_portrait_wrapper.py:55)
Load stitching_retargeting_module from C:\ia(forge)\webui\models\liveportrait\retargeting_models\stitching_retargeting_module.safetensors done. (live_portrait_wrapper.py:59)
Using InsightFace cropper (live_portrait_pipeline.py:47)
[10:01:58] FaceAnalysisDIY warmup time: 2.770s (face_analysis_diy.py:79)
[10:02:00] LandmarkRunner warmup time: 1.117s (human_landmark_runner.py:95)
Load source image from C:\Users\Usuario\AppData\Local\Temp\gradio\tmpzmbcg7mo.png. (gradio_pipeline.py:421)
[10:02:04] Calculating eyes-open and lip-open ratios successfully! (gradio_pipeline.py:432)
Traceback (most recent call last):
File "C:\ia(forge)\system\python\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "C:\ia(forge)\system\python\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "C:\ia(forge)\system\python\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "C:\ia(forge)\system\python\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\ia(forge)\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\ia(forge)\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\ia(forge)\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\ia(forge)\system\python\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "C:\ia(forge)\webui\extensions\sd-webui-live-portrait\scripts\main.py", line 183, in gpu_wrapped_execute_image_retargeting
out, out_to_ori_blend = pipeline.execute_image_retargeting(*args, **kwargs)
File "C:\ia(forge)\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ia(forge)\webui\extensions\sd-webui-live-portrait\liveportrait\gradio_pipeline.py", line 310, in execute_image_retargeting
lip_variation_three = torch.tensor(lip_variation_three).to(device)
RuntimeError: Could not infer dtype of NoneType
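A defensive sketch of a possible fix for the crashing line (variable names taken from the traceback; this is a guess, not the extension's actual code): coerce a possibly-None slider value to a float before handing it to `torch.tensor`.

```python
# Hypothetical guard: fall back to a default when the UI sends no value,
# so torch.tensor never sees None.
def safe_scalar(value, default=0.0):
    return float(default if value is None else value)

# In the extension this would wrap the failing call:
#   torch.tensor(safe_scalar(lip_variation_three)).to(device)
print(safe_scalar(None))   # 0.0
print(safe_scalar(0.25))   # 0.25
```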
### Steps to reproduce the problem
.
### What should have happened?
.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
.
### Console logs
```Shell
.
```
### Additional information
_No response_ | bug-report | low | Critical |
2,662,294,169 | node | [v18.20.5 / next v20.x LTS version] NodeJS should provide some kind of warning when using import assertions as they are non-standard | # Clarification
v18.20.4 (previous v18.x release) and v20.18.0 have a warning for both import assertions and import attributes
v18.20.5 removes this warning for both cases; I believe the import assertion warning should still be around.
Seems to be caused by https://github.com/nodejs/node/pull/55333
### Version
v18.20.5
### Platform
All
### Subsystem
_No response_
### What steps will reproduce the bug?
```sh
node --input-type=module -e 'import "data:application/json,{}" assert { type: "json" }'
```
### How often does it reproduce? Is there a required condition?
Every time
### What is the expected behavior? Why is that the expected behavior?
Regarding [What steps will reproduce the bug?]
It should provide some kind of warning since import assertions (not import attributes) are non-standard
### Additional information
This also applies to the next v20.x LTS version | v18.x,v20.x | low | Critical |
2,662,315,731 | react-native | [0.76] AccessibilityValue in View throws exception: "Exception in HostFunction: Loss of precision during arithmetic conversion: (long) " | ### Description
Similar to https://github.com/react-native-elements/react-native-elements/issues/3955 https://github.com/Sharcoux/slider/issues/102 https://github.com/callstack/react-native-paper/issues/4544, when any non-zero value is given to `AccessibilityValue.now` on Android & iOS, conversion occurs and an exception will be thrown.
(remark: Using `Math.round()` to avoid any (long) value will fix the issue temporarily.)
Both Expo (0.52) and bare React Native App (0.76.1 + SDK 34, 0.76.2 + SDK 35) can reproduce the bug. Expo 0.51 will not, so it is very likely related to New Architecture.
Same issue is found on iOS 18 in some cases; the Expo demo can show it.
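For illustration, the failing conversion can be sketched like this (a Python stand-in, not the actual folly/Fabric C++ code): a "lossless" double-to-long conversion must reject fractional values like 0.00001, which is why rounding first avoids the crash.

```python
# Illustrative stand-in for the native conversion that raises
# "Loss of precision during arithmetic conversion: (long) ...".
def to_long_exact(x: float) -> int:
    if x != int(x):
        raise ValueError(f"Loss of precision during arithmetic conversion: (long) {x}")
    return int(x)

print(to_long_exact(3.0))             # 3
print(to_long_exact(round(0.00001)))  # 0, the Math.round() workaround
# to_long_exact(0.00001) raises ValueError
```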
### Steps to reproduce
1. go to [demo](https://snack.expo.dev/OEofab5meHPozAXyOQg6_)
2. switch to Android/iOS and make sure expo == 0.52
3. click "Start Progress" to see the crash
### React Native Version
0.76.2
### Affected Platforms
Runtime - Android
Runtime - iOS
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
Binaries:
Node:
version: 23.1.0
path: /opt/homebrew/bin/node
Yarn:
version: 3.6.4
path: /opt/homebrew/bin/yarn
npm:
version: 10.9.0
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.11.04.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: 15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
Android:
```text
Exception in HostFunction: Loss of precision during arithmetic conversion: (long) 0.00001
```
iOS:
```text
Exception in HostFunction: Loss of precision during arithmetic conversion: (long long) 0.00001
```
### Reproducer
[https://snack.expo.dev/OEofab5meHPozAXyOQg6_](https://snack.expo.dev/OEofab5meHPozAXyOQg6_)
### Screenshots and Videos
_No response_ | Platform: Android,Needs: Triage :mag:,Newer Patch Available,Type: New Architecture | medium | Critical |
2,662,387,067 | vscode | Option to place "Start Debugging" control in Command Center | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
I love having the Debug Toolbar within the Command Center. Since it only appears while debugging, it would be helpful to have the "Start Debugging" control appear in its place when not debugging. This would eliminate the need to switch to the Run and Debug pane, making the workflow more convenient.
Here's a visual for what I'm suggesting:

Thank you! | feature-request,debug | low | Critical |
2,662,394,912 | PowerToys | A simplified functionality option for Awake | ### Description of the new feature / enhancement
An option where a single tap/left click on the Awake icon in the Taskbar could toggle it between "OFF" and "USER-SPECIFIC CHOICE" modes.
### Scenario when this would be used?
I am a heavy user of Awake, but rather than assigning it a specific time to stay awake, I always go for "Indefinite" because I don't know how each session may turn out. The existing options are probably more useful to other users, but it would be great if each user could customize the toggle functionality to their preference, i.e. single tap/left click toggle between "OFF" and "USER-SPECIFIC CHOICE". In my case, the "user-specific choice" would be "INDEFINITE", but it can be a "SPECIFIC TIME" for other users.
### Supporting information
_No response_ | Idea-Enhancement,Product-Awake | low | Minor |
2,662,404,703 | pytorch | [XPU] test_xpu.py::TestXpuXPU::test_generic_stream_event_xpu crashed with XPU support package 2025.0 | ### 🐛 Describe the bug
```
___________________ TestXpuXPU.test_generic_stream_event_xpu ___________________
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_xpu.py", line 288, in test_generic_stream_event
self.assertGreater(event1.elapsed_time(event2), 0)
RuntimeError: Profiling information is unavailable as the queue associated with the event does not have the 'enable_profiling' property.
To execute this test, run the following from the base repo dir:
python test/test_xpu.py TestXpuXPU.test_generic_stream_event_xpu
```
Refer: https://github.com/pytorch/pytorch/actions/runs/11855073324/job/33041258828#step:14:2504
### Versions
Python main: ae7f809bfcadb426f8024a8abf3f33f0ecfd5308
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,662,407,752 | godot | Shader compilation errors in the output have comments removed, but empty lines kept instead | ### Tested versions
- Reproducible in `master` (76fa7b291455a8ba24c50005072ebdb58f8a5984)
- Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Fedora Linux 41 (KDE Plasma) - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7600M XT (RADV NAVI33) - AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 Threads)
### Issue description
When a Godot shader has an error, the whole shader is printed to the Output dock / stdout, with a mark for where the error is.
That's not great UX, but that's not what this issue is about. The problem is that in the process of compilation we seem to strip comments, but not remove the comment lines, so for example this shader:
```glsl
shader_type spatial;
void vertex() {
// Called for every vertex the material is visible on.
}
void fragment() {
xx
// Called for every pixel the material is visible on.
}
//void light() {
// Called for every pixel for every light affecting the material.
// Uncomment to replace the default light processing function with this one.
//}
```
Becomes this in the output:
```
res://test.gdshader:8 - Unknown identifier in expression: 'xx'.
Shader compilation failed.
--res://test.gdshader--
1 | shader_type spatial;
2 |
3 | void vertex() {
4 |
5 | }
6 |
7 | void fragment() {
E 8-> xx
9 |
10 | }
11 |
12 |
13 |
14 |
15 |
16 |
```
All those empty lines are where comments were present in the source, and IMO make things pretty confusing.
Ideally, I think it would be best if comments were still present in that output.
Alternatively, the lines where comments were present could be deleted, instead of just replaced by empty lines. One issue there is that the line number for where the error happens would differ in the log output and in the source shader.
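For illustration, the tradeoff can be sketched like this (illustrative Python, not Godot's shader preprocessor): blanking comment lines keeps error line numbers aligned with the source, while deleting them shifts every later line.

```python
# Compare blanking vs deleting comment lines on a tiny shader-like source.
import re

src = "shader_type spatial;\n// a comment\nxx\n"
blanked = re.sub(r"//[^\n]*", "", src)
deleted = "\n".join(line for line in src.splitlines()
                    if not line.strip().startswith("//"))

print(blanked.splitlines().index("xx"))  # 2: same line index as in the source
print(deleted.splitlines().index("xx"))  # 1: shifted up by the deletion
```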
### Steps to reproduce
- Add MeshInstance3D with a ShaderMaterial
- In the shader, add a compilation error somewhere (e.g. `xx` in the `vertex` function)
- See the output and compare with the source shader
### Minimal reproduction project (MRP)
[test-shader.zip](https://github.com/user-attachments/files/17777876/test-shader.zip) | enhancement,discussion,usability,topic:shaders | low | Critical |
2,662,418,468 | vscode | Make Command Center debug launcher offer most recently used configuration first | Type: <b>Feature Request</b>
If I most recently used the second of my debug configurations the debug status bar panel shows this:

But when I click on it the list that Command Center shows lacks a most-recently-used first entry, so it defaults to the first one:

VS Code version: Code - Insiders 1.96.0-insider (28f7008e9b2799e3004c48c26fff3d02ec8f13d8, 2024-11-15T05:04:10.294Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | feature-request,debug,good first issue | low | Critical |