| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,562,559,027 | langchain | Error using Langgraph + Langchain -- Unknown message type exception | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
# Imports
import operator
from typing import Annotated, TypedDict, Union, Sequence, List
from langchain.agents import create_tool_calling_agent
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    HumanMessagePromptTemplate
)
from langchain_openai import AzureChatOpenAI
from langchain_core.tools import tool
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage, AIMessage, SystemMessage
from langgraph.graph import START
from langgraph.prebuilt.tool_executor import ToolExecutor
from langgraph.graph import END, StateGraph
from langgraph.checkpoint.memory import MemorySaver

@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return [
        "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    ]

# define graph state
class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    output: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
    messages: Annotated[Sequence[BaseMessage], operator.add]

# Define nodes and conditional edges
# Define the function that determines whether to continue or not
def should_continue(state: AgentState):
    if isinstance(state["output"], AgentFinish):
        return "end"
    else:
        return "continue"

# define the function that executes the tool
def execute_tools(state):
    intermediate_steps = []
    for agent_action in state["output"]:
        output = tool_executor.invoke(agent_action)
        intermediate_steps.append((agent_action, str(output)))
    return {"intermediate_steps": intermediate_steps}

# Define the function that calls the model
def call_model(state):
    output = tool_calling_agent.invoke(state)
    # We return a list, because this will get added to the existing list
    return {"output": output}

# set up the tool
tools = [search]
tool_executor = ToolExecutor(tools)

# Set up the model
model = AzureChatOpenAI(
    azure_endpoint="***",
    openai_api_version="***",
    deployment_name="GPT4omni",
    model_name="GPT4omni",
    openai_api_key="***",
    openai_api_type="azure"
)
model = model.bind_tools(tools)

prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You are a weather reporter, and using the search tool you find the weather of the places"),
        MessagesPlaceholder(variable_name='chat_history', optional=True),
        HumanMessagePromptTemplate.from_template("{input}"),
        MessagesPlaceholder(variable_name='agent_scratchpad')
    ]
)
tool_calling_agent = create_tool_calling_agent(model, tools, prompt)

# Define a new graph
workflow = StateGraph(AgentState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", execute_tools)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `tools`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("action", "agent")

# Set up memory
memory = MemorySaver()
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
# We add in `interrupt_before=["action"]`
# This will add a breakpoint before the `action` node is called
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])

# execution
thread = {"configurable": {"thread_id": "3"}}
input = "search for the weather in sf now"
response = app.invoke(input={"input": "search for the weather in sf now", "chat_history": []}, config=thread)
# The above will work and interrupt before the tool execution.
# Now to resume the execution
response = app.invoke(None, config=thread)
```
### Error Message and Stack Trace (if applicable)
```shell
File [~\AppData\Local\miniconda3\envs\test_langgraph_env\Lib\site-packages\langchain_openai\chat_models\base.py:232](http://localhost:8888/lab/tree/backup/langgraph/~/AppData/Local/miniconda3/envs/test_langgraph_env/Lib/site-packages/langchain_openai/chat_models/base.py#line=231), in _convert_message_to_dict(message)
230 message_dict = {k: v for k, v in message_dict.items() if k in supported_props}
231 else:
--> 232 raise TypeError(f"Got unknown type {message}")
233 return message_dict
TypeError: Got unknown type content='' additional_kwargs={'tool_calls': [{'id': 'call_0GzVAj4KksFGqiYKk0F5uvZO', 'function': {'arguments': '{"query":"current weather in San Francisco"}', 'name': 'search'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 68, 'total_tokens': 85}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_67802d9a6d', 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}} type='ai' id='run-78eb4727-e08d-40a1-9055-f18ae171162b-0' invalid_tool_calls=[] example=False tool_calls=[{'name': 'search', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_0GzVAj4KksFGqiYKk0F5uvZO', 'type': 'tool_call'}] usage_metadata={'input_tokens': 68, 'output_tokens': 17, 'total_tokens': 85}
```
### Description
I am using LangGraph with a human-in-the-loop implementation.
I checked the state snapshot values, and the `message_log` attribute contains `BaseMessage()`-typed data where it should contain `AIMessage()`.
When I try to resume the graph execution, it raises the stack trace shown above.
### System Info
Below are the versions
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.10 | packaged by conda-forge | (main, Sep 22 2024, 14:00:36) [MSC v.1941 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.11
> langsmith: 0.1.129
> langchain_aws: 0.1.16
> langchain_cohere: 0.2.2
> langchain_elasticsearch: 0.2.2
> langchain_experimental: 0.0.64
> langchain_google_community: 1.0.6
> langchain_google_vertexai: 1.0.6
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.21
> langchain_text_splitters: 0.2.2
> langgraph: 0.2.4
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.8
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> beautifulsoup4: 4.12.3
> boto3: 1.34.142
> cohere: 5.10.0
> dataclasses-json: 0.6.7
> db-dtypes: Installed. No version info available.
> elasticsearch[vectorstore-mmr]: Installed. No version info available.
> gapic-google-longrunning: Installed. No version info available.
> google-api-core: 2.20.0
> google-api-python-client: 2.131.0
> google-auth-httplib2: 0.2.0
> google-auth-oauthlib: Installed. No version info available.
> google-cloud-aiplatform: 1.56.0
> google-cloud-bigquery: 3.26.0
> google-cloud-bigquery-storage: Installed. No version info available.
> google-cloud-contentwarehouse: Installed. No version info available.
> google-cloud-discoveryengine: Installed. No version info available.
> google-cloud-documentai: Installed. No version info available.
> google-cloud-documentai-toolbox: Installed. No version info available.
> google-cloud-speech: Installed. No version info available.
> google-cloud-storage: 2.18.2
> google-cloud-texttospeech: Installed. No version info available.
> google-cloud-translate: Installed. No version info available.
> google-cloud-vision: Installed. No version info available.
> googlemaps: Installed. No version info available.
> grpcio: 1.63.0
> httpx: 0.27.2
> huggingface-hub: 0.24.5
> jsonpatch: 1.33
> langgraph-checkpoint: 1.0.14
> numpy: 1.26.4
> openai: 1.40.3
> orjson: 3.10.7
> packaging: 23.2
> pandas: 2.2.2
> pyarrow: Installed. No version info available.
> pydantic: 1.10.14
> PyYAML: 6.0.2
> requests: 2.32.3
> sentence-transformers: 3.1.1
> SQLAlchemy: 2.0.32
> tabulate: 0.9.0
> tenacity: 8.3.0
> tiktoken: 0.7.0
> tokenizers: 0.19.1
> transformers: 4.44.0
> typing-extensions: 4.12.2
``` | 🤖:bug | low | Critical |
2,562,583,476 | tauri | [feat] Force Bundle Identifier against rust's rules on iOS | ### Describe the problem
I use the bundle ID `ch.2221.billet`. This is an invalid bundle ID on Android, but it is valid on iOS. I use this bundle ID because I've used a different tech stack before that would allow it, and I own the domain 2221.ch. I currently get this error if I try to use it:
```
manaf@Cooper.local:~/Developer/billet.2221.ch (master*) $ pnpm tauri ios build 130
> billetterie-2221@1.0.0 tauri /Users/manaf/Developer/billet.2221.ch
> tauri "ios" "build"
thread '<unnamed>' panicked at crates/tauri-cli/src/mobile/mod.rs:421:6:
called `Result::unwrap()` on an `Err` value: IdentifierInvalid { identifier: "ch.2221.billet", cause: StartsWithDigit { label: "2221" } }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
ELIFECYCLE Command failed.
[1] 60141 abort pnpm tauri ios build
```
When I started working with Tauri V2 RC, I simply put `ch.u2221.billet` as the bundle ID, and searched and replaced that with `ch.2221.billet` in `project.pbxproj`. Now, since the latest Tauri V2 stable release, the bundle ID gets replaced by `ch.u2221.billet` at every build. I believe this is due to `synchronize_project_config` being called at every build https://github.com/tauri-apps/tauri/blob/983e7800b643e1d277cae55ea73f392fa99708ae/crates/tauri-cli/src/mobile/ios/build.rs#L205-L214
that sets the bundle identifier each time
https://github.com/tauri-apps/tauri/blob/983e7800b643e1d277cae55ea73f392fa99708ae/crates/tauri-cli/src/mobile/ios/mod.rs#L458-L462
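For context, the `StartsWithDigit` cause in the panic comes from reverse-domain-name label rules: Android (following Java package naming) rejects a dot-separated label that starts with a digit, while iOS accepts one. A rough sketch of that check (hypothetical, not Tauri's actual validation code):

```python
def label_valid_for_android(label: str) -> bool:
    # Java package-name labels must be non-empty and must not start with a digit.
    return bool(label) and not label[0].isdigit()

def identifier_valid_for_android(identifier: str) -> bool:
    # An identifier like "ch.2221.billet" is checked label by label.
    return all(label_valid_for_android(label) for label in identifier.split("."))
```

Under this model, `ch.2221.billet` fails on the `2221` label exactly as the panic reports, while the `ch.u2221.billet` workaround passes.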
### Describe the solution you'd like
I would like an option to force the bundle identifier, or to prevent synchronizing the pbxproj file on every build/dev run.
### Alternatives considered
Downgrade to the release candidate that still worked, or simply not use Tauri. I cannot change bundle IDs, because that would mean republishing my already published app on the App Store, and the people who have it installed right now would have to delete the old one and download the new one from the store.
### Additional context
_No response_ | type: feature request | low | Critical |
2,562,620,955 | pytorch | Automagically lift assertions / shape checks into torch._check | A lot of user code looks like the following:
```
if a.shape[0] != 2:
    raise RuntimeError("wrong shape")
```
If `a.shape[0]` is an unbacked symint, this throws a GuardOnDataDependent error. Does it make sense to automagically turn this into a `torch._check` (and/or a runtime assertion)?
cc @ezyang @chauhang @penguinwu @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,562,629,205 | pytorch | custom_op/triton_op autograd handling leads to off-by-one error message | Original reported by @aakhundov and @embg:
"in the error message [here](https://github.com/pytorch/pytorch/blob/main/torch/csrc/autograd/python_function.cpp#L177-L178) num_forward_inputs and num_outputs seem to be +1 from the reality [we are] seeing 5/4 in the error, where in reality [we] has 4/3). do we include self here? does this look like a bug? thanks!"
The problem is that [this](https://github.com/pytorch/pytorch/blob/4a9225fa1fe006d95fe7318cbbec27532b548683/torch/_library/autograd.py#L111) adds an additional argument to the autograd.Function, which the error message interprets as one additional argument. We should figure out some way to report a nicer error to the user.
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @bdhirsh @yf225 | module: autograd,triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,562,634,030 | PowerToys | Incorrect format when pasting the same data multiple times | ### Microsoft PowerToys version
0.85.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
Incorrect format when pasting as JSON the same data multiple times
### ✔️ Expected Behavior
[
"Incorrect format when pasting the same data multiple times"
]
[
"Incorrect format when pasting the same data multiple times"
]
### ❌ Actual Behavior
[
"Incorrect format when pasting the same data multiple times"
]
[
"[",
" \"Incorrect format when pasting the same data multiple times\"",
"]"
]
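The actual behavior above suggests the already-converted JSON text is being split into lines and wrapped again on the second paste. A minimal model of that double conversion (hypothetical, not the actual Advanced Paste code):

```python
import json

def paste_as_json(clipboard_text: str) -> str:
    # Model: treat the clipboard as plain lines and emit a JSON array of them.
    return json.dumps(clipboard_text.splitlines(), indent=2)

first = paste_as_json("Incorrect format when pasting the same data multiple times")
second = paste_as_json(first)  # the bug: the JSON result gets converted again
```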
### Other Software
_No response_ | Issue-Bug,Resolution-Fix Committed,Status-Reproducible,Product-Advanced Paste | low | Minor |
2,562,666,116 | pytorch | Less restrictive guards when using int(log2(sz)) | ### 🚀 The feature, motivation and pitch
```python
import torch
import torch._inductor.utils
from math import log2

def fn(x):
    sz = x.size(0)
    lg = log2(sz)
    i_lg = int(lg)
    return x + i_lg

with torch._inductor.utils.fresh_inductor_cache():
    fn_c = torch.compile(fn, dynamic=True)
    for i in range(8):
        fn_c(torch.randn((17 + i,), device="cuda"))
```
`int(log2(x.size(0)))` will be `4` for all 8 inputs to this function, but it compiles 8 times and each time guards on the specific size of the input. It would be great if the guard ended up just being `16 <= s0 < 32` instead.
Note: today, the specialization happens in `log2()`, not in `int()`
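The requested guard can be seen directly from the math: `int(log2(n))` is constant over each power-of-two interval, so one guard per interval would cover all of these inputs. A quick stdlib check of that claim:

```python
from math import log2

# All sizes the loop above feeds in (17..24) fall inside [16, 32),
# and int(log2(n)) is 4 for every size in that interval.
values = {int(log2(n)) for n in range(16, 32)}
```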
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Minor |
2,562,682,382 | flutter | `impeller_frame` makes debugging gpu usage on ios difficult | At some point we changed the way we handle frames in impeller on Metal. This broke the "Graphics Overlay" (it just doesn't show up) and broke the gpu probes inside of xcode when executing an app. You can sometimes profile with the frame debugger to get a sense but the "impeller_frame" boundary doesn't always show up making it impossible to measure.
xcode's gpu probe is empty
<img width="1400" alt="Screenshot 2024-10-02 at 2 28 40 PM" src="https://github.com/user-attachments/assets/d8e82fa8-ded2-43b6-99b7-546cb848fb86">
`impeller_frame` option not available
<img width="381" alt="Screenshot 2024-10-02 at 2 33 21 PM" src="https://github.com/user-attachments/assets/7806a6a6-aee2-4f94-a5da-05833f9beea0">
seen on: xcode 15.4
My understanding is that this was related to a change in [`displaySyncEnabled`](https://developer.apple.com/documentation/quartzcore/cametallayer/2887087-displaysyncenabled). I think there is a plist argument to avoid it.
cc @jonahwilliams | P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,562,684,096 | pytorch | DISABLED test_memory_plots_free_segment_stack (__main__.TestCudaMallocAsync) | Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_memory_plots_free_segment_stack&suite=TestCudaMallocAsync&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/30986672699).
Over the past 3 hours, it has been determined flaky in 49 workflow(s) with 147 failures and 49 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_memory_plots_free_segment_stack`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_cuda.py", line 3569, in test_memory_plots_free_segment_stack
self.assertTrue(("empty_cache" in ss) == (context == "all"))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_cuda.py TestCudaMallocAsync.test_memory_plots_free_segment_stack
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 | module: rocm,triaged,module: flaky-tests,skipped | low | Critical |
2,562,699,829 | godot | Scaling a Node3D with Physics Interpolation active shows incorrect scale for 1 frame, when using Tween | ### Tested versions
- v4.4.dev.custom_build [4254946de]
### System information
Windows 10 - Vulkan
### Issue description
Using Tween to scale a Node3D when coming from a different visibility state will display the wrong scale for at least 1 frame.
For example: a MeshInstance3D is hidden at the start with a scale of (1,1,1); a tween is used to show the mesh and then scale it from (0,0,0) to (1,1,1) (the order doesn't change it). For at least 1 frame it will display the original scale of (1,1,1).
Tween's `set_process_mode` doesn't change the behaviour. Physics Interpolation has to be ON for the bug to occur.
### Steps to reproduce
3D scene with a camera and a MeshInstance3D
```gdscript
extends Node3D

@onready var mesh: MeshInstance3D = $MeshInstance3D

func _ready() -> void:
    mesh.hide()
    var new_tween: Tween = create_tween()#.set_process_mode(Tween.TWEEN_PROCESS_PHYSICS) # Bug occurs in both process modes
    new_tween.tween_interval(1.0) # Let scene be visible
    # Example 1
    new_tween.tween_callback(mesh.show)
    new_tween.tween_property(mesh, "scale", Vector3.ONE, 2.0).from(Vector3.ZERO)
    # Example 2 (comment 2 lines above and uncomment next line)
    #new_tween.tween_callback(show_and_scale)

# Example 2
func show_and_scale() -> void:
    mesh.show()
    mesh.scale = Vector3.ZERO
    var new_tween: Tween = create_tween()
    new_tween.tween_property(mesh, "scale", Vector3.ONE, 2.0) # ".from_current()" also shows bug and ".from(mesh.scale)" also
```
### Minimal reproduction project (MRP)
[interp bug.zip](https://github.com/user-attachments/files/17236345/interp.bug.zip)
I include 2 examples in the MRP, simply comment and uncomment the lines as shown in the script.
Showcase:
<video src="https://github.com/user-attachments/assets/6723f6d1-746a-4398-b239-82956291837b"> | topic:core,documentation | low | Critical |
2,562,722,128 | pytorch | Upgrade AWS lambda functions from version 2.x to 3.x of the AWS SDK for JavaScript | v2 reached maintenance mode in Sept 2024 and will reach end of life Sept 2025
Context:
https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-javascript-v2/
Specifically, our autoscaler lambdas use are running on v2.x (and other lambdas may be using it as well) and need to be updated
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged | low | Minor |
2,562,812,592 | PowerToys | Dark bg color in tray | ### Description of the new feature / enhancement
Dark mode in the general settings is not applied to the tray app menu. I suggest improving this to work similarly to the Phone Link app.
### Scenario when this would be used?
Launch the tray app menu when dark mode is enabled.
### Supporting information
_No response_ | Needs-Triage,Area-Flyout | low | Minor |
2,562,868,096 | ollama | Embedding discrepancies vs SentenceTransformers and between Ollama Versions | ### What is the issue?
I'm working on a RAG app, I'd like to simplify my stack by using ollama instead of sentence transformers but I'm observing some odd behavior from ollama embeddings. Here's what I'm seeing:
1. Output embeddings don't match sentence transformers
2. Hitting the "embed" endpoint from the api docs [here](https://github.com/ollama/ollama/blob/79d3b1e2bdfc97542a7259b0c839520d39578514/docs/api.md#generate-embeddings) returns different results than the "embeddings" endpoint described in the blog [here](https://ollama.com/blog/embedding-models)
3. Embeddings returned by different versions of ollama are different.
To test this, I embedded the same prompt with sentence transformers, ollama/embed, and ollama/embeddings. I printed out the first 5 elements in the output vector to inspect, and calculated a cosine similarity matrix between each pair of embedding methods (results in a symmetrical matrix with 1s along the diagonal). I repeated this using all-mini-lm and nomic-embed-text. Here's the code and outputs for ollama 0.3.3 and 0.3.12



The embeddings are pretty clearly different, both when compared to sentence transformers and between ollama 0.3.3 and 0.3.12.
The cosine similarity between ollama and sentence transformers is pretty close, though. I also created a 3500-chunk vectorstore, once with ollama and once with sentence transformers. When I query them, I get the same chunks back, albeit in a different order. That makes me think every vector that gets embedded is being transformed somehow (compared to sentence transformers), but it's happening to all of them more or less equally.
Is this expected behavior? If so, is there anywhere I can read up on what ollama is doing under the hood?
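For reference, the cosine-similarity numbers in the matrices above can be reproduced with a small stdlib helper (this is just the standard formula, nothing Ollama-specific):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Identical embeddings give 1.0; the small deviations from 1.0 between backends are what the matrices above show.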
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.3, 0.3.12 | bug | low | Minor |
2,562,890,879 | rust | Tracking issue for RFC 3678: Inherent items for traits | This is a tracking issue for RFC 3678: Inherent items for traits.
The feature gate for the issue is `#![feature(inherent_items_for_traits)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Approve as lang experiment.
- (Approved implicitly given the circumstances.)
- [ ] Accept an RFC.
- https://github.com/rust-lang/rfcs/pull/3678
- [ ] Implement in nightly.
- https://github.com/rust-lang/rust/pull/130802
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Related
TODO.
| T-lang,C-tracking-issue,B-experimental | low | Critical |
2,562,911,703 | rust | Rustdoc ignores `no_inline` when re-exporting modules from other crates | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
This one is going to be a bit difficult to explain, so, here's an example of a rather large workspace experiencing this bug: https://github.com/clarfonthey/bevy/tree/no-inline-bug
Essentially, if a crate `example` has a module including `no_inline` exports like so:
```rust
pub mod prelude {
    #[doc(no_inline)]
    pub use crate::{thing1, thing2};
}
```
Then rustdoc will correctly show these as re-exports in the documentation for that module, on that crate. The exports will be shown as `crate::thing1` and `crate::thing2`.
However, if you re-export that module from another crate, you would expect these items to continue to show up as re-exports, just as `example::thing1` and `example::thing2` instead of relative to `crate::`. Instead, Rustdoc simply inlines them.
In the example given, the `bevy` crate contains `pub use bevy_internal::*` without any special `doc` attributes. Thus, you'll find that `bevy::prelude` inlines all of its contents instead of showing them as re-exports, even though `bevy_internal::prelude` does show them as re-exports.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (fb4aebddd 2024-09-30)
binary: rustc
commit-hash: fb4aebddd18d258046ddb51fd41589295259a0fa
commit-date: 2024-09-30
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
``` | T-rustdoc,C-bug,A-cross-crate-reexports | low | Critical |
2,562,913,909 | pytorch | [RFC] Retiring ProcessGroupGloo's CUDA support | ### 🚀 The feature, motivation and pitch
C for consensus.
The code will still be there, and be built, and users will still be able to use it if they want to. Unless in later future we feel that a code refactorization is necessary.
But the support for such usage -- including CI, bug fixes, new features, etc -- will be stopped.
Please comment below if you have any concerns.
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,562,943,169 | ui | [bug]: Does not give proper error message when create-next fails with issue during init | ### Describe the bug

Here the underlying error is that Next.js does not allow capital letters in a project name, but the output given by shadcn is not specific.
### Affected component/components
Shadcn cli
### How to reproduce
1. pnpm dlx shadcn@latest init
2. Give your project name with capital letters
Then you get a vague "something went wrong" error; instead, it should propagate the error from the Next.js CLI.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Not imp
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,562,979,292 | react-native | Issue with TextInput borderWidth style with decimal value | ### Description
TextInput shows a grey box at the right when the borderWidth style has a decimal number value.
### Steps to reproduce
Run the React Native app
Create a TextInput
Set the TextInput's borderWidth style to 0.5
Notice the grey box at the right
### React Native Version
0.75.4
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 13.6.9
CPU: (8) x64 Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
Memory: 867.37 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.10.0
path: /usr/local/bin/node
Yarn:
version: 4.0.2
path: /usr/local/bin/yarn
npm:
version: 10.2.5
path: /usr/local/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.0
- iOS 17.0
- macOS 14.0
- tvOS 17.0
- watchOS 10.0
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.21829.142.2421.12409432
Xcode:
version: 15.0.1/15A507
path: /usr/bin/xcodebuild
Languages:
Java:
version: 21.0.3
path: /usr/bin/javac
Ruby:
version: 3.3.0
path: /usr/local/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: ^18.3.1
react-native:
installed: 0.75.4
wanted: ^0.75.4
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
None
```
### Reproducer
https://snack.expo.dev/@myckhel/caa155
### Screenshots and Videos
https://github.com/user-attachments/assets/0511307f-1f6e-4205-b97e-68d15e501ac4
| Component: TextInput,Needs: Triage :mag: | low | Minor |
2,563,028,468 | neovim | window-local corner separator depends on redraw order (with global statusline) | ### Problem
When using global statusline and window-local 'fillchars', behavior of window-local corner separator depends on redraw order.
### Steps to reproduce
1. Run `nvim --clean`
2. Run `:set laststatus=3 cursorcolumn`
3. Run `:vnew`
4. Run `:new`
5. Run `:new`
6. Run `:wincmd j`
7. Run `:setlocal fillchars+=vertleft:@`, `@` appears at the first corner separator
8. Run `:wincmd k`, `@` disappears from the first corner separator
9. Run `:wincmd j`, `@` appears at both corner separators
10. Run `:wincmd j`, `@` disappears from the second corner separator
### Expected behavior
Behavior of corner separators shouldn't depend on redraw order
### Nvim version (nvim -v)
v0.11.0-dev-888+g4075e613b2
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Arch Linux
### Terminal name/version
kitty 0.36.4
### $TERM environment variable
xterm-kitty
### Installation
build from repo | bug,ui | low | Minor |
2,563,054,321 | pytorch | DISABLED test_memory_snapshot_with_cpp (__main__.TestCudaMallocAsync) | Platforms: linux, slow, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_memory_snapshot_with_cpp&suite=TestCudaMallocAsync&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/30997145243).
Over the past 3 hours, it has been determined flaky in 26 workflow(s) with 78 failures and 26 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_memory_snapshot_with_cpp`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 3392, in test_memory_snapshot_with_cpp
self.assertTrue("::rand" in str(b["frames"]))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCudaMallocAsync.test_memory_snapshot_with_cpp
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda_expandable_segments.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,563,054,372 | pytorch | DISABLED test_memory_snapshot_script (__main__.TestCudaMallocAsync) | Platforms: linux, rocm, win, windows, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_memory_snapshot_script&suite=TestCudaMallocAsync&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/30998131540).
Over the past 3 hours, it has been determined flaky in 35 workflow(s) with 105 failures and 35 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_memory_snapshot_script`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 3592, in test_memory_snapshot_script
self.assertTrue(b["frames"][0]["name"] == "foo")
IndexError: list index out of range
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCudaMallocAsync.test_memory_snapshot_script
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,563,125,478 | pytorch | RuntimeError: Expression of type - cannot be used in a type expression: __torch__.transformers_modules.code-5p-110m-embedding.modeling_codet5p_embedding.___torch_mangle_1368.CodeT5pEmbeddingModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE | ### 🐛 Describe the bug
I use the model https://huggingface.co/Salesforce/codet5p-110m-embedding; tracing and saving it works, but loading the saved module with `torch.jit.load` raises an error:
```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Salesforce/codet5p-110m-embedding', trust_remote_code=True)
model = AutoModel.from_pretrained('Salesforce/codet5p-110m-embedding', trust_remote_code=True)
input_list = [tokenizer(["xxxxxx"], padding='max_length', truncation=True, max_length=100, return_tensors='pt')['input_ids']]
ts_model = torch.jit.trace(model.eval(), input_list)
torch.jit.save(ts_model, "codet5p_embedding.pt")
ts_model = torch.jit.load("codet5p_embedding.pt").cuda()
```
the error message is:
```
RuntimeError Traceback (most recent call last)
Cell In[21], line 1
----> 1 ts_model = torch.jit.load("codet5p_embedding.pt").cuda()
2 org_out = model(input_list)
3 ts_out = ts_model(input_list)
File "/opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py", line 162, in load(f, map_location, _extra_files, _restore_shapes)
160 cu = torch._C.CompilationUnit()
161 if isinstance(f, (str, pathlib.Path)):
--> 162 cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files, _restore_shapes) # type: ignore[call-arg]
163 else:
164 cpp_module = torch._C.import_ir_module_from_buffer(
165 cu, f.read(), map_location, _extra_files, _restore_shapes
166 ) # type: ignore[call-arg]
RuntimeError:
Expression of type - cannot be used in a type expression:
__torch__.transformers_modules.code-5p-110m-embedding.modeling_codet5p_embedding.___torch_mangle_1368.CodeT5pEmbeddingModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
```
### Versions
Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 3.28.4
Libc version: glibc-2.17
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L20
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.0
/usr/lib64/libcudnn_adv_infer.so.8.9.0
/usr/lib64/libcudnn_adv_train.so.8.9.0
/usr/lib64/libcudnn_cnn_infer.so.8.9.0
/usr/lib64/libcudnn_cnn_train.so.8.9.0
/usr/lib64/libcudnn_ops_infer.so.8.9.0
/usr/lib64/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.1.2
[pip3] torch-dct==0.1.6
[pip3] torchaudio==2.1.2
[pip3] torchvision==0.16.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] pytorch 2.1.2 py3.10_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-dct 0.1.6 pypi_0 pypi
[conda] torchaudio 2.1.2 py310_cu121 pytorch
[conda] torchvision 0.16.2 py310_cu121 pytorch
[conda] triton 2.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,563,128,797 | langchain | langchain MoonshotChat example didn't work | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models.moonshot import MoonshotChat
from langchain_core.messages import HumanMessage, SystemMessage
import os

# Generate your api key from: https://platform.moonshot.cn/console/api-keys
os.environ["MOONSHOT_API_KEY"] = "xxxxx"

chat = MoonshotChat()

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    ),
]

chat.invoke(messages)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[5], line 6
2 from langchain_community.chat_models.moonshot import MoonshotChat
3 from langchain_core.messages import HumanMessage, SystemMessage
----> 6 chat = MoonshotChat()
8 messages = [
9 SystemMessage(
10 content="You are a helpful assistant that translates English to French."
(...)
14 ),
15 ]
17 response = chat.invoke(messages)
File c:\Users\eric\miniconda3\envs\myai\Lib\site-packages\langchain_core\_api\deprecation.py:213, in deprecated.<locals>.deprecate.<locals>.finalize.<locals>.warn_if_direct_instance(self, *args, **kwargs)
211 warned = True
212 emit_warning()
--> 213 return wrapped(self, *args, **kwargs)
File c:\Users\eric\miniconda3\envs\myai\Lib\site-packages\langchain_core\load\serializable.py:111, in Serializable.__init__(self, *args, **kwargs)
109 def __init__(self, *args: Any, **kwargs: Any) -> None:
110 """"""
--> 111 super().__init__(*args, **kwargs)
File c:\Users\eric\miniconda3\envs\myai\Lib\site-packages\pydantic\main.py:212, in BaseModel.__init__(self, **data)
...
ValidationError: 1 validation error for MoonshotChat
client
Input should be a valid dictionary or instance of _MoonshotClient [type=model_type, input_value=<openai.resources.chat.co...t at 0x000001B3AB6F9220>, input_type=Completions]
For further information visit https://errors.pydantic.dev/2.9/v/model_type
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
### Description
I executed simple use cases from the official documentation, but they didn't work.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.6 | packaged by conda-forge | (main, Sep 30 2024, 17:48:58) [MSC v.1941 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.8
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.130
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.8
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,563,149,208 | transformers | Implement LlamaGen for Image Generation | ### Feature request
Add support for LlamaGen, an autoregressive image generation model, to the Transformers library. LlamaGen applies the next-token prediction paradigm of large language models to visual generation.
Paper: https://arxiv.org/abs/2406.06525
Code: https://github.com/FoundationVision/LlamaGen
Key components to implement:
1. Image tokenizer
2. Autoregressive image generation model (based on Llama architecture)
3. Class-conditional and text-conditional image generation
4. Classifier-free guidance for sampling
### Motivation
LlamaGen demonstrates that vanilla autoregressive models without vision-specific inductive biases can achieve state-of-the-art image generation performance. Implementing it in Transformers would enable easier experimentation and integration with existing language models.
### Your contribution
I can help by contributing to this model, and provide examples and detailed explanations of the model architecture and training process if needed. | New model,Feature request,Vision | low | Major |
2,563,187,788 | opencv | OpenCV DNN_TARGET_NPU not supported exception in dnn target validation | ### System Information
OpenCV version: 4.10.0
Operating System / Platform: Ubuntu 22.04
Compiler & compiler version: Java JDK 11
### Detailed description
== OnnxModel from ==
https://github.com/opencv/opencv_zoo/blob/main/models/human_segmentation_pphumanseg/human_segmentation_pphumanseg_2023mar.onnx
== Working For ==
Dnn Backend : 2 (DNN_BACKEND_INFERENCE_ENGINE)
Dnn Target : 1 (DNN_TARGET_OPENCL)
== Error When ==
Dnn Backend : 2 (DNN_BACKEND_INFERENCE_ENGINE)
Dnn Target : 9 (DNN_TARGET_NPU)
Throws the following exception:
> Unknown OpenVINO target:
> 'preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL || preferableTarget == DNN_TARGET_OPENCL_FP16 || preferableTarget == DNN_TARGET_MYRIAD || preferableTarget == DNN_TARGET_HDDL || preferableTarget == DNN_TARGET_FPGA'
> where
> '(int)preferableTarget' is 9
]
### Steps to reproduce
```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

import hl.opencv.util.OpenCvUtil;

public class Test {
    public static void main(String[] args)
    {
        OpenCvUtil.initOpenCV();

        File fileOnnx = new File("./human_segmentation_pphumanseg_2023mar.onnx");
        Net net = Dnn.readNetFromONNX(fileOnnx.getAbsolutePath());
        net.setPreferableBackend(Dnn.DNN_BACKEND_INFERENCE_ENGINE);
        net.setPreferableTarget(Dnn.DNN_TARGET_NPU);

        // Run inference
        List<Mat> outputs = new ArrayList<>();
        net.forward(outputs, net.getUnconnectedOutLayersNames());
    }
}
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | question (invalid tracker),category: dnn | low | Critical |
2,563,333,498 | go | runtime:cpu4: TestGdbAutotmpTypes failures | ```
#!watchflakes
default <- pkg == "runtime:cpu4" && test == "TestGdbAutotmpTypes"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735129364045440305)):
=== RUN TestGdbAutotmpTypes
=== PAUSE TestGdbAutotmpTypes
=== CONT TestGdbAutotmpTypes
runtime-gdb_test.go:78: gdb version 15.0
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,563,340,445 | vscode | Terminal search gives up after only 1000 search highlights | * run a script that prints `TRIES n`
* search for regex `TRIES \d`
* scroll around a little
* :bug: search matches aren't reliable
<img width="1239" alt="Screenshot 2024-10-03 at 09 34 55" src="https://github.com/user-attachments/assets/9431e122-dbc9-4283-8c2c-3854080554a8">
| bug,help wanted,upstream,perf,upstream-issue-linked,terminal-find | low | Critical |
2,563,346,303 | ui | [bug]: tailwindcss-animate should be installed in devDependencies | ### Describe the bug
tailwindcss-animate should be installed in devDependencies
### Affected component/components
no
### How to reproduce
pnpx shadcn@latest init
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macOS 15.0, pnpm 9.12.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,563,346,576 | kubernetes | device manager: potential Double-Locking of Mutex | ### What happened?
In the file [pod_devices.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/devicemanager/pod_devices.go#L101), there is a potential issue of double-locking a mutex in the function `podDevices`.
- In line [102](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/devicemanager/pod_devices.go#L102), the read lock (`pdev.RLock()`) is acquired in the `podDevices` function to ensure safe access to `pdev.devs`.
- Later, on line [107](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/devicemanager/pod_devices.go#L107), `podDevices` calls [containerDevices](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/devicemanager/pod_devices.go#L114), which also attempts to acquire the same read lock via another call to `pdev.RLock()` on line [115](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/devicemanager/pod_devices.go#L115). This may result in double-locking the same mutex within the same thread.
Even though both `podDevices` and `containerDevices` only use read locks (`RLock()`), double-locking a mutex, even for reads, can lead to deadlocks, if there is another routine trying to acquire `Lock()` in between the `RLock()`s, according to the documentation of [RWMutex](https://pkg.go.dev/sync#RWMutex).
Go's standard `sync` mutexes are not reentrant, yet the lock is acquired recursively on the `podDevices` call path. You can read more on why Go does not implement recursive locking [here](https://groups.google.com/g/golang-nuts/c/XqW1qcuZgKg/m/Ui3nQkeLV80J).
### What did you expect to happen?
The expectation is that a mutex should not be double-locked within the same thread. In this case, either the locking logic needs to be restructured to prevent multiple acquisitions of the same lock or `containerDevices` should not attempt to lock the mutex if it is already locked by `podDevices`.
### How can we reproduce it (as minimally and precisely as possible)?
This issue is identified through static analysis, so it cannot be directly reproduced via runtime observation. However, if left unresolved, it could lead to unpredictable behavior in environments where recursive read locks are not supported.
### Anything else we need to know?
Sponsorship and Support:
This work is done by the security researchers from OpenRefactory and is supported by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/): [Project Alpha-Omega](https://alpha-omega.dev/). Alpha-Omega is a project partnering with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code - and get them fixed – to improve global software supply chain security.
The bug is found by running the Intelligent Code Repair (iCR) tool by [OpenRefactory, Inc.](https://openrefactory.com/) and then manually triaging the results.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
*No response*
### Install tools
*No response*
### Container runtime (CRI) and version (if applicable)
*No response*
### Related plugins (CNI, CSI, ...) and versions (if applicable)
*No response* | kind/bug,sig/node,priority/important-longterm,triage/accepted | low | Critical |
2,563,365,847 | PowerToys | a shortcut that allows you to open a new tab with the selected text searched in Google or any preferred search website | ### Description of the new feature / enhancement
a key combination shortcut that allows you to open a new tab with the selected text searched in Google or any preferred search website
### Scenario when this would be used?
when you find text on a webpage and want to search for it on Google in a new tab
### Supporting information
any googling situation | Idea-Enhancement,Needs-Triage | low | Minor |
2,563,383,147 | vscode | test: waitThrottleDelayBetweenWorkUnits is flaky | https://dev.azure.com/monacotools/Monaco/_build/results?buildId=296767&view=logs&j=78322c78-e078-5e20-184d-1b2bb6aed118&t=6d18ead1-26fd-5e07-64e3-cefc56c57b3e
Also see https://github.com/microsoft/vscode/pull/230335 | bug,unit-test-failure | low | Major |
2,563,443,446 | flutter | [iOS build] Add support for xcodebuild derivedDataPath flag | ### Use case
I want to run multiple parallel builds on my Mac Studio Runner machine.
By setting `derivedDataPath` I expect to no longer use `~/Library/Developer/Xcode/DerivedData` and so avoid issue about it.
The command below sadly didn't work:
```console
export FLUTTER_XCODE_DERIVEDDATA_PATH=DerivedData ; flutter build ipa
```
### Proposal
Maybe same proposal as https://github.com/flutter/flutter/issues/121702 by adding an environnement variable. | c: new feature,platform-ios,tool,t: xcode,c: proposal,a: build,P3,team-ios,triaged-ios | low | Minor |
2,563,450,232 | excalidraw | Support angle constrained freedraw (Shift + draw straight line in discrete angles) just like with the line tool | Holding Shift and creating a line tool constrains the line to a discrete angle. It would be ideal to have a similar functionality for freedraw elements, where when Shift is held, the freedraw drawing is straight and constrained to the same discrete angles, like with the Shift + line tool. It would mirror the behavior in Photoshop. | good first issue | low | Major |
2,563,461,677 | PowerToys | New+ is incorrectly translated to "Novità+" in Italian | ### Microsoft PowerToys version
0.85
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
New+
### Steps to reproduce
Just run PowerToys in Italian:

### ✔️ Expected Behavior
**New+** should be **Nuovo+** in Italian.
### ❌ Actual Behavior
**New+** is incorrectly translated as **Novità+** in Italian.
### Other Software
_No response_ | Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation,Product-New+ | low | Minor |
2,563,478,065 | rust | Suggestion: debug-assert that the allocator returns aligned memory | If the global allocator returns unaligned memory, the standard library catches it very late. Take this code:
```rs
#[global_allocator]
static GLOBAL: AccountingAllocator<mimalloc::MiMalloc> =
AccountingAllocator::new(mimalloc::MiMalloc);
#[repr(C, align(256))]
pub struct BigStruct { … }
fn collect() {
assert_eq!(align_of::<BigStruct>(), 256);
assert_eq!(size_of::<BigStruct>(), 256);
    let vec: Vec<BigStruct> = std::iter::once(BigStruct{…}).collect();
    dbg!(vec.as_ptr() as usize % std::mem::align_of::<BigStruct>()); // Sometimes prints 64
let slice = vec.as_slice(); // panics in debug builds with:
// 'unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX`'
}
```
Because of [a bug in MiMalloc](https://github.com/purpleprotocol/mimalloc_rust/issues/128), the `Vec` gets non-aligned memory.
This is not discovered until `vec.as_slice` calls `slice::from_raw_parts` which has a debug-assert.
I spent a lot of time looking for this bug before I realized it was MiMalloc that was to blame.
I suggest we add a debug-assert to the standard library to check that the allocator returns aligned memory, i.e. here:
https://github.com/rust-lang/rust/blob/8885239786c9efe5c6077de65536a5e092e34a55/library/alloc/src/raw_vec.rs#L442-L478
That would then report "The global allocator returned non-aligned memory", directly pointing the user at the problem, saving a lot of time for the next person. | C-feature-request,T-libs,A-align | low | Critical |
2,563,485,382 | pytorch | `import torch` takes forever | ### 🐛 Describe the bug
Hi! I am starting to learn and use PyTorch. However, it has one problem. Import torch is taking straight up forever.
I wrote a StackOverflow question describing my issue: https://stackoverflow.com/questions/79049542/pytorch-import-stuck-on-torch-version.
I don't know how, why, or what caused it to happen. I tried reinstalling torch (both a current and an older version), reloading VSCode window, and all sorts of things. This happens to me on both Linux and Windows, and I don't know why it happens, but it is very frustrating. It is even crazier when I could run it some time ago, but now it completely refuses to work and I don't know why.
And no, the collect_env.py file doesn't work. It imports torch, and of course I just can't.
Here are some additional information for those who want to investigate:
My `requirements.txt` file for the venv I am running in:
```
--index-url https://download.pytorch.org/whl/cpu
--extra-index-url https://pypi.org/simple
torch==2.3.1
torchinfo
torchvision
pytorch-lightning
numpy
matplotlib
pandas
scikit-learn
torchinfo
ISLP
ipykernel
ipython
jupyter
```
The code I was trying to run:
```
import torch
x = torch.rand(5, 3)
print(x)
```
The log for `python -v -c "import torch"`:
```
import _frozen_importlib # frozen
import _imp # builtin
import '_thread' # <class '_frozen_importlib.BuiltinImporter'>
import '_warnings' # <class '_frozen_importlib.BuiltinImporter'>
import '_weakref' # <class '_frozen_importlib.BuiltinImporter'>
import '_io' # <class '_frozen_importlib.BuiltinImporter'>
import 'marshal' # <class '_frozen_importlib.BuiltinImporter'>
import 'posix' # <class '_frozen_importlib.BuiltinImporter'>
import '_frozen_importlib_external' # <class '_frozen_importlib.FrozenImporter'>
# installing zipimport hook
import 'time' # <class '_frozen_importlib.BuiltinImporter'>
import 'zipimport' # <class '_frozen_importlib.FrozenImporter'>
# installed zipimport hook
# /usr/lib/python3.12/encodings/__pycache__/__init__.cpython-312.pyc matches /usr/lib/python3.12/encodings/__init__.py
# code object from '/usr/lib/python3.12/encodings/__pycache__/__init__.cpython-312.pyc'
import '_codecs' # <class '_frozen_importlib.BuiltinImporter'>
import 'codecs' # <class '_frozen_importlib.FrozenImporter'>
# /usr/lib/python3.12/encodings/__pycache__/aliases.cpython-312.pyc matches /usr/lib/python3.12/encodings/aliases.py
# code object from '/usr/lib/python3.12/encodings/__pycache__/aliases.cpython-312.pyc'
import 'encodings.aliases' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7fe15e0>
import 'encodings' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7fe0c20>
# /usr/lib/python3.12/encodings/__pycache__/utf_8.cpython-312.pyc matches /usr/lib/python3.12/encodings/utf_8.py
# code object from '/usr/lib/python3.12/encodings/__pycache__/utf_8.cpython-312.pyc'
import 'encodings.utf_8' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7fe3b60>
import '_signal' # <class '_frozen_importlib.BuiltinImporter'>
import '_abc' # <class '_frozen_importlib.BuiltinImporter'>
import 'abc' # <class '_frozen_importlib.FrozenImporter'>
import 'io' # <class '_frozen_importlib.FrozenImporter'>
import '_stat' # <class '_frozen_importlib.BuiltinImporter'>
import 'stat' # <class '_frozen_importlib.FrozenImporter'>
import '_collections_abc' # <class '_frozen_importlib.FrozenImporter'>
import 'genericpath' # <class '_frozen_importlib.FrozenImporter'>
import 'posixpath' # <class '_frozen_importlib.FrozenImporter'>
import 'os' # <class '_frozen_importlib.FrozenImporter'>
import '_sitebuiltins' # <class '_frozen_importlib.FrozenImporter'>
Processing global site-packages
Adding directory: '/home/minhduc/Documents/ML/lib/python3.12/site-packages'
Processing .pth file: '/home/minhduc/Documents/ML/lib/python3.12/site-packages/distutils-precedence.pth'
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/_distutils_hack/__pycache__/__init__.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/_distutils_hack/__init__.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/_distutils_hack/__pycache__/__init__.cpython-312.pyc'
import '_distutils_hack' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7fff140>
Processing user site-packages
Processing global site-packages
Adding directory: '/home/minhduc/Documents/ML/lib/python3.12/site-packages'
Processing .pth file: '/home/minhduc/Documents/ML/lib/python3.12/site-packages/distutils-precedence.pth'
# /usr/lib/python3.12/__pycache__/sitecustomize.cpython-312.pyc matches /usr/lib/python3.12/sitecustomize.py
# code object from '/usr/lib/python3.12/__pycache__/sitecustomize.cpython-312.pyc'
import 'sitecustomize' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7ffe180>
import 'site' # <class '_frozen_importlib.FrozenImporter'>
Python 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/__init__.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__init__.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/__init__.cpython-312.pyc'
import 'math' # <class '_frozen_importlib.BuiltinImporter'>
# /usr/lib/python3.12/__pycache__/platform.cpython-312.pyc matches /usr/lib/python3.12/platform.py
# code object from '/usr/lib/python3.12/__pycache__/platform.cpython-312.pyc'
# /usr/lib/python3.12/collections/__pycache__/__init__.cpython-312.pyc matches /usr/lib/python3.12/collections/__init__.py
# code object from '/usr/lib/python3.12/collections/__pycache__/__init__.cpython-312.pyc'
import 'itertools' # <class '_frozen_importlib.BuiltinImporter'>
# /usr/lib/python3.12/__pycache__/keyword.cpython-312.pyc matches /usr/lib/python3.12/keyword.py
# code object from '/usr/lib/python3.12/__pycache__/keyword.cpython-312.pyc'
import 'keyword' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b5f560>
# /usr/lib/python3.12/__pycache__/operator.cpython-312.pyc matches /usr/lib/python3.12/operator.py
# code object from '/usr/lib/python3.12/__pycache__/operator.cpython-312.pyc'
import '_operator' # <class '_frozen_importlib.BuiltinImporter'>
import 'operator' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b5fbc0>
# /usr/lib/python3.12/__pycache__/reprlib.cpython-312.pyc matches /usr/lib/python3.12/reprlib.py
# code object from '/usr/lib/python3.12/__pycache__/reprlib.cpython-312.pyc'
import 'reprlib' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b74560>
import '_collections' # <class '_frozen_importlib.BuiltinImporter'>
import 'collections' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b4ef00>
# /usr/lib/python3.12/re/__pycache__/__init__.cpython-312.pyc matches /usr/lib/python3.12/re/__init__.py
# code object from '/usr/lib/python3.12/re/__pycache__/__init__.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/enum.cpython-312.pyc matches /usr/lib/python3.12/enum.py
# code object from '/usr/lib/python3.12/__pycache__/enum.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/types.cpython-312.pyc matches /usr/lib/python3.12/types.py
# code object from '/usr/lib/python3.12/__pycache__/types.cpython-312.pyc'
import 'types' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b4f9b0>
# /usr/lib/python3.12/__pycache__/functools.cpython-312.pyc matches /usr/lib/python3.12/functools.py
# code object from '/usr/lib/python3.12/__pycache__/functools.cpython-312.pyc'
import '_functools' # <class '_frozen_importlib.BuiltinImporter'>
import 'functools' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b4f590>
import 'enum' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b76c90>
# /usr/lib/python3.12/re/__pycache__/_compiler.cpython-312.pyc matches /usr/lib/python3.12/re/_compiler.py
# code object from '/usr/lib/python3.12/re/__pycache__/_compiler.cpython-312.pyc'
import '_sre' # <class '_frozen_importlib.BuiltinImporter'>
# /usr/lib/python3.12/re/__pycache__/_parser.cpython-312.pyc matches /usr/lib/python3.12/re/_parser.py
# code object from '/usr/lib/python3.12/re/__pycache__/_parser.cpython-312.pyc'
# /usr/lib/python3.12/re/__pycache__/_constants.cpython-312.pyc matches /usr/lib/python3.12/re/_constants.py
# code object from '/usr/lib/python3.12/re/__pycache__/_constants.cpython-312.pyc'
import 're._constants' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7bce0f0>
import 're._parser' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7bccd10>
# /usr/lib/python3.12/re/__pycache__/_casefix.cpython-312.pyc matches /usr/lib/python3.12/re/_casefix.py
# code object from '/usr/lib/python3.12/re/__pycache__/_casefix.cpython-312.pyc'
import 're._casefix' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7bce240>
import 're._compiler' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7ba3800>
# /usr/lib/python3.12/__pycache__/copyreg.cpython-312.pyc matches /usr/lib/python3.12/copyreg.py
# code object from '/usr/lib/python3.12/__pycache__/copyreg.cpython-312.pyc'
import 'copyreg' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7bcf410>
import 're' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b5cd10>
import 'platform' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b4c920>
# /usr/lib/python3.12/__pycache__/textwrap.cpython-312.pyc matches /usr/lib/python3.12/textwrap.py
# code object from '/usr/lib/python3.12/__pycache__/textwrap.cpython-312.pyc'
import 'textwrap' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7b4ec90>
# /usr/lib/python3.12/ctypes/__pycache__/__init__.cpython-312.pyc matches /usr/lib/python3.12/ctypes/__init__.py
# code object from '/usr/lib/python3.12/ctypes/__pycache__/__init__.cpython-312.pyc'
# extension module '_ctypes' loaded from '/usr/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so'
# extension module '_ctypes' executed from '/usr/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so'
import '_ctypes' # <_frozen_importlib_external.ExtensionFileLoader object at 0x7357b7be9bb0>
# /usr/lib/python3.12/__pycache__/struct.cpython-312.pyc matches /usr/lib/python3.12/struct.py
# code object from '/usr/lib/python3.12/__pycache__/struct.cpython-312.pyc'
import '_struct' # <class '_frozen_importlib.BuiltinImporter'>
import 'struct' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7bea330>
# /usr/lib/python3.12/ctypes/__pycache__/_endian.cpython-312.pyc matches /usr/lib/python3.12/ctypes/_endian.py
# code object from '/usr/lib/python3.12/ctypes/__pycache__/_endian.cpython-312.pyc'
import 'ctypes._endian' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7beae10>
import 'ctypes' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7be83b0>
# /usr/lib/python3.12/__pycache__/inspect.cpython-312.pyc matches /usr/lib/python3.12/inspect.py
# code object from '/usr/lib/python3.12/__pycache__/inspect.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/ast.cpython-312.pyc matches /usr/lib/python3.12/ast.py
# code object from '/usr/lib/python3.12/__pycache__/ast.cpython-312.pyc'
import '_ast' # <class '_frozen_importlib.BuiltinImporter'>
# /usr/lib/python3.12/__pycache__/contextlib.cpython-312.pyc matches /usr/lib/python3.12/contextlib.py
# code object from '/usr/lib/python3.12/__pycache__/contextlib.cpython-312.pyc'
import 'contextlib' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7178530>
import 'ast' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b711b4a0>
# /usr/lib/python3.12/__pycache__/dis.cpython-312.pyc matches /usr/lib/python3.12/dis.py
# code object from '/usr/lib/python3.12/__pycache__/dis.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/opcode.cpython-312.pyc matches /usr/lib/python3.12/opcode.py
# code object from '/usr/lib/python3.12/__pycache__/opcode.cpython-312.pyc'
import '_opcode' # <class '_frozen_importlib.BuiltinImporter'>
import 'opcode' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b717bfe0>
import 'dis' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71405f0>
# /usr/lib/python3.12/collections/__pycache__/abc.cpython-312.pyc matches /usr/lib/python3.12/collections/abc.py
# code object from '/usr/lib/python3.12/collections/__pycache__/abc.cpython-312.pyc'
import 'collections.abc' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71a2330>
# /usr/lib/python3.12/importlib/__pycache__/__init__.cpython-312.pyc matches /usr/lib/python3.12/importlib/__init__.py
# code object from '/usr/lib/python3.12/importlib/__pycache__/__init__.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/warnings.cpython-312.pyc matches /usr/lib/python3.12/warnings.py
# code object from '/usr/lib/python3.12/__pycache__/warnings.cpython-312.pyc'
import 'warnings' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71a1df0>
import 'importlib' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71a2180>
import 'importlib.machinery' # <class '_frozen_importlib.FrozenImporter'>
# /usr/lib/python3.12/__pycache__/linecache.cpython-312.pyc matches /usr/lib/python3.12/linecache.py
# code object from '/usr/lib/python3.12/__pycache__/linecache.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/tokenize.cpython-312.pyc matches /usr/lib/python3.12/tokenize.py
# code object from '/usr/lib/python3.12/__pycache__/tokenize.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/token.cpython-312.pyc matches /usr/lib/python3.12/token.py
# code object from '/usr/lib/python3.12/__pycache__/token.cpython-312.pyc'
import 'token' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71bd6d0>
import '_tokenize' # <class '_frozen_importlib.BuiltinImporter'>
import 'tokenize' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71bc2c0>
import 'linecache' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71a3f20>
import 'inspect' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7be9010>
# /usr/lib/python3.12/__pycache__/threading.cpython-312.pyc matches /usr/lib/python3.12/threading.py
# code object from '/usr/lib/python3.12/__pycache__/threading.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/_weakrefset.cpython-312.pyc matches /usr/lib/python3.12/_weakrefset.py
# code object from '/usr/lib/python3.12/__pycache__/_weakrefset.cpython-312.pyc'
import '_weakrefset' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71f53a0>
import 'threading' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71bc1d0>
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/_utils.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_utils.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/_utils.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/traceback.cpython-312.pyc matches /usr/lib/python3.12/traceback.py
# code object from '/usr/lib/python3.12/__pycache__/traceback.cpython-312.pyc'
import 'traceback' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b701c590>
# /usr/lib/python3.12/__pycache__/typing.cpython-312.pyc matches /usr/lib/python3.12/typing.py
# code object from '/usr/lib/python3.12/__pycache__/typing.cpython-312.pyc'
import '_typing' # <class '_frozen_importlib.BuiltinImporter'>
import 'typing' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b701e780>
import 'torch._utils' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71f6c60>
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/_utils_internal.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_utils_internal.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/_utils_internal.cpython-312.pyc'
# /usr/lib/python3.12/logging/__pycache__/__init__.cpython-312.pyc matches /usr/lib/python3.12/logging/__init__.py
# code object from '/usr/lib/python3.12/logging/__pycache__/__init__.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/weakref.cpython-312.pyc matches /usr/lib/python3.12/weakref.py
# code object from '/usr/lib/python3.12/__pycache__/weakref.cpython-312.pyc'
import 'weakref' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7095c40>
# /usr/lib/python3.12/__pycache__/string.cpython-312.pyc matches /usr/lib/python3.12/string.py
# code object from '/usr/lib/python3.12/__pycache__/string.cpython-312.pyc'
import '_string' # <class '_frozen_importlib.BuiltinImporter'>
import 'string' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7097200>
import 'atexit' # <class '_frozen_importlib.BuiltinImporter'>
import 'logging' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b7072330>
# /usr/lib/python3.12/__pycache__/tempfile.cpython-312.pyc matches /usr/lib/python3.12/tempfile.py
# code object from '/usr/lib/python3.12/__pycache__/tempfile.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/shutil.cpython-312.pyc matches /usr/lib/python3.12/shutil.py
# code object from '/usr/lib/python3.12/__pycache__/shutil.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/fnmatch.cpython-312.pyc matches /usr/lib/python3.12/fnmatch.py
# code object from '/usr/lib/python3.12/__pycache__/fnmatch.cpython-312.pyc'
import 'fnmatch' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70c7fe0>
import 'errno' # <class '_frozen_importlib.BuiltinImporter'>
import 'zlib' # <class '_frozen_importlib.BuiltinImporter'>
# /usr/lib/python3.12/__pycache__/bz2.cpython-312.pyc matches /usr/lib/python3.12/bz2.py
# code object from '/usr/lib/python3.12/__pycache__/bz2.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/_compression.cpython-312.pyc matches /usr/lib/python3.12/_compression.py
# code object from '/usr/lib/python3.12/__pycache__/_compression.cpython-312.pyc'
import '_compression' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70fa5d0>
# extension module '_bz2' loaded from '/usr/lib/python3.12/lib-dynload/_bz2.cpython-312-x86_64-linux-gnu.so'
# extension module '_bz2' executed from '/usr/lib/python3.12/lib-dynload/_bz2.cpython-312-x86_64-linux-gnu.so'
import '_bz2' # <_frozen_importlib_external.ExtensionFileLoader object at 0x7357b70fac00>
import 'bz2' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70f9b20>
# /usr/lib/python3.12/__pycache__/lzma.cpython-312.pyc matches /usr/lib/python3.12/lzma.py
# code object from '/usr/lib/python3.12/__pycache__/lzma.cpython-312.pyc'
# extension module '_lzma' loaded from '/usr/lib/python3.12/lib-dynload/_lzma.cpython-312-x86_64-linux-gnu.so'
# extension module '_lzma' executed from '/usr/lib/python3.12/lib-dynload/_lzma.cpython-312-x86_64-linux-gnu.so'
import '_lzma' # <_frozen_importlib_external.ExtensionFileLoader object at 0x7357b70fb680>
import 'lzma' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70fadb0>
import 'shutil' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70c5f10>
# /usr/lib/python3.12/__pycache__/random.cpython-312.pyc matches /usr/lib/python3.12/random.py
# code object from '/usr/lib/python3.12/__pycache__/random.cpython-312.pyc'
# /usr/lib/python3.12/__pycache__/bisect.cpython-312.pyc matches /usr/lib/python3.12/bisect.py
# code object from '/usr/lib/python3.12/__pycache__/bisect.cpython-312.pyc'
import '_bisect' # <class '_frozen_importlib.BuiltinImporter'>
import 'bisect' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f09730>
import '_random' # <class '_frozen_importlib.BuiltinImporter'>
import '_sha2' # <class '_frozen_importlib.BuiltinImporter'>
import 'random' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70c65d0>
import 'tempfile' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b70c4470>
import 'torch._utils_internal' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b71f7b90>
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/torch_version.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/torch_version.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/torch_version.cpython-312.pyc'
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/version.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/version.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/__pycache__/version.cpython-312.pyc'
import 'torch.version' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f09fd0>
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/__pycache__/__init__.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/__init__.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/__pycache__/__init__.cpython-312.pyc'
import 'torch._vendor' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f0a1b0>
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__pycache__/__init__.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__init__.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__pycache__/__init__.cpython-312.pyc'
import 'torch._vendor.packaging' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f0a060>
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__pycache__/version.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/version.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__pycache__/version.cpython-312.pyc'
# /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__pycache__/_structures.cpython-312.pyc matches /home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/_structures.py
# code object from '/home/minhduc/Documents/ML/lib/python3.12/site-packages/torch/_vendor/packaging/__pycache__/_structures.cpython-312.pyc'
import 'torch._vendor.packaging._structures' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f0b1d0>
import 'torch._vendor.packaging.version' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f0a330>
import 'torch.torch_version' # <_frozen_importlib_external.SourceFileLoader object at 0x7357b6f09d00>
```
### Versions
`import torch` gets stuck, so I'll run `pip list` instead
```
Package Version
------------------------- --------------
aiohappyeyeballs 2.4.3
aiohttp 3.10.8
aiosignal 1.3.1
anyio 4.6.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
async-lru 2.0.4
attrs 24.2.0
autograd 1.7.0
autograd-gamma 0.5.0
babel 2.16.0
beautifulsoup4 4.12.3
bleach 6.1.0
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.3.2
comm 0.2.2
contourpy 1.3.0
cycler 0.12.1
debugpy 1.8.6
decorator 5.1.1
defusedxml 0.7.1
executing 2.1.0
fastjsonschema 2.20.0
filelock 3.16.1
fonttools 4.54.1
formulaic 1.0.2
fqdn 1.5.1
frozenlist 1.4.1
fsspec 2024.9.0
h11 0.14.0
httpcore 1.0.6
httpx 0.27.2
idna 3.10
interface-meta 1.3.0
ipykernel 6.29.5
ipython 8.28.0
ipywidgets 8.1.5
ISLP 0.4.0
isoduration 20.11.0
jedi 0.19.1
Jinja2 3.1.4
joblib 1.4.2
json5 0.9.25
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2023.12.1
jupyter 1.1.1
jupyter_client 8.6.3
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.5
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
kiwisolver 1.4.7
lifelines 0.29.0
lightning-utilities 0.11.7
lxml 5.3.0
MarkupSafe 2.1.5
matplotlib 3.9.2
matplotlib-inline 0.1.7
mistune 3.0.2
mpmath 1.3.0
multidict 6.1.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.3
notebook 7.2.2
notebook_shim 0.2.4
numpy 1.26.4
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.6.77
nvidia-nvtx-cu12 12.1.105
overrides 7.7.0
packaging 24.1
pandas 2.2.3
pandocfilters 1.5.1
parso 0.8.4
patsy 0.5.6
pexpect 4.9.0
pillow 10.4.0
pip 24.0
platformdirs 4.3.6
progressbar2 4.5.0
prometheus_client 0.21.0
prompt_toolkit 3.0.48
psutil 6.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
pycparser 2.22
pygam 0.9.1
Pygments 2.18.0
pyparsing 3.1.4
python-dateutil 2.9.0.post0
python-json-logger 2.0.7
python-utils 3.9.0
pytorch-lightning 2.4.0
pytz 2024.2
PyYAML 6.0.2
pyzmq 26.2.0
referencing 0.35.1
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.20.0
scikit-learn 1.5.2
scipy 1.11.4
Send2Trash 1.8.3
setuptools 75.1.0
six 1.16.0
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
statsmodels 0.14.3
sympy 1.13.3
terminado 0.18.1
threadpoolctl 3.5.0
tinycss2 1.3.0
torch 2.3.1+cpu
torchinfo 1.8.0
torchmetrics 1.4.2
torchvision 0.18.1+cpu
tornado 6.4.1
tqdm 4.66.5
traitlets 5.14.3
triton 3.0.0
types-python-dateutil 2.9.0.20241003
typing_extensions 4.12.2
tzdata 2024.2
uri-template 1.3.0
urllib3 2.2.3
wcwidth 0.2.13
webcolors 24.8.0
webencodings 0.5.1
websocket-client 1.8.0
widgetsnbextension 4.0.13
wrapt 1.16.0
yarl 1.13.1
``` | needs reproduction,triaged | low | Critical |
2,563,529,688 | godot | Exporting 3D projects to Raspberry Pi doesn't seem to work, ERROR: No loader found for resource: (expected type: CompressedTexture2D) | ### Tested versions
4.2 and 4.3
### System information
Windows on x86 and Linux on ARM
### Issue description
See title.
### Steps to reproduce
Just try exporting to Linux on ARM; the app will not launch.
### Minimal reproduction project (MRP)
Take any project and export it to Linux on ARM. | bug,platform:linuxbsd,needs testing,topic:import,topic:export | low | Critical |
2,563,571,951 | ollama | [prompt] add ollama configuration file | I think we can now try eating our own dog food, and let LLM write the code to solve [second most voted](https://github.com/ollama/ollama/issues?q=config+file+is%3Aopen+sort%3Areactions-%2B1-desc) issue "Please don't clutter the user home directory" (https://github.com/ollama/ollama/issues/228).
Here is my try at prompting.
---
Please write the code to add a config file to the `ollama` project. The config provides lookup values for `envconfig` variables. The file should live in the standard OS location, which is `~/.config/ollama/config.toml` on Linux.
The code should be commented, and the variable lookup order printed in debug mode.
An example config file should be included, with all variables commented out and set to their default values.
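For illustration, such a file might look like the sketch below; the keys mirror real `OLLAMA_*` environment variables, but the exact key names and defaults here are assumptions to be checked against `envconfig`:

```toml
# ~/.config/ollama/config.toml
# Each key corresponds to an OLLAMA_* environment variable; environment
# variables, when set, should take precedence over this file.

# Address the server binds to (OLLAMA_HOST).
# host = "127.0.0.1:11434"

# Directory where models are stored (OLLAMA_MODELS).
# models = "~/.ollama/models"

# How long models stay loaded in memory (OLLAMA_KEEP_ALIVE).
# keep_alive = "5m"

# Enable debug logging (OLLAMA_DEBUG).
# debug = false
```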
--- | feature request | low | Critical |
2,563,583,391 | deno | Better DX by referring URL link for common or known errors in the terminal | ## Issue
I have the following code,
```ts
import data from './data.json'
console.log(data);
```
And I get following error as output in the terminal
```bash
> deno main.ts
error: Expected a JavaScript or TypeScript module, but identified a Json module. Consider importing Json modules with an import attribute with the type of "json".
Specifier: file:///<path>/data.json
at file:///<path>/main.ts:1:18
```
## Requested change
I had to search to find the link that shows the solution with an example (https://docs.deno.com/examples/importing-json/), which I would expect to be part of the error output in the terminal.
```bash
> deno main.ts
error: Expected a JavaScript or TypeScript module, but identified a Json module. Consider importing Json modules with an import attribute with the type of "json".
Refer: https://docs.deno.com/examples/importing-json/
Specifier: file:///<path>/data.json
at file:///<path>/main.ts:1:18
```
```
Could something like this be added for all known errors/warnings? Thanks. | docs,suggestion | low | Critical |
2,563,617,060 | TypeScript | ScriptKind.Deferred treated as LanguageVariant.Standard | ### 🔎 Search Terms
Deferred ScriptKind TSX
Deferred ScriptKind Standard
Vue, eslint, tsx parser error
### 🕗 Version & Regression Information
Currently looking at TypeScript 5.5+; this likely affects most versions of TypeScript.
### ⏯ Playground Link
_No response_
### 💻 Code
Not specific to TS code; this concerns the compiler and parsing.
### 🙁 Actual behavior
When using typescript-eslint with their new projectSettings option, alongside Vue with TSX, the file fails to parse:
```
Component.vue
41:8 error Parsing error: '>' expected
```
Deferred ScriptKinds are treated as `LanguageVariant.Standard` rather than `LanguageVariant.JSX`:
https://github.com/microsoft/TypeScript/blob/main/src/compiler/parser.ts#L1746
https://github.com/microsoft/TypeScript/blob/main/src/compiler/utilities.ts#L8832
### 🙂 Expected behavior
Under the assumption that `ScriptKind.Deferred` could be treated as `LanguageVariant.JSX`, this would resolve typescript-eslint's issue with parsing extraFileExtensions such as `.vue` files.
I'm unsure what the consequences and other use cases of `ScriptKind.Deferred` are, so it could be as simple as adjusting `getLanguageVariant`.
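The adjustment gestured at above can be sketched; it is transliterated to Python here so it stands alone, with stand-in enums mirroring the compiler's `ScriptKind` and `LanguageVariant` (the real change would be the analogous one-line edit in `utilities.ts`, and the member values below are not the compiler's):

```python
from enum import Enum, auto

# Stand-ins for the TypeScript compiler's enums; member names mirror
# src/compiler/types.ts, the values do not.
class ScriptKind(Enum):
    TS = auto(); TSX = auto(); JS = auto(); JSX = auto(); JSON = auto(); DEFERRED = auto()

class LanguageVariant(Enum):
    STANDARD = auto(); JSX = auto()

def get_language_variant(kind: ScriptKind) -> LanguageVariant:
    # utilities.ts today maps TSX/JSX/JS/JSON to the JSX variant;
    # the suggestion is simply to add DEFERRED to that set so deferred
    # extensions (e.g. .vue) parse angle brackets as JSX.
    jsx_kinds = {ScriptKind.TSX, ScriptKind.JSX, ScriptKind.JS,
                 ScriptKind.JSON, ScriptKind.DEFERRED}
    return LanguageVariant.JSX if kind in jsx_kinds else LanguageVariant.STANDARD
```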
### Additional information about the issue
https://github.com/typescript-eslint/typescript-eslint/issues/9934 | Suggestion,Experimentation Needed | low | Critical |
2,563,660,172 | flutter | [iOS] Running async dart code as part of `applicationWillTerminate` | ### Use case
According to this document: https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1623111-applicationwillterminate
We can have a function that takes up to five seconds during the `applicationWillTerminate` callback. I agree that doing complicated things there would be wrong, but I think having the ability to do something is reasonable.
For example, one might want to schedule a notification when an app is closed during a critical process (I believe the Tesla app will send a notification on close if the app is being used as a key).
In Flutter, there is [notification_when_app_is_killed](https://pub.dev/packages/notification_when_app_is_killed), which can achieve this. However, I think its implementation is not ideal: it basically implements everything in Swift, overriding the `applicationWillTerminate` callback and sending a notification when it is called. It is weird because it is not generic; also, we have nice bindings for notifications in Dart, but writing Swift seems to defeat the point of using Flutter. I think that, ideally, `applicationWillTerminate` should trigger a Dart async function and wait for it. Then we could send the notification or do other things in that function.
I found this issue: https://github.com/flutter/flutter/issues/96779. I think the idea there kind of works, but not really.
It is possible to use `FlutterMethodChannel` to call a Dart function when `applicationWillTerminate` is called. However, I can't block `applicationWillTerminate` and wait for that Dart invocation to finish. If my understanding is correct, doing so would block the main thread, and `FlutterMethodChannel` requires the main thread.
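The behavior asked for above, a termination hook that blocks (within the platform's roughly five-second grace period) while registered handlers run, can be sketched framework-free; all names in this Python sketch are illustrative stand-ins, not Flutter or iOS APIs:

```python
import threading

GRACE_SECONDS = 5.0   # iOS allows roughly five seconds in applicationWillTerminate

_handlers = []        # cleanup callbacks registered by app code


def add_on_terminate(handler):
    """Register a callback to run when the app is about to terminate."""
    _handlers.append(handler)


def application_will_terminate():
    """What the native callback would do: run the handlers off the calling
    thread and block until they finish or the grace period expires.
    Returns True if all handlers completed in time."""
    done = threading.Event()

    def run_handlers():
        for handler in _handlers:
            handler()
        done.set()

    threading.Thread(target=run_handlers, daemon=True).start()
    return done.wait(timeout=GRACE_SECONDS)
```

The design point is the bounded wait: the native side never blocks longer than the OS would allow anyway.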
### Proposal
I naively think `WidgetBindingObserver` should support this: `applicationWillTerminate` should trigger a change in the lifecycle, and `applicationWillTerminate` should be blocked until all `WidgetBindingObserver` callbacks are finished. | c: new feature,platform-ios,framework,engine,c: proposal,P2,team-ios,triaged-ios | low | Minor |
2,563,711,453 | angular | Provide example in documentation about ngComponentOutletContent | ### Describe the problem that you experienced
There is no example of the ngComponentOutletContent property; please provide one in the documentation!
### Enter the URL of the topic with the problem
_No response_
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
_No response_
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much as detail as you can
_No response_ | help wanted,good first issue,area: docs | low | Critical |
2,563,724,444 | godot | TLS handshake error in AssetLib | ### Tested versions
- Reproducible in v4.4.dev2.official [97ef3c837], v4.4.dev3.official [f4af8201b] and later. (error -110 and -28928)
- In v4.3.stable.official [77dcf97d8] and v4.4.dev1.official [28a72fa43], it gives a different error (error -9984).
### System information
Godot v4.4.dev2 - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 31.0.15.3623) - AMD Ryzen 5 5600X 6-Core Processor (12 Threads)
### Issue description
When changing pages in the asset library, it sometimes gives a TLS handshake error.

~And sometimes the editor will crash.~ (This is unrelated and has been fixed.)
The computer's network connection is fine and no proxy is used.
Probably related to #79162.
### Steps to reproduce
- Create an empty project.
- Click AssetLib
- (Change pages)
- wait some time
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,confirmed,topic:assetlib,needs testing | low | Critical |
2,563,750,360 | deno | Shebangs fail to parse in `deno test --doc` | This should not result in a syntax error:
```ts
/**
* Documentation of my function.
*
* @example Usage
* ```ts
* #!/usr/bin/env -S deno run --allow-read
* foo("bar")
* ```
*/
export function foo(s:string) {
return Deno.readTextFileSync(import.meta.filename!)
}
```
| bug,testing | low | Critical |
2,563,761,129 | opencv | ts: ROI cases are not being covered by ArrayTest | `ArrayTest` class have two storages for arrays: `vector<vector<void*>> test_array` for old functions and `vector<vector<Mat>> test_mat` for new functions. The first one is being filled with data and the second one is copied from the first:
https://github.com/opencv/opencv/blob/783fe72756a5ff6041c446599a96a766a0681d06/modules/ts/src/ts_arrtest.cpp#L210-L222
However, the function `cvarrToMat`, which is used to convert an old array to a `Mat`, clears the ROI indicator and creates a plain object instead (`data == datastart`, `dataend` unset or at the end). Example for `IplImage`: https://github.com/opencv/opencv/blob/783fe72756a5ff6041c446599a96a766a0681d06/modules/core/src/matrix_c.cpp#L109-L129
Thus all ArrayTest-based tests that use `test_mat` and the new functions will effectively miss the ROI case, for example `Core_MulSpectrums.accuracy`. Here none of `src1`, `src2` or `dst` will be a ROI `Mat` in any case: https://github.com/opencv/opencv/blob/783fe72756a5ff6041c446599a96a766a0681d06/modules/core/test/test_dxt.cpp#L822
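The core distinction in this report, a sub-array view that aliases its parent buffer versus a detached plain copy, can be shown with a minimal stdlib sketch (an OpenCV ROI `Mat` corresponds to the view below; the arrays produced by the conversion behave like the copy):

```python
buf = bytearray(range(16))    # the "parent" allocation
roi = memoryview(buf)[5:11]   # a view into it, like a ROI Mat
plain = bytes(roi)            # a detached copy, like the converted Mat

buf[5] = 99
assert roi[0] == 99           # the view still aliases the parent buffer
assert plain[0] == 5          # the copy does not: the ROI case is lost
```

A test suite exercising only `plain`-style arrays can never hit the code paths that depend on the data pointer sitting inside a larger buffer.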
| test | low | Minor |
2,563,788,694 | transformers | How to implement weight decay towards the pre-trained model? | Hello, let me one question.
If using the HF Trainer for supervised fine-tuning, how do I implement a penalty on the distance between the starting and current weights? This was shown to be effective in https://arxiv.org/abs/1706.03610 | Usage,Feature request | low | Major |
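The penalty from the cited paper (L2-SP: keep the weights close to the pre-trained starting point) reduces to a sum of squared differences; with the HF `Trainer` one would typically snapshot the pre-trained `state_dict` and add this term inside a `compute_loss` override. A framework-free sketch, with illustrative parameter layout:

```python
def l2_sp_penalty(current, anchor, lam=0.01):
    """(lam / 2) * sum over parameters of (w - w0)^2.

    `current` and `anchor` map parameter names to flat lists of floats,
    standing in for a model's state_dict and its pre-trained snapshot.
    """
    total = 0.0
    for name, weights in current.items():
        start = anchor[name]
        total += sum((w - w0) ** 2 for w, w0 in zip(weights, start))
    return 0.5 * lam * total

# In a Trainer subclass one would add this to the task loss, roughly:
#   loss = task_loss + l2_sp_penalty(model_params, pretrained_params, lam)
```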
2,563,824,886 | godot | window ignores min_size set from DisplayServer if wrap_content is set and scaling not 1 | ### Tested versions
Reproducible in 4.3-stable
### System information
Godot v4.3.stable - Pop!_OS 22.04 LTS - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 580 Series (RADV POLARIS10) - AMD Ryzen 7 7700X 8-Core Processor (16 Threads)
### Issue description
There are several inconsistencies and bugs in the way Godot determines the minimum and current window size.
Here, a scaling factor of 2 causes Godot to discard the values set via `DisplayServer.window_set_min_size` as soon as `wrap_content` is set to true.
This issue only exists when a scaling factor is set in the project settings (or from GDScript, I guess).
### Steps to reproduce
1. Move the Slider labeled `DisplayServer.window_set_min_size` all the way to the right.
2. Toggle the `wrap_controls` button
3. Observe as the window size snaps to the smaller size
(if testing in 4.3, ignore the presence of #91960)
### Minimal reproduction project (MRP)
[resize_mwe.zip](https://github.com/user-attachments/files/17243436/resize_mwe.zip)
| bug,documentation,topic:gui | low | Critical |
2,563,831,234 | pytorch | list index out of range in cpp_extension.py | ### 🐛 Describe the bug
I am trying to build a pip package that requires torch at build time, with:
```
cat pyproject.toml
[build-system]
requires = ["torch"]
```
The package builds a CUDA C++ extension and calls torch/utils/cpp_extension.py, but hits a list index out of range error:
```
Processing /home/jacob/dev/fused-ssim
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: fused_ssim
Building wheel for fused_ssim (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for fused_ssim (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [94 lines of output]
/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py:258: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/cuda/__init__.py:128: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
running bdist_wheel
running build
running build_py
copying fused_ssim/__init__.py -> build/lib.linux-x86_64-cpython-312/fused_ssim
running build_ext
/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py:414: UserWarning: The detected CUDA version (12.6) has a minor version mismatch with the version that was used to compile PyTorch (12.1). Most likely this shouldn't be a problem.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no g++ version bounds defined for CUDA version 12.6
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
building 'fused_ssim_cuda' extension
/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py:1965: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/cuda/__init__.py:654: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Traceback (most recent call last):
File "/home/jacob/dev/vertigo-ai/train/env/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/jacob/dev/vertigo-ai/train/env/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jacob/dev/vertigo-ai/train/env/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 421, in build_wheel
return self._build_with_temp_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 403, in _build_with_temp_dir
self.run_setup()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 503, in run_setup
super().run_setup(setup_script=setup_script)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 4, in <module>
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 183, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 199, in run_commands
dist.run_commands()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands
self.run_command(cmd)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 950, in run_command
super().run_command(command)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/command/bdist_wheel.py", line 398, in run
self.run_command("build")
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 950, in run_command
super().run_command(command)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 950, in run_command
super().run_command(command)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/command/build_ext.py", line 98, in run
_build_ext.run(self)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py", line 866, in build_extensions
build_ext.build_extensions(self)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 476, in build_extensions
self._build_extensions_serial()
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 502, in _build_extensions_serial
self.build_extension(ext)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/command/build_ext.py", line 263, in build_extension
_build_ext.build_extension(self, ext)
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 557, in build_extension
objects = self.compiler.compile(
^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py", line 670, in unix_wrap_ninja_compile
cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py", line 569, in unix_cuda_flags
cflags + _get_cuda_arch_flags(cflags))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/pip-build-env-43s_9fyc/overlay/lib/python3.12/site-packages/torch/utils/cpp_extension.py", line 1985, in _get_cuda_arch_flags
arch_list[-1] += '+PTX'
~~~~~~~~~^^^^
IndexError: list index out of range
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for fused_ssim
Failed to build fused_ssim
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (fused_ssim)
```
Looking at the code, it looks like it is possible for `arch_list` to be empty, which would explain why the code crashes here.
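A minimal sketch of the failure mode and a guard (the function name and parsing here are simplified for illustration and are not PyTorch's actual implementation); in practice, explicitly setting the `TORCH_CUDA_ARCH_LIST` environment variable is a common way to avoid the empty list:

```python
from typing import List

def get_cuda_arch_flags(arch_env: str) -> List[str]:
    """Parse a TORCH_CUDA_ARCH_LIST-style string into arch entries.

    Illustrative guard only: the reported crash comes from indexing
    arch_list[-1] when no architectures were detected or configured.
    """
    arch_list = [a for a in arch_env.replace(" ", ";").split(";") if a]
    if not arch_list:
        # Fail with a clear message instead of IndexError on arch_list[-1].
        raise RuntimeError(
            "No CUDA architectures detected; set TORCH_CUDA_ARCH_LIST, e.g. '8.0;9.0'"
        )
    arch_list[-1] += "+PTX"  # the line that raises in the real traceback
    return arch_list
```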
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.30.4
Libc version: glibc-2.40
Python version: 3.12.6 (main, Sep 8 2024, 13:18:56) [GCC 14.2.1 20240805] (64-bit runtime)
Python platform: Linux-6.9.8-arch1-1-x86_64-with-glibc2.40
Is CUDA available: N/A
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 3600 6-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 66%
CPU max MHz: 4208.2031
CPU min MHz: 2200.0000
BogoMIPS: 7203.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] triton==3.0.0
[conda] Could not collect
cc @malfet @seemethere | module: build,triaged | low | Critical |
2,563,909,653 | flutter | [two_dimensional_scrollables] Support scaling like InteractiveViewer does. | ### Use case
In large tables, there can be a need to zoom out a bit.
There is an example of an InteractiveViewer.builder with similar functionality:
https://codebrowser.dev/flutter/flutter/examples/api/lib/widgets/interactive_viewer/interactive_viewer.builder.0.dart.html
The problem is that to get this working, I would have to reimplement almost every feature from `two_dimensional_scrollables` to make it usable.
@Piinks is there any chance of adding a scaling feature to `two_dimensional_scrollables`?
### Proposal
Add a scaling feature to `two_dimensional_scrollables`, similar to InteractiveViewer. | c: new feature,framework,f: scrolling,package,c: proposal,P3,team-framework,triaged-framework,p: two_dimensional_scrollables | low | Minor |
2,563,917,910 | deno | Implement `node:fs`'s `lchmod` | And also the sync version. | feat,node compat | low | Minor |
2,564,028,148 | rust | Improve test output separation between failed tests (better visual anchoring) | Currently, test output upon failure is a bit mushed together, and it can be very difficult to tell which stdout/stderr belongs to which test failure. We might be able to improve the visual anchoring to help contributors navigate between sections. | C-enhancement,T-compiler,T-bootstrap,A-compiletest | low | Critical |
2,564,047,628 | rust | Tracking issue for release notes of #125175: Tracking Issue for PanicHookInfo::payload_as_str() |
This issue tracks the release notes text for #125175.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
```markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for PanicHookInfo::payload_as_str()](https://github.com/rust-lang/rust/issues/125175)
```
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
```markdown
```
cc @m-ou-se -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,A-panic,relnotes-tracking-issue | low | Minor |
2,564,086,383 | excalidraw | Request to Add "ə" Character Support in Excalifont | Hello,
I'm using the Excalidraw plugin in Obsidian and have noticed that Excalifont does not support the "ə" (schwa) character. This character is important for accurately representing certain words in my notes. Could it be included in an upcoming update?
Thank you for considering this request and for your ongoing support of the Excalidraw community.

| font | low | Minor |
2,564,115,923 | godot | DirectoryCreateDialog needs to be renamed | ### Tested versions
4.4 0f1c7e6
### Issue description
After #80473, DirectoryCreateDialog became a universal dialog for creating/duplicating files and directories. It needs to be renamed accordingly. Proposed names include:
- EditorCopyMoveDialog
- ???
Suggestions welcome.
### Steps to reproduce
https://github.com/godotengine/godot/blob/0f1c7e6b24deb67117010017671844831bc24858/editor/directory_create_dialog.h#L40
| enhancement,topic:editor,topic:codestyle | low | Minor |
2,564,146,147 | transformers | [Modular Transformers] Request for comments | Hello :wave:
Over the past few weeks, we've been merging PRs related to "Modular Transformers".
We explain why we chose to go that way in this [tweet](https://x.com/LysandreJik/status/1841505287879958730) and this [YouTube video](https://youtu.be/P-asaQVmA3o?si=aZIsaRCpGQYdS05L) at the PyTorch Conference by @ArthurZucker.
We detail how to use it, and have started putting in examples, in this [documentation page](https://huggingface.co/docs/transformers/modular_transformers). Finally, some models, like [GLM, are being contributed](https://github.com/huggingface/transformers/pull/33823) (by @Cyrilvallez) using the modular file.
We'd be eager to hear about your experience using the tool. This is very much experimental, it is brittle, but we'll work on having it be more flexible, and usable in a myriad of situations. Please do share your experience (positive or negative), thoughts, comments, we'd be eager to hear what could be improved in order to lower even more the bar to contributing new models.
| WIP,Modular | low | Major |
2,564,171,442 | pytorch | DDP deadlock ProcessGroupNCCL's watchdog got stuck | ### 🐛 Describe the bug
The process works correctly with DDP world size 1, but with world size > 1 it hangs, with GPU 0 at 0% utilization and GPU 1 pinned at max occupancy. I've reproduced this on both A100 and H100, with and without torch.compile.
I got this message on nightly:
```python
PG ID 0 PG GUID 0(default_pg) Rank 1] ProcessGroupNCCL's watchdog got stuck for 480 seconds without making progress in monitoring enqueued collectives. This typically indicates a NCCL/CUDA API (e.g., CudaEventDestroy) hang blocking the watchdog, and could be triggered by another thread holding the GIL inside a CUDA api (for example, CudaEventDestroy), or other deadlock-prone behaviors.If you suspect the watchdog is not actually stuck and a longer timeout would help, you can either increase the timeout (TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC) to a larger value or disable the heartbeat monitor (TORCH_NCCL_ENABLE_MONITORING=0).If either of aforementioned helps, feel free to file an issue to PyTorch about the short timeout or false positive abort; otherwise, please attempt to debug the hang.
```
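The two environment variables named in that message are the knobs it suggests for the watchdog monitor. Note these only delay or disable the monitor while debugging; they do not fix the underlying hang. Values below are examples only:

```shell
# Give the heartbeat monitor more time before it declares the watchdog stuck
export TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC=1800

# Or disable the heartbeat monitor entirely while debugging the hang
# (uncomment to use):
# export TORCH_NCCL_ENABLE_MONITORING=0
```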
### Versions
PyTorch version: 2.6.0.dev20241001+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Sep 30 2024, 18:08:57) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.1.100+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.41
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241001+cu124
[pip3] torchaudio==2.5.0.dev20241001+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.0.dev20241001+cu124
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241001+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241001+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241001+cu124 pypi_0 pypi
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,module: c10d,module: ddp | low | Critical |
2,564,199,549 | opencv | Test_Model.TextDetectionByDB test fails after new dnn engine integration | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
The new DNN engine brings a dedicated MatShape structure. A zero value in the new MatShape indicates an empty shape, independently of its size and other values. The old ONNX parser sets zero values for symbolic shapes with the hope of resolving them later. That logic is now broken and should be fixed with the new symbolic shape inference support.
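To illustrate the convention clash described above (stand-in code, not OpenCV's actual MatShape class): under the new engine, any zero dimension marks the whole shape as empty, so the parser's "0 means symbolic, resolve later" placeholder no longer round-trips:

```cpp
#include <vector>

// Stand-in for the new convention (not OpenCV's actual MatShape type).
bool shape_is_empty(const std::vector<int>& dims) {
    if (dims.empty())
        return true;
    for (int d : dims)
        if (d == 0)        // any zero now means "empty shape"...
            return true;   // ...not "symbolic dim to resolve later"
    return false;
}
```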
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,564,211,861 | rust | Tracking issue for RFC 3695: Allow boolean literals as cfg predicates | This is a tracking issue for the RFC "Allow boolean literals as cfg predicates" ([rust-lang/rfcs/#3695](https://github.com/rust-lang/rfcs/pull/3695)).
The feature gate for the issue is `#![feature(cfg_boolean_literals)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [x] Accept an RFC.
- https://github.com/rust-lang/rfcs/pull/3695
- [x] Implement in nightly.
- https://github.com/rust-lang/rust/pull/131034
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
None.
### Related
- https://github.com/rust-lang/rust/pull/131034
| B-RFC-approved,T-lang,B-RFC-implemented,C-tracking-issue | low | Critical |
2,564,273,682 | react-native | [Android] Content jump in multiline `TextInput` when adding/removing line of text due to delayed layout update happening after content update | ### Description
When adding a new line of text in multiline `TextInput` component (e.g. after pressing <kbd>Enter</kbd>), there's a slight delay between updating the content of the text and updating the height of the component (i.e. layout) on Android with New Architecture enabled.
This causes a content jump (to the top and back to the bottom) visible for one or more frames.
Before adding newline (correct):
<img width="398" alt="Screenshot 2024-10-03 at 16 45 31" src="https://github.com/user-attachments/assets/4938b7cd-6839-4951-8c9e-9820de27ad5c">
After pressing "Enter" (incorrect frame):
<img width="398" alt="Screenshot 2024-10-03 at 16 45 39" src="https://github.com/user-attachments/assets/e1ae855d-d0bd-44c2-b890-2d7daf514a48">
After one or more frames (correct):
<img width="398" alt="Screenshot 2024-10-03 at 16 45 47" src="https://github.com/user-attachments/assets/dda69b7e-de61-49eb-9562-d9c4c17a9d78">
Video recording:
https://github.com/user-attachments/assets/c0e529dc-58f6-45f3-a782-e8d65eb23ab4
Note that there's also an issue with measuring the height of the last line of the text if it's empty, but that one is already fixed by @j-piasecki in https://github.com/facebook/react-native/pull/42331 as well as by @NickGerleman in https://github.com/facebook/react-native/pull/46613. However, the issue with the content jump still persists.
### Steps to reproduce
```tsx
import {StyleSheet, View, TextInput} from 'react-native';
import React from 'react';
export default function App() {
return (
<View style={styles.container}>
<TextInput multiline style={styles.input} />
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
alignItems: 'center',
},
input: {
fontSize: 20,
borderWidth: 1,
borderColor: 'black',
width: 300,
marginTop: 100,
padding: 0,
},
});
```
### React Native Version
0.76.0-rc.3
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0
CPU: (12) arm64 Apple M3 Pro
Memory: 99.83 MB / 18.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.19.0
path: ~/.nvm/versions/node/v18.19.0/bin/node
Yarn:
version: 3.6.4
path: ~/.nvm/versions/node/v18.19.0/bin/yarn
npm:
version: 10.2.3
path: ~/.nvm/versions/node/v18.19.0/bin/npm
Watchman:
version: 2024.09.23.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /Users/tomekzaw/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "30"
- "31"
- "33"
- "34"
- "35"
Build Tools:
- 30.0.2
- 30.0.3
- 31.0.0
- 33.0.0
- 33.0.1
- 34.0.0
- 35.0.0
System Images:
- android-33 | Google APIs ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 2.7.8
path: /Users/tomekzaw/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0-alpha.2
wanted: 15.0.0-alpha.2
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.0-rc.3
wanted: 0.76.0-rc.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
No crash or failure
```
### Reproducer
https://github.com/tomekzaw/repro-new-arch-android-textinput
### Screenshots and Videos
https://github.com/user-attachments/assets/18ac93c6-bf15-4cac-b689-760097d75ff7 | Component: TextInput,Platform: Android,Partner,p: Software Mansion,Needs: Triage :mag:,Type: New Architecture | low | Critical |
2,564,333,818 | react | Bug: re-rendering order of onBlur and onClick inconsistent between desktop and mobile browsers |
React version: 18.3.1
## Steps To Reproduce
1. Open https://stackblitz.com/edit/react-onblur-onclick-re-rendering
2. Click on the input field to focus on it
3. Change the value in the input field (e.g. press 3)
4. Input must be still have focus, then press the "Click Me" button
Link to code example:
https://stackblitz.com/edit/react-onblur-onclick-re-rendering
## The current behavior
On desktop the log looks like:
```
onChange
onBlur
updateState to 2
render App
render App
onClick 2
```
but on mobile:
```
[Log] onChange
[Log] onBlur
[Log] updateState to – "2"
[Log] onClick – 1
[Log] render App
[Log] render App
```
This issue is reproducible on an iPad Air (4th generation) with iOS 17.5.
## The expected behavior
On both desktop and mobile, clicking the button while focus is still on the input should trigger a re-render before the onClick event fires.
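A framework-free model of the two orderings seen in the logs (illustrative only; it simulates the handler/commit sequence, not React's internals). A commonly used mitigation is to perform the action in `onMouseDown`/`onPointerDown`, which fires before blur on both platforms:

```javascript
// Simulate the sequences from the logs: on desktop the state commit (re-render)
// runs before onClick; on mobile onClick fires first and reads the stale value.
function run(order) {
  let state = 1;
  const log = [];
  const handlers = {
    onBlur: () => log.push("onBlur"),
    commit: () => { state = 2; log.push(`render ${state}`); },
    onClick: () => log.push(`onClick ${state}`),
  };
  for (const name of order) handlers[name]();
  return log;
}

const desktop = run(["onBlur", "commit", "onClick"]); // ends with "onClick 2"
const mobile = run(["onBlur", "onClick", "commit"]);  // contains stale "onClick 1"
```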
| Status: Unconfirmed | low | Critical |
2,564,335,389 | PowerToys | New+ Dated Folders and Text Files | ### Description of the new feature / enhancement
Enhance the New+ tool as follows:
If we create a template folder or file whose name includes a token such as yyyy_mm_dd, the token would be replaced with the current date.
If we create a template text file containing a token such as dd/mm/yyyy, it would be replaced with the current date inside the file.
Other tokens could cover the current time, etc.
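A sketch of how the proposed token expansion could behave (the token names and replacement order are assumptions drawn from the examples above, not an implemented PowerToys API):

```python
from datetime import date

def expand_template_name(name: str, today: date) -> str:
    """Replace the proposed date tokens in a New+ template name (illustrative)."""
    return (
        name.replace("yyyy", f"{today.year:04d}")
            .replace("mm", f"{today.month:02d}")
            .replace("dd", f"{today.day:02d}")
    )

# e.g. "Report_yyyy_mm_dd" becomes "Report_2024_10_03" on 2024-10-03
```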
### Scenario when this would be used?
It would make the already useful New+ much more powerful
### Supporting information
_No response_ | Idea-New PowerToy,Needs-Triage,Product-New+ | low | Minor |
2,564,355,089 | PowerToys | Korean Translation issue on What's new menu | ### Microsoft PowerToys version
0.85.0
### Utility with translation issue
Welcome / PowerToys Tour window
### 🌐 Language affected
Korean
### ❌ Actual phrase(s)
GitHub 리포지토리에 대한 자세한 릴리스 정보 보기
### ✔️ Expected phrase(s)
GitHub에서 자세한 릴리즈 노트 보기
### ℹ Why is the current translation wrong
Its original English phrase is "See more detailed release notes on GitHub" (in the What's new menu).
The current translation, rendered back into English, reads 'See more detailed information about a GitHub repository', which misstates the action. | Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation | low | Minor |
2,564,355,979 | material-ui | [RFC] Better consistency between Autocomplete and Select | ### What's the problem?
I'm exploring replacing some of our Select components with Autocomplete components (using the default [combo box](https://mui.com/material-ui/react-autocomplete/#combo-box) mode). My assumption was that these would look and act more or less the same as Select components, with the added ability to filter the list by typing on the keyboard. However, the components have several differences in appearance and behavior:
1. Autocomplete's popup list is non-modal (the screen can still scroll). Select's is modal.
2. Autocomplete closes its list if the user switches to another tab or window. Select's list stays open.
3. Popup positioning differs: Select's tries to center around the Select component but constrains to the boundaries of Select's container, while Autocomplete's is more flexible. (See the demo - the Select component's popup is left-aligned with the Select component, due to the contraints of the surrounding Box, while the Autocomplete's is not.)
4. In dark mode, Autocomplete's list's background is a darker color than Select's (because Select's Popover uses an `elevation` of 8, while Autocomplete gives its `Paper` an elevation of 2).
5. Autocomplete's list is always the width of the autocomplete element. Select's list uses the Select element as a minimum width but can auto-size based on the menu contents.
6. Select's list uses a transition. Autocomplete's list appears immediately.
7. Autocomplete's list uses word wrapping. Select's list cuts off long items.
8. Select's list takes up to the full height of the screen. Autocomplete's is constrained to above or below the autocomplete element and is limited in height (beyond what's required by fitting above or below the autocomplete element).
### What are the requirements?
Greater consistency and compatibility - avoid noticeable UX differences, unless there are clear reasons for them
### What are our options?
Issues 1-3 (differences in modal, focus handling, and positioning) are likely the result of Select's use of Menu and Popover versus Autocomplete's use of Popper. This may be related to #38756.
Items 4-5 and (to some extent) 6 can be handled by changes to props and components, although I personally would appreciate not needing to apply these customizations myself.
Item 8 seems like a desirable difference: the autocomplete element needs to stay visible so the user can see what they're typing, and limiting the height seems perfectly reasonable.
(There may be UX issues I haven't considered that argue against changing any or all of these.)
### Proposed solution
Some simple starting changes would include changing Autocomplete's Paper's elevation to 8 to match Popover and changing it to pass `minWidth` instead of `width` to the Popper's style.
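Until such defaults change, those two items can be approximated from the consumer side. A sketch (prop names assume MUI v5's Autocomplete `slotProps`; treat the exact API shape as an assumption, not a definitive fix):

```typescript
// Consumer-side overrides approximating Select's behavior:
// elevation 8 matches Select's Popover (item 4), and minWidth lets the
// popup auto-size beyond the input's width (item 5).
const autocompleteLikeSelect = {
  slotProps: {
    paper: { elevation: 8 },
    popper: { style: { minWidth: 300 } },
  },
};
```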
### Resources and benchmarks
https://stackblitz.com/edit/react-3apdzs?file=Demo.tsx has a side-by-side comparison of Select and Autocomplete, including a demonstration of some props and component changes to make Autocomplete more like Select.
Customizing Autocomplete width has come up before: #19376
**Search keywords**: | component: autocomplete,RFC | low | Major |
2,564,387,054 | flutter | ListView.separated adds extra bottom spacing on iOS | ### Steps to reproduce
Use a ListView.separated:
```
ListView.separated(
shrinkWrap: true,
physics: const NeverScrollableScrollPhysics(),
itemBuilder: (_, index) => Text('item $index'),
separatorBuilder: (_, __) => SizedBox(height: 1),
itemCount: 3,
)
```
### Expected results
No space between "Item 2" and "Another text" (as it happens on Android):

### Actual results
An extra space appears between "Item 2" and "Another text":

### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.start,
mainAxisSize: MainAxisSize.min,
children: [
Text('A text'),
ListView.separated(
shrinkWrap: true,
physics: const NeverScrollableScrollPhysics(),
itemBuilder: (_, index) => Text('item $index'),
separatorBuilder: (_, __) => SizedBox(height: 1),
itemCount: 3,
),
Text('Another text'),
],
)
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel stable, 3.19.5, on macOS 14.6.1 23G93 darwin-arm64, locale es-ES)
• Flutter version 3.19.5 on channel stable at /Users/carlos/fvm/versions/3.19.5
! Warning: `dart` on your path resolves to /opt/homebrew/Cellar/dart/2.19.3/libexec/bin/dart, which is not inside your current Flutter SDK checkout at /Users/carlos/fvm/versions/3.19.5. Consider adding /Users/carlos/fvm/versions/3.19.5/bin to the front of your
path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 300451adae (6 months ago), 2024-03-27 21:54:07 -0500
• Engine revision e76c956498
• Dart version 3.3.3
• DevTools version 2.31.1
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/carlos/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /opt/homebrew/Cellar/openjdk@17/17.0.9/libexec/openjdk.jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment Homebrew (build 17.0.9+0)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.13.0
[✗] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.2)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.22855.32
[✓] Connected device (3 available)
• Aquaris X2 Pro (mobile) • XZ010466 • android-arm64 • Android 10 (API 29)
• iPhone de Barkibu (mobile) • 191cb0431649e222e13d469dbb5e5f50f9fe48ea • ios • iOS 16.7.10 20H350
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 3 categories.
```
</details>
| platform-ios,framework,f: scrolling,has reproducible steps,P2,workaround available,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Major |
2,564,397,487 | TypeScript | Assertion signature on generics doesn't narrow | ### 🔎 Search Terms
Assertion AND signature
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about Assertion Signatures, Generics.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?noUncheckedIndexedAccess=true&allowUnusedLabels=true&downlevelIteration=true&noEmitHelpers=true&noUnusedLocals=true&noUnusedParameters=true&preserveConstEnums=true&removeComments=true&importHelpers=true&target=99&useUnknownInCatchVariables=false&exactOptionalPropertyTypes=true&noImplicitOverride=true&noFallthroughCasesInSwitch=true&noPropertyAccessFromIndexSignature=true&inlineSourceMap=true&inlineSources=true&stripInternal=true&ts=5.6.2#code/FAehAICIHEHtYCbgIYGdUFMBOAXSAucAW2QGsNVwEKcsBXAYxzqwEsA7Ac3FUYYwwJQEAGZ12TVrHYp02HAAoGhcaXawA7uwCUhNJlyUG4AN7DwFi6xHgFAQgbbT5y6-A4AFlk3h2GDeAAKgCeAA4YAKJY3lgKAOQAgnK4UjIiyKwANoJx2gDcLpYAvuYl5pAAQshI+vIEVDT0TCwc3NgxwGISOKmyBjgAPADCAHxKhEO6ffJGzq7Wtg5OZm6Wnt4BfgEh4VEx8Un9velZOfnAriVlGAAeobC44F2S0u40CpkcFISotK0A2gBdKa-NhcIHOC6rcDZHBPcTKObQ1zsZBEDA-P5cArI1yfPyoTFgzhAnHQorgAA+vjomUy4AAvDS6QUoasRA9bAxpL8YV9wLAbPiKMs2biFgoALLITwAOiwyHYCFgRAUTgG4AADLKAKyi3HQ56MpEG5Go9GESAARw02EgABoxaa3MLCeB-sLAY7nZcyQaKRhMpgTT7argFM9zj7LNz2LyTOaMPa+QSKUznnlLGBwAMGXnwBEAEqFgDyhfAAAkixEnabXbLQnRUB4Pl8o9HpuHE+A7Pm4sgAEYMXJ+3ElZHj6ESyO11ZYDDMLBpBGy12j1zzxcyNdiopAA
### 💻 Code
```ts
// "Good assert": makes destructuring succeed
// function assert(c: unknown): asserts c {
// if (!c) {
// throw new TypeError('Assertion failed');
// }
// }
// "Bad assert": destructuring error
function assert<C>(c: C): asserts c {
if (!c) {
throw new TypeError('Assertion failed');
}
}
export function test(lines: string[]): string[] {
let func: {
name: string;
lines: string[];
} | null = null;
for (const line of lines) {
if (Math.random() < 0.5) {
func = {
name: "qwer",
lines: [line],
};
} else {
assert(func);
const {name, lines} = func; // <=== ERROR HERE: 'name' implicitly has type 'any'
lines.push(line);
assert(name !== 'abc');
}
}
if (func)
return func.lines;
return lines;
}
```
### 🙁 Actual behavior
When the generic-assert implementation is active, the line -
```ts
const {name, lines} = func;
```
errors with `'name' implicitly has type 'any'`.
When the `unknown` assert implementation is active, types are properly inferred.
### 🙂 Expected behavior
Both `assert` versions declare an assertion signature, so both should be able to narrow the type.
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,564,404,176 | vscode | VSCode opens automatically when I plug in a mass storage device or click on "open with File Manager" (dolphin) |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version:
- 1.93
- OS Version:
- Kubuntu 23.10
Steps to Reproduce:
1. plug in msc
2. click on "open with file manager"
I think VS Code registered itself to auto-handle a MIME type that I did not ask it to, one that is usually handled by Dolphin. Please undo this, Microsoft. VS Code is awesome, but it is nowhere close to a sufficient replacement for Dolphin. | bug,install-update,linux,confirmation-pending | low | Critical |
2,564,407,443 | PowerToys | Markdown Preview Blocks Pictures | ### Microsoft PowerToys version
0.84.1
### Installation method
WinGet
### Running as admin
None
### Area(s) with issue?
File Explorer: Preview Pane
### Steps to reproduce
Open a Markdown with picture links.
### ✔️ Expected Behavior
Show the formatted Markdown WITH linked pictures.
This was previously discussed in Issue #3713 in May of 2020 with few comments, no resolution, and then closed. I have been using this Preview feature since it arrived and only recently, since July, has this become a problem for me. Not sure what changed.
### ❌ Actual Behavior
The message "Some pictures have been blocked to help prevent the sender from identifying this computer. Open this item to view pictures." is shown at the top of the preview pane, and a small icon with an "X" is shown where pictures are supposed to appear.
### Other Software
_No response_ | Issue-Bug,Product-File Explorer,Needs-Triage | low | Minor |
2,564,425,101 | material-ui | Material UI v6 + Remix + Vite running with “Prop `className` did not match” error | ### Steps to reproduce
Check out and run the following repository:
- https://github.com/onozaty/remix-prisma/tree/mui-v6
Steps:
1. Open with DevContainer
2. `npm run dev`
3. Open http://127.0.0.1:5173/ in your browser
### Current behavior
The following will appear in your browser's console
```
chunk-NUMECXU6.js?v=cbdf64e9:521 Warning: Prop `className` did not match. Server: "MuiPaper-root MuiPaper-elevation MuiPaper-elevation4 MuiAppBar-root MuiAppBar-colorPrimary MuiAppBar-positionStatic css-5iifl-MuiPaper-root-MuiAppBar-root" Client: "MuiPaper-root MuiPaper-elevation MuiPaper-elevation4 MuiAppBar-root MuiAppBar-colorPrimary MuiAppBar-positionStatic css-nt0iva-MuiPaper-root-MuiAppBar-root"
at header
at http://127.0.0.1:5173/node_modules/.vite/deps/chunk-56J7IAOU.js?v=cbdf64e9:528:45
at Paper2 (http://127.0.0.1:5173/node_modules/.vite/deps/@mui_material.js?v=cbdf64e9:1478:17)
at http://127.0.0.1:5173/node_modules/.vite/deps/chunk-56J7IAOU.js?v=cbdf64e9:528:45
at AppBar2 (http://127.0.0.1:5173/node_modules/.vite/deps/@mui_material.js?v=cbdf64e9:4366:17)
at div
at http://127.0.0.1:5173/node_modules/.vite/deps/chunk-56J7IAOU.js?v=cbdf64e9:528:45
at Box3 (http://127.0.0.1:5173/node_modules/.vite/deps/chunk-POD3YRTN.js?v=cbdf64e9:3025:19)
at div
at http://127.0.0.1:5173/node_modules/.vite/deps/chunk-56J7IAOU.js?v=cbdf64e9:528:45
at Container3 (http://127.0.0.1:5173/node_modules/.vite/deps/chunk-POD3YRTN.js?v=cbdf64e9:4495:19)
at App
at body
at html
at Layout (http://127.0.0.1:5173/app/root.tsx:9:3)
at RenderedRoute (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:4279:5)
at RenderErrorBoundary (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:4239:5)
at DataRoutes (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:5270:5)
at Router (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:4624:15)
at RouterProvider (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:5085:5)
at RemixErrorBoundary (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:7146:5)
at RemixBrowser (http://127.0.0.1:5173/node_modules/.vite/deps/@remix-run_react.js?v=cbdf64e9:8692:46)
at DefaultPropsProvider (http://127.0.0.1:5173/node_modules/.vite/deps/chunk-POD3YRTN.js?v=cbdf64e9:3404:3)
at RtlProvider (http://127.0.0.1:5173/node_modules/.vite/deps/chunk-POD3YRTN.js?v=cbdf64e9:3380:3)
at ThemeProvider (http://127.0.0.1:5173/node_modules/.vite/deps/chunk-POD3YRTN.js?v=cbdf64e9:3336:5)
at ThemeProvider2 (http://127.0.0.1:5173/node_modules/.vite/deps/chunk-POD3YRTN.js?v=cbdf64e9:3487:5)
at ThemeProviderNoVars (http://127.0.0.1:5173/node_modules/.vite/deps/@mui_material.js?v=cbdf64e9:858:10)
at ThemeProvider (http://127.0.0.1:5173/node_modules/.vite/deps/@mui_material.js?v=cbdf64e9:940:3)
at MuiProvider (http://127.0.0.1:5173/app/mui.provider.tsx:12:3)
```

### Expected behavior
Expect no errors in the browser console.
### Context
This did not occur with Material UI v5.
The code for v5 is as follows.
- https://github.com/onozaty/remix-prisma/tree/mui-v5
When I upgraded from Material UI v5 to v6, I initially got the following error with `npm run dev`.
```
3:39:56 PM [vite] Error when evaluating SSR module virtual:remix/server-build: failed to import "@mui/icons-material"
|- /workspaces/remix-prisma/node_modules/@mui/icons-material/esm/index.js:1
export { default as Abc } from './Abc';
^^^^^^
SyntaxError: Unexpected token 'export'
at wrapSafe (node:internal/modules/cjs/loader:1281:20)
at Module._compile (node:internal/modules/cjs/loader:1321:27)
at Module._extensions..js (node:internal/modules/cjs/loader:1416:10)
at Module.load (node:internal/modules/cjs/loader:1208:32)
at Module._load (node:internal/modules/cjs/loader:1024:12)
at cjsLoader (node:internal/modules/esm/translators:348:17)
at ModuleWrap.<anonymous> (node:internal/modules/esm/translators:297:7)
at ModuleJob.run (node:internal/modules/esm/module_job:222:25)
at async ModuleLoader.import (node:internal/modules/esm/loader:316:24)
at async nodeImport (file:///workspaces/remix-prisma/node_modules/vite/dist/node/chunks/dep-CzJTQ5q7.js:53536:15)
at async ssrImport (file:///workspaces/remix-prisma/node_modules/vite/dist/node/chunks/dep-CzJTQ5q7.js:53392:16)
at async eval (/workspaces/remix-prisma/app/routes/customers+/index.tsx:5:31)
at async instantiateModule (file:///workspaces/remix-prisma/node_modules/vite/dist/node/chunks/dep-CzJTQ5q7.js:53451:5)
```
To work around this, I added the following settings in `vite.config.ts`:
```ts
ssr: {
noExternal: [/^@mui\//],
},
```
Now the screen itself can be displayed, but there is an error in the browser console.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Linux 5.15 Debian GNU/Linux 12 (bookworm) 12 (bookworm)
Binaries:
Node: 20.15.1 - /usr/local/bin/node
npm: 10.7.0 - /usr/local/bin/npm
pnpm: Not Found
Browsers:
Chrome: Not Found
npmPackages:
@emotion/react: ^11.13.3 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/core-downloads-tracker: 6.1.2
@mui/icons-material: ^6.1.2 => 6.1.2
@mui/material: ^6.1.2 => 6.1.2
@mui/private-theming: 6.1.2
@mui/styled-engine: 6.1.2
@mui/system: 6.1.2
@mui/types: 7.2.17
@mui/utils: 6.1.2
@types/react: ^18.2.20 => 18.3.3
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.1.6 => 5.5.3
```
</details>
**Search keywords**: Remix Vite | bug 🐛,package: icons,package: material-ui | medium | Critical |
2,564,430,966 | vscode | Notebook Diff editor changes | With the introduction of a collapsible input section in diff view, we now have 3 collapsible sections - Input, Metadata, Output
With all of these sections, we have pretty much the same code to calculate heights of each section.
We also have 3 editors per cell, 1 in each section (input editor, metadata editor, output json editor)
* Most of this can be simplified to make it common
* We might want to ensure some of the layout calculations can be unit tested as well
E.g. The calculation of height can be validated in unit tests
Right now there are no tests to verify this; if height calculations are incorrect, there is a lot of shifting in the UI.
* Unit tests to ensure the layout doesn't change more than once
I.e. when loading a cell diff for input and metadata, the height should be determined in one sweep.
E.g. if the calculation is wrong, then after the editor is painted the actual height will differ from the originally calculated one, hence there's a re-layout, causing shifting in the UI...
| debt,notebook-diff | low | Minor |
2,564,431,238 | neovim | LSP: lsp.completion with completeopt=noinsert,menuone aggressively inserts text | ### Problem
I tried lsp.completion, and it is almost unusable for me because it aggressively inserts text into my buffer instead of letting me choose a completion first.
### Steps to reproduce using "nvim -u minimal_init.lua"
* Install pyright
```sh
npm i -g pyright
```
* Create minimal_init.lua in the current directory
```sh
touch minimal_init.lua
```
Add these lines to `minimal_init.lua`
```lua
vim.api.nvim_create_autocmd('FileType', {
-- This handler will fire when the buffer's 'filetype' is "python"
pattern = 'python',
callback = function(args)
vim.lsp.start({
name = 'pyright',
cmd = { 'pyright-langserver', '--stdio' },
-- Set the "root directory" to the parent directory of the file in the
-- current buffer (`args.buf`) that contains either a "setup.py" or a
-- "pyproject.toml" file. Files that share a root directory will reuse
-- the connection to the same LSP server.
root_dir = vim.fs.root(args.buf, { 'setup.py', 'pyproject.toml' }),
})
end,
})
vim.api.nvim_create_autocmd('LspAttach', {
callback = function(args)
local client = vim.lsp.get_client_by_id(args.data.client_id)
if client.supports_method('textDocument/completion') and vim.lsp.completion then
-- Enable auto-completion
vim.lsp.completion.enable(true, client.id, args.buf, { autotrigger = true })
end
end,
})
```
Run Neovim with minimal_init
```sh
nvim -u minimal_init.lua
```
Create a python file (for example `main.py`) and write a line
```python
print("
```
You will see the problematic behavior right after you type the quotation mark.
### Expected behavior
I want it to just show the suggestions list and let me choose one of them, instead of directly inserting the word into my buffer.
### Nvim version (nvim -v)
Neovim v0.11.0-dev-888+g4075e613b
### Language server name/version
pyright 1.1.383
### Operating system/version
Ubuntu 24.04.01
### Log file
_No response_ | bug,lsp,completion | low | Major |
2,564,436,544 | pytorch | Not decomposing CompositeImplicitAutograd ops in post-dispatch export changes autocast (and possibly more) behavior when compared with eager mode | Let’s assume we have a CompositeImplicitAutograd op foo that calls torch.prod. If we have the following function:
```py
def f(x):
    with torch.amp.autocast("cpu"):
return torch.ops.aten.foo(x)
```
then if we do post-dispatch export:
- if we don’t preserve aten.foo, then prod will get autocast behavior, resulting in a graph that looks like the following:
```py
def graph(x):
y = x.to(float32)
z = at::prod(y)
return z
```
- If we do preserve aten.foo, then the prod inside it will never get the autocast behavior! If x is lower precision (float16), we don’t end up upcasting for prod, and the model becomes numerically unstable.
```py
def graph(x):
# Note that there is no upcast to float32
y = torch.ops.aten.foo(x)
return y
```
We can probably reproduce this for other DispatchKey-based subsystems as well.
## Potential solutions
Solution 1: If we want to be really correct about these things, then in the export graph there should be an autocast HOP, like the following:
```py
def graph(x):
y = run_with_autocast_state(torch.ops.aten.foo, x)
return y
```
`run_with_autocast_state` turns on torch.amp.autocast while running the operator -- when this graph gets executed, then `foo` will decompose into `prod`, which will then have its input upcasted.
Solution 2: If autocast is enabled, require the user to give us an autocast registration for any CompositeImplicitAutograd ops that we don't want to decompose.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Minor |
2,564,444,881 | pytorch | Device transfer for NJT within torch.compile allocates a new nested int | Note that this differs from eager behavior for `njt.to(device)`, where the nested int from the input shape is purposefully includes in the output shape, as expected.
Repro:
```python
import torch
def f(nt):
return nt.to(device="cpu")
compiled_f = torch.compile(f)
nt = torch.nested.nested_tensor([
torch.randn(2, 5),
torch.randn(3, 5),
torch.randn(4, 5),
], layout=torch.jagged, device="cuda")
out = f(nt)
out_compile = compiled_f(nt)
# fails; torch.Size([7, j1]) for out and torch.Size([7, j2]) for out_compile
assert out.shape == out_compile.shape
```
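To make the eager-vs-compile difference concrete, here is a pure-Python sketch of the symbolic-size bookkeeping (illustrative only, not torch internals): reusing one "nested int" per source of raggedness keeps input and output shapes equal, while allocating a fresh one, as the compiled path does here, makes them compare unequal.

```python
_next_id = 0

def new_nested_int():
    # Allocate a fresh symbolic ragged size ("j1", "j2", ...).
    global _next_id
    _next_id += 1
    return f"j{_next_id}"

_nested_int_cache = {}

def nested_int_for(offsets_key):
    # Reuse one symbol per source of raggedness, so a derived tensor
    # (e.g. the output of a device transfer) shares the input's shape.
    if offsets_key not in _nested_int_cache:
        _nested_int_cache[offsets_key] = new_nested_int()
    return _nested_int_cache[offsets_key]

# Eager-style transfer: the symbol is reused, so shapes compare equal.
inp_shape = (7, nested_int_for("offsets-A"))
out_eager = (7, nested_int_for("offsets-A"))
assert inp_shape == out_eager

# The reported compiled behavior: a fresh symbol, so shapes differ.
out_compiled = (7, new_nested_int())
assert inp_shape != out_compiled
```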
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @ezyang @chauhang @penguinwu | triaged,module: nestedtensor,oncall: pt2 | low | Minor |
2,564,448,739 | PowerToys | [Workspaces] WinSCP not loading at the correct location | ### Microsoft PowerToys version
0.85.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
Install https://winscp.net/eng/download.php
Save a workspace with the WinSCP window
Open WinSCP manually, move the window to another location, and close it so Windows saves that location.
Launch the workspace.
### ✔️ Expected Behavior
WinSCP launches at the location saved in the workspace.
### ❌ Actual Behavior
WinSCP launches at the location saved by Windows.
### Other Software
Win 11 23H2 | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,564,451,977 | godot | Unable to Export for NativeAOT on Android | ### Tested versions
- Reproducible in v4.3.stable.mono
### System information
Windows 11
### Issue description
**The Problem**
I've been testing my game on various mobile devices with high-end and low-end targets. On one of the lower-end targets (Google Pixel 4a) a cold startup takes about 26 seconds! This is an unacceptably slow startup time considering my game's binary is under 250 MB. On empty Godot projects the startup time is excellent, and I had a hard time believing the assets being loaded were adding all that extra time.
**The Source**
Long story short, it seems to be a long-standing issue that the dotnet runtime on Android has a slow startup time because of `JIT` compilation and various other factors. This has been hotly debated in projects such as Xamarin and its successor Maui. One solution is native ahead-of-time compilation. Starting with `.NET 8`, there's experimental support for it on Android.
**The Bug**
As of `4.3`, Godot [doesn't support](https://godotengine.org/article/platform-state-in-csharp-for-godot-4-2/) `NativeAOT` out of the box for Android; only Mono is supported. `iOS` support is also experimental within the dotnet runtime, but that one is supported in Godot (or at least it's the default functionality, since `NativeAOT` is required on `iOS`).
To enable `NativeAOT`, all one must do is use the `<PublishAOT>` tag in the `.csproj` file. The problem is that making it work properly requires other tags that are overridden by Godot during the export process, which is the bug. One such example is the `RuntimeIdentifier` tag. `linux-bionic-arm64` is the currently used runtime identifier for `NativeAOT`, but Godot always overrides it to `android-arm64`, even when it's specified in the `.csproj` file. So, theoretically, Godot can support `NativeAOT` on Android with this limitation lifted.
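For reference, a minimal sketch of the `.csproj` properties described above (property names follow the .NET SDK conventions; treat the runtime identifier as an assumption for the experimental Android NativeAOT flow):

```xml
<PropertyGroup>
  <!-- Opt into NativeAOT publishing -->
  <PublishAot>true</PublishAot>
  <!-- Needed for the experimental Android flow, but currently
       overridden to android-arm64 by Godot's export process -->
  <RuntimeIdentifier>linux-bionic-arm64</RuntimeIdentifier>
</PropertyGroup>
```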
### Steps to reproduce
1. Create project with C# script.
2. Export for android with `NativeAOT` enabled.
3. Build fails.
### Minimal reproduction project (MRP)
N/A | enhancement,platform:android,topic:dotnet,topic:export | low | Critical |
2,564,484,327 | vscode | Let compound launch configs be the default | Type: <b>Bug</b>
We have a launch.json file as shown below. When we launch either of the "App: All" entries, the VSCode debugging system turns on but none of the apps are actually launched. We can launch them individually, but we cannot get them to launch together anymore. This did work previously.
Side note: we added "App: All" to the configurations array too because the compounds entries are sorted to the bottom of the launch dropdown, and we want "App: All" to be listed first to more easily run the app. It would be nice if compounds appeared first, or if there was a way to specify a default entry.
```
{
"version": "0.2.0",
"compounds": [
{
"name": "App: All",
"configurations": [
"App: Backend",
"App: Client"
],
"stopAll": true
}
],
"configurations": [
{
"type": "node",
"name": "App: All",
"request": "launch"
},
{
"name": "App: Backend",
"type": "node-terminal",
"request": "launch",
"command": "npm start",
"cwd": "${workspaceFolder}/backend"
},
{
"name": "App: Client",
"request": "launch",
"type": "node-terminal",
"command": "npm start",
"cwd": "${workspaceFolder}/client"
}
]
}
```
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 x 2803)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.73GB (4.83GB free)|
|Process Argv|--crash-reporter-id 51aedaeb-e98c-425f-81a2-71e6e25a3e29|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (12)</summary>
Extension|Author (truncated)|Version
---|---|---
azurite|Azu|3.32.0
vscode-github-actions|git|0.27.0
vscode-azureappservice|ms-|0.25.3
vscode-azurefunctions|ms-|1.15.4
vscode-azureresourcegroups|ms-|0.9.6
vscode-azurestaticwebapps|ms-|0.12.2
vscode-dotnet-runtime|ms-|2.1.7
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.2
azure-account|ms-|0.12.0
powershell|ms-|2024.2.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
da93g388:31013173
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
cf971741:31144450
autoexpandse:31146404
iacca2:31144504
5fd0e150:31146321
```
</details>
<!-- generated by issue reporter --> | feature-request,debug | low | Critical |
2,564,504,746 | godot | Bone editor does not have the ability to set rest directly | ### Tested versions
Reproducible in v4.3.stable.official
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (Intel Corporation; 31.0.101.2125) - Intel(R) Core(TM) i5-10300H CPU @ 2.50GHz (8 Threads)
### Issue description
I was using a game template that I think was probably from an older version of Godot, and I noticed that the Errors console was being flooded with an error reading "invert: condition "det == 0" is true". After a little research, I found out I needed to fix the resting positions of some bones that had been incorrectly configured. However, when I tried, I found that the options to do so were grayed out.

### Steps to reproduce
1. Open the MRP.
2. In gobot.tscn, click on the Skeleton3D node under Armature.
3. Click on the FootL or FootR bones.
4. Attempt to edit the values under Rest. If the issue is with the model, as I can only assume it is, they should be uneditable.
### Minimal reproduction project (MRP)
[Let Me Edit My Bones You Fuck.zip](https://github.com/user-attachments/files/17247439/Let.Me.Edit.My.Bones.You.Fuck.zip)
| topic:editor,documentation,topic:3d | low | Critical |
2,564,520,194 | flutter | Android: Platform views with image is not rendering properly when applying padding | ### Steps to reproduce
1. Open the attached project
2. Run the android
3. Click the floating action button
### Expected results
Padding needs to apply smoothly, as it does on iOS
### Actual results
Platform view not rendering smoothly
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main() {
runApp(const MyApp());
}
const String viewType = '<native_view>';
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
var padding = EdgeInsets.zero;
final Map<String, dynamic> creationParams = <String, dynamic>{};
Widget getNativeView() {
if (Platform.isIOS) {
return UiKitView(
viewType: viewType,
layoutDirection: TextDirection.ltr,
creationParams: creationParams,
creationParamsCodec: const StandardMessageCodec(),
);
}
return AndroidView(
viewType: viewType,
layoutDirection: TextDirection.ltr,
creationParams: creationParams,
creationParamsCodec: const StandardMessageCodec(),
);
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text("Native view Example"),
),
body: AnimatedContainer(
duration: kThemeChangeDuration,
curve: Curves.linear,
padding: padding,
alignment: Alignment.center,
child: getNativeView(),
),
floatingActionButton: Builder(builder: (builderContext) {
return FloatingActionButton(
onPressed: () async {
setState(() {
if (padding != EdgeInsets.zero) {
padding = EdgeInsets.zero;
return;
}
padding = const EdgeInsets.all(30);
});
},
);
}),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/8ba3dac5-aa93-4661-837c-7d4c44cdf7cc
</details>
<details open>
<summary>Project</summary>
[native_view_project-2.zip](https://github.com/user-attachments/files/17247572/native_view_project-2.zip)
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
~/Playground/thefuseapp-mobile-client/android git:[Performace-Issue]
flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-IN)
• Flutter version 3.24.3 on channel stable at /Users/bala-12858/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/bala-12858/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (6 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 14 (API 34) (emulator)
• iPhone 16 Pro (mobile) • 316C5626-7797-44C4-AADF-A9C5362A94A2 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-0 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.71
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-android,a: animation,a: platform-views,has reproducible steps,P2,team-android,triaged-android,found in release: 3.24,found in release: 3.26 | low | Minor |
2,564,567,631 | neovim | :profile can "stream" to its file eagerly instead of batched | ### Problem
I wish there were a proper way to debug non-recoverable hangs.
The current profile command only works for recoverable hangs. What do I mean by that?
I recently had a hang that seemed quite long. I could reproduce it, and run `profile start` before it happened. However, if the hang is infinite (which happened in other cases), there is really no way to debug it, because you can't run `profile stop` and no file is generated.
### Expected behavior
Could there be a command that generates a stream of function calls (something that I guess profile uses internally)?
So that if at a certain point there is a freeze, you can see the last calls.
Thanks | enhancement,performance,vimscript,lua | low | Critical |
2,564,607,019 | TypeScript | Improve module resolution using "paths" of "tsconfig.json" file | ### 🔍 Search Terms
Today the "paths" property only accepts the asterisk as a wildcard character, which means it is necessary to keep redeclaring other modules when they are in subdirectories, example:
```json
{
"compilerOptions": {
"paths": {
"libs/async": ["libs/async/src"],
"libs/async/*": ["libs/async/src/*"],
"libs/crypto": ["libs/crypto/src"],
"libs/crypto/*": ["libs/crypto/src/*"],
"libs/database": ["libs/database/src"],
"libs/database/*": ["libs/database/src/*"],
"libs/domain": ["libs/domain/src"],
"libs/domain/*": ["libs/domain/src/*"],
"libs/logger": ["libs/logger/src"],
"libs/logger/*": ["libs/logger/src/*"],
}
}
}
```
Wouldn't it be interesting to accept other wildcard tokens, similar to regex capture groups? For example:
```json
{
"compilerOptions": {
"paths": {
"libs/$1": ["libs/$1/src"],
"libs/$1/*": ["libs/$1/src/*"],
}
}
}
```
This would make TypeScript resolution more dynamic, making it easier to set up monorepos.
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Add more wildcards to the "paths" property in tsconfig.json, like dynamic directory names (`$0`, for example), or create a way to use JS/TS to build and return the tsconfig object; with this I could traverse the workspace folder and extract all the directories needed.
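The second idea — building the config with a script — can already be approximated today as a workaround. A minimal Node sketch (the package names and the `libs/<name>/src` layout are assumptions taken from the example above):

```javascript
// generate-paths.js — build the "paths" map for every package under libs/.
// In a real workspace you would get the names with fs.readdirSync('libs')
// and write the result into tsconfig.json.
function buildPaths(libNames) {
  const paths = {};
  for (const name of libNames) {
    paths[`libs/${name}`] = [`libs/${name}/src`];
    paths[`libs/${name}/*`] = [`libs/${name}/src/*`];
  }
  return paths;
}

console.log(JSON.stringify(
  { compilerOptions: { paths: buildPaths(['async', 'crypto']) } },
  null,
  2,
));
```

This still has to be re-run whenever packages are added or removed, which is exactly the friction a dynamic wildcard like `$1` would remove.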
### 📃 Motivating Example
Normally I use only one or two paths in tsconfig, but I have started to work with NestJS using workspaces, and each new app or library needs to be added as a new entry.
### 💻 Use Cases
1. What do you want to use this for?
Any project using workspaces and having lots of sub packages
2. What shortcomings exist with current approaches?
Today it is required to declare every directory manually each time a new app or lib is created in a NestJS workspace
3. What workarounds are you using in the meantime?
Today I need to make every change manually...
| Suggestion,Awaiting More Feedback | low | Minor |
2,564,634,454 | pytorch | Docs are little bit outdated for torch logs | ### 📚 The doc issue
At the end of the setup section here, I assume the output should be a log describing the program rather than a message saying that the device does not support `torch.compile`.
https://pytorch.org/tutorials/recipes/torch_logs.html#setup
### Suggest a potential alternative/fix
Run the program in the tutorial and update the doc :)
cc @ezyang @chauhang @penguinwu | good first issue,module: logging,triaged,oncall: pt2 | low | Major |
2,564,741,708 | go | net: support Linux vsock sockets | ### Proposal Details
There exists basic support for `AF_VSOCK` sockets implemented in #19434.
Could support for such sockets please be added to the `net` package? For example, `net.FileListener` can operate on TCP and UNIX sockets, but fails on vsock sockets. | NeedsInvestigation | low | Minor |
2,564,806,663 | electron | [Feature Request]: Add ability to register IPC Main Listeners that do not consume the event object | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
When writing ipcMain handlers, I often find myself wishing to discard the event (especially for handlers that wrap another function and might also be called directly from elsewhere in the backend).
### Proposed Solution
I suggest that the handlers described should have overloads added that do not require the event to be specified, eliminating this use case entirely. This would prevent duplication of parameters (as described in solution 1), non-type specific anonymous functions (solution 2), or the need for helper functions (solution 3).
### Alternatives Considered
This leads me to one of the following user-side solutions:
```typescript
ipcMain.on('channel', (_e: unknown, ...args: Parameters<typeof myHandlerFunction>)
: Promise<void> => myHandlerFunction(...args));
```
```typescript
ipcMain.on('channel', (_e: unknown, arg1: string, arg2: string)
: Promise<void> => myHandlerFunction(arg1, arg2));
```
```typescript
/**
 * Accepts a function and arguments from IPC, discards the event argument, and sends the rest to the function with the specified type.
* @param handler
* @returns callable to handle IPC event
*/
export default function handleEvent<T extends (...args: Parameters<T>) => ReturnType<T>>(handler: T) {
return (_e: unknown, ...args: Parameters<T>): ReturnType<T> => handler(...args);
}
ipcMain.on('channel', handleEvent(myHandlerFunction));
```
### Additional Information
_No response_ | enhancement :sparkles: | low | Minor |
2,564,813,275 | kubernetes | `/proxy/stats/summary` returning incorrect PVC capacity bytes | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately see https://github.com/kubernetes/kube-state-metrics/blob/main/SECURITY.md
-->
**What happened**:
When querying `kubectl get --raw /api/v1/nodes/<node-ip>/proxy/stats/summary`, the capacity bytes returned for PVCs do not match what is shown in tools like k9s, which tells me that tooling built around this endpoint will show incorrect values.
**What you expected to happen**:
PVC capacity and other metrics show the correct sizes
**How to reproduce it (as minimally and precisely as possible)**:
```bash
$ kubectl get --raw "/api/v1/nodes/ip-<>.ec2.internal/proxy/stats/summary"
...
{
"time": "2024-09-30T18:27:06Z",
"availableBytes": 467653439488,
"capacityBytes": 1623168045056,
"usedBytes": 1155497828352,
"inodesFree": 100516333,
"inodes": 100663296,
"inodesUsed": 146963,
"name": "pvc-data",
"pvcRef": {
"name": "name-pvc",
"namespace": "namespace-pvc"
}
},
...
```
Take the capacity bytes, convert to GiB/MiB, and compare with k9s or other tooling against `spec.resources.requests.storage`.
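The conversion step can be sketched like this (assuming binary units, i.e. 1 GiB = 2^30 bytes, which is what the `Gi` suffix in the PVC spec means):

```javascript
// Convert a raw byte count from /stats/summary into GiB, so it can be
// compared with the "Gi" values in the PVC spec.
function bytesToGiB(bytes) {
  return bytes / 2 ** 30;
}

// capacityBytes taken from the summary output above
console.log(bytesToGiB(1623168045056).toFixed(2));
```

The result can then be compared against `spec.resources.requests.storage` and what k9s displays.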
I used: `kubectl get pvc name-pvc -o json` to get the spec requests to compare against (which was larger by 125 MiB):
```
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "9600Gi"
}
},
```
**Anything else we need to know?**:
**Environment**:
* kube-state-metrics version:
* Kubernetes version (use `kubectl version`): v1.27.14
* Cloud provider or hardware configuration: AWS EKS
* Other info:
| kind/bug,sig/storage,triage/needs-information,triage/accepted | low | Critical |
2,564,817,429 | vscode | git: Staging a selection stages entire block | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.94.0
- OS Version: Windows 10
Steps to Reproduce:
1. Make a multiline change to a file in a git repo
2. Goto `Source Control` view and click the file under `Changes`
3. Observe that the actions in between the diff editors are `-> Revert Block` and `+ Stage Block`
4. Now select a subset of the change and observe the actions change to `-> Revert Selection` and `+ Stage selection`
5. Click `+ Stage selection`
6. The entire change/block gets staged instead of just the selected lines
Weirdly enough, it only happens for some changes in some files | bug,git,diff-editor | low | Critical |
2,564,823,808 | vscode | Cannot set working directory when opening a new terminal | Type: <b>Bug</b>
When using WSL in a multi-root workspace, a prompt appears asking for the working directory; however, the terminal opens in the home directory of the WSL VM.
Steps to replicate:
1. Set up a WSL remote in VSCode
2. Set up a multi-root workspace (e.g. workspace/folder1 and workspace/folder2)
3. Click on Terminal -> New Terminal
4. In the dropdown that appears, try to select a working directory
Expected result:
Terminal opens in the select working directory
Actual result:
Terminal opens in the home directory


VS Code version: Code 1.94.0 (d78a74bcdfad14d5d3b1b782f87255d802b57511, 2024-10-02T13:08:12.626Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Remote OS version: Linux x64 5.15.153.1-microsoft-standard-WSL2
Remote OS version: Linux x64 5.15.153.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 x 2304)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.71GB (39.07GB free)|
|Process Argv|--crash-reporter-id f0e14b48-154b-4bf9-8893-d28f8e8425e7|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Debian|
|OS|Linux x64 5.15.153.1-microsoft-standard-WSL2|
|CPUs|Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 x 0)|
|Memory (System)|31.20GB (28.62GB free)|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Debian|
|OS|Linux x64 5.15.153.1-microsoft-standard-WSL2|
|CPUs|Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 x 0)|
|Memory (System)|31.20GB (28.62GB free)|
|VM|0%|
</details><details><summary>Extensions (16)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-neovim|asv|1.18.12
remote-ssh|ms-|0.114.3
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.4
remote-explorer|ms-|0.4.3
githistory|don|0.6.20
xml|Dot|2.5.1
go|gol|0.42.1
rainbow-csv|mec|3.12.0
vscode-docker|ms-|1.29.3
makefile-tools|ms-|0.11.13
sqltools|mtx|0.28.3
sqltools-driver-pg|mtx|0.5.4
vscode-yaml|red|1.15.0
twinny|rjm|3.17.20
shellcheck|tim|0.37.1
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
g316j359:31013175
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
autoexpandse:31146404
iacca2:31150323
5fd0e150:31146321
```
</details>
<!-- generated by issue reporter --> | bug,WSL,terminal-process | low | Critical |
2,564,850,845 | rust | ICE: each unstable `LintExpectationId` must have a matching stable id | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
On 1.81.0 stable and 1.82.0-beta.5 I get an ICE when `#[expect()]`ing various Clippy lints on a struct field. Interestingly, I could repro the ICE for `clippy::unreadable_literal`, `clippy::unusual_byte_groupings`, and `clippy::unseparated_literal_suffix` but not `clippy::identity_op`. I could not repro on `1.83.0-nightly (80d82ca22 2024-09-27)`, only on stable and beta.
### Code
```Rust
#![allow(unused)]
#![warn(clippy::pedantic, clippy::unseparated_literal_suffix)]
struct Test {
field: u64
}
fn main() {
let _ = Test {
#[expect(
// clippy::unreadable_literal,
// clippy::identity_op,
clippy::unusual_byte_groupings,
// clippy::unseparated_literal_suffix,
)]
field: 0x1234123412_341234u64 + 0,
};
}
```
Playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=216f4208339c66578039fc1ba22f6264
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
```
rustc 1.82.0-beta.5 (6a3b69c6b 2024-09-27)
binary: rustc
commit-hash: 6a3b69c6b0529151da5fb4568961519a80adccf9
commit-date: 2024-09-27
host: x86_64-unknown-linux-gnu
release: 1.82.0-beta.5
LLVM version: 19.1.0
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_errors/src/diagnostic.rs:373:18:
each unstable `LintExpectationId` must have a matching stable id
stack backtrace:
0: 0x718e5c74e3e5 - std::backtrace_rs::backtrace::libunwind::trace::h649ab3318d3445c5
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x718e5c74e3e5 - std::backtrace_rs::backtrace::trace_unsynchronized::hf4bb60c3387150c3
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x718e5c74e3e5 - std::sys::backtrace::_print_fmt::hd9186c800e44bd00
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:65:5
3: 0x718e5c74e3e5 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h1b9dad2a88e955ff
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:40:26
4: 0x718e5c79deeb - core::fmt::rt::Argument::fmt::h351a7824f737a6a0
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/fmt/rt.rs:173:76
5: 0x718e5c79deeb - core::fmt::write::h4b5a1270214bc4a7
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/fmt/mod.rs:1182:21
6: 0x718e5c742f6f - std::io::Write::write_fmt::hd04af345a50c312d
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/io/mod.rs:1827:15
7: 0x718e5c750bd1 - std::sys::backtrace::BacktraceLock::print::h68d41b51481bce5c
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:43:9
8: 0x718e5c750bd1 - std::panicking::default_hook::{{closure}}::h96ab15e9936be7ed
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:269:22
9: 0x718e5c7508ac - std::panicking::default_hook::h3cacb9c27561ad33
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:296:9
10: 0x718e58e93420 - std[1f2242ed6435445e]::panicking::update_hook::<alloc[7b1462a1eb55c293]::boxed::Box<rustc_driver_impl[8683aa37472b7dde]::install_ice_hook::{closure#0}>>::{closure#0}
11: 0x718e5c75159f - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::hce7569f4ca5d1b64
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2084:9
12: 0x718e5c75159f - std::panicking::rust_panic_with_hook::hfe205f6954b2c97b
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:808:13
13: 0x718e5c7511c7 - std::panicking::begin_panic_handler::{{closure}}::h6cb44b3a50f28c44
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:674:13
14: 0x718e5c74e8a9 - std::sys::backtrace::__rust_end_short_backtrace::hf1c1f2a92799bb0e
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/backtrace.rs:168:18
15: 0x718e5c750e54 - rust_begin_unwind
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:665:5
16: 0x718e5c79a4a3 - core::panicking::panic_fmt::h3d8fc78294164da7
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:74:14
17: 0x718e5c79a2fb - core::panicking::panic_display::h1c0e44fa90890272
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:264:5
18: 0x718e5c79a2fb - core::option::expect_failed::h3a757a693188cc6e
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/option.rs:2030:5
19: 0x718e5bd38529 - <rustc_errors[48f1c32ead9577fc]::diagnostic::DiagInner>::update_unstable_expectation_id.cold
20: 0x718e5b3bbcc3 - <rustc_errors[48f1c32ead9577fc]::DiagCtxtHandle>::update_unstable_expectation_id
21: 0x718e5a62bd49 - rustc_lint[2400c89cc40e73c2]::levels::lint_expectations
22: 0x718e5b29bba2 - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::lint_expectations::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 8usize]>>
23: 0x718e5b299e2a - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::SingleCache<rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, false>
24: 0x718e5b4826de - rustc_query_impl[3625cc0592f96219]::query_impl::lint_expectations::get_query_non_incr::__rust_end_short_backtrace
25: 0x718e5b4823db - rustc_lint[2400c89cc40e73c2]::expect::check_expectations
26: 0x718e5b482397 - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::check_expectations::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 0usize]>>
27: 0x718e5b4821d0 - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::DefaultCache<core[3cad2706d8bdcdc4]::option::Option<rustc_span[28a649581f99a5bd]::symbol::Symbol>, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, false>
28: 0x718e5b481f99 - rustc_query_impl[3625cc0592f96219]::query_impl::check_expectations::get_query_non_incr::__rust_end_short_backtrace
29: 0x718e5ac11560 - rustc_interface[53a414ae04dc6ffb]::passes::analysis
30: 0x718e5ac105e7 - rustc_query_impl[3625cc0592f96219]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[3625cc0592f96219]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>
31: 0x718e5b29ada2 - rustc_query_system[200ca28aa7d9732c]::query::plumbing::try_execute_query::<rustc_query_impl[3625cc0592f96219]::DynamicConfig<rustc_query_system[200ca28aa7d9732c]::query::caches::SingleCache<rustc_middle[ba2289ab3ae064d4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[3625cc0592f96219]::plumbing::QueryCtxt, false>
32: 0x718e5b29ab4f - rustc_query_impl[3625cc0592f96219]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
33: 0x718e5b12c256 - rustc_interface[53a414ae04dc6ffb]::interface::run_compiler::<core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>, rustc_driver_impl[8683aa37472b7dde]::run_compiler::{closure#0}>::{closure#1}
34: 0x718e5b07395b - std[1f2242ed6435445e]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[53a414ae04dc6ffb]::util::run_in_thread_with_globals<rustc_interface[53a414ae04dc6ffb]::interface::run_compiler<core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>, rustc_driver_impl[8683aa37472b7dde]::run_compiler::{closure#0}>::{closure#1}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>
35: 0x718e5b07372a - <<std[1f2242ed6435445e]::thread::Builder>::spawn_unchecked_<rustc_interface[53a414ae04dc6ffb]::util::run_in_thread_with_globals<rustc_interface[53a414ae04dc6ffb]::interface::run_compiler<core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>, rustc_driver_impl[8683aa37472b7dde]::run_compiler::{closure#0}>::{closure#1}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[3cad2706d8bdcdc4]::result::Result<(), rustc_span[28a649581f99a5bd]::ErrorGuaranteed>>::{closure#1} as core[3cad2706d8bdcdc4]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
36: 0x718e5c75b5fb - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::ha1963004222e7822
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
37: 0x718e5c75b5fb - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h1086ced1f7c494c2
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
38: 0x718e5c75b5fb - std::sys::pal::unix::thread::Thread::new::thread_start::ha8af9c992ef0b208
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/pal/unix/thread.rs:108:17
39: 0x718e559a1a94 - <unknown>
40: 0x718e55a2ea34 - clone
41: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust-clippy/issues/new?template=ice.yml
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C codegen-units=1 -C debuginfo=2
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [lint_expectations] computing `#[expect]`ed lints in this crate
#1 [check_expectations] checking lint expectations (RFC 2383)
end of query stack
note: Clippy version: clippy 0.1.81 (eeb90cd 2024-09-04)
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_errors/src/diagnostic.rs:373:18:
each unstable `LintExpectationId` must have a matching stable id
stack backtrace:
0: rust_begin_unwind
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/panicking.rs:665:5
1: core::panicking::panic_fmt
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:74:14
2: core::panicking::panic_display
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/panicking.rs:264:5
3: core::option::expect_failed
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/core/src/option.rs:2030:5
4: <rustc_errors::diagnostic::DiagInner>::update_unstable_expectation_id.cold
5: <rustc_errors::DiagCtxtHandle>::update_unstable_expectation_id
6: rustc_lint::levels::lint_expectations
[... omitted 1 frame ...]
7: rustc_lint::expect::check_expectations
[... omitted 1 frame ...]
8: rustc_interface::passes::analysis
[... omitted 1 frame ...]
9: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust-clippy/issues/new?template=ice.yml
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [lint_expectations] computing `#[expect]`ed lints in this crate
#1 [check_expectations] checking lint expectations (RFC 2383)
#2 [analysis] running analysis passes on this crate
end of query stack
note: Clippy version: clippy 0.1.81 (eeb90cd 2024-09-04)
```
</p>
</details>
| I-ICE,T-compiler,C-bug,S-needs-repro,S-needs-info | low | Critical |
2,564,887,674 | flutter | [Cupertino] Support Linear Progress View style | ### Use case
iOS `ProgressView` supports both circular and linear progress styles.
https://developer.apple.com/documentation/swiftui/progressview


### Proposal
[CupertinoActivityIndicator](https://api.flutter.dev/flutter/cupertino/CupertinoActivityIndicator-class.html) currently only supports the circular progress view style.
<img width="346" alt="image" src="https://github.com/user-attachments/assets/e8d37574-e5e4-4aeb-b60c-33af1c116eab">
Add linear progress view style | c: new feature,framework,a: fidelity,f: cupertino,c: proposal,P2,team-design,triaged-design | low | Minor |
2,564,901,041 | godot | Some debugger tabs break editor UI at minimal window width | ### Tested versions
v4.4.dev.custom_build [5ccbf6e4c]
### System information
Godot v4.4.dev (5ccbf6e4c) - macOS 14.5.0 - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M1 Max (Apple7) - Apple M1 Max (10 threads)
### Issue description
Profiler, Visual Profiler and Network Profiler make the window content clip:
https://github.com/user-attachments/assets/02004b50-9ef2-4999-b2b4-d1137e119f78
### Steps to reproduce
Open Profiler/Visual Profiler/Network Profiler and resize the window
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:gui | low | Critical |
2,564,941,897 | flutter | Re-add `--suppress-analytics` to the `gen_dartcli_call` GN template | Blocked on https://github.com/dart-lang/sdk/issues/56842. | dependency: dart,P2,blocked,c: tech-debt,team-engine,triaged-engine,dependency:dart-triaged | low | Minor |
2,564,944,641 | rust | Support "zippered" macOS binaries | Dynamic libraries on macOS can be built in a way that allows loading them when running under Mac Catalyst as well. This is similar in spirit to [universal binaries](https://developer.apple.com/documentation/apple-silicon/building-a-universal-macos-binary), but technologically different in that only a single binary and architecture is actually built, which allows for a lot of code sharing.
This can be used in Xcode [when creating XCFramework bundles](https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle), and is underpinned by the [`-darwin-target-variant`](https://clang.llvm.org/docs/ClangCommandLineReference.html#cmdoption-clang-darwin-target-variant) flag in Clang - but those are the only official docs I could find on it. I did find [this note](https://github.com/nico/hack/blob/be6e1a6885a9d5179558b37a0b4c36bec9c4d377/notes/catalyst.md#building-a-zippered-dylib) though that explains the details really well.
If we wanted to support this in `rustc`, there's two questions that need answering:
- How does the user activate it? A separate triple? An extra commandline flag?
- How does the user detect this mode of compilation in their code? `cfg(all(target_os = "macos", target_abi = "macabi"))`? `target_abi = "zippered"`?
Opening this issue to have a place to refer back to, I'm undecided whether it's worth the effort to try to support, would like to see a concrete use-case first (if you know of one, please comment below!)
@rustbot label O-ios O-macos C-feature-request A-targets
| O-macos,O-ios,T-compiler,C-feature-request,A-targets | low | Minor |
2,564,953,305 | go | cmd/internal/testdir: Test/fixedbugs/issue44732.go failures | ```
#!watchflakes
default <- pkg == "cmd/internal/testdir" && test == "Test/fixedbugs/issue44732.go"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735081729835003585)):
=== RUN Test/fixedbugs/issue44732.go
=== PAUSE Test/fixedbugs/issue44732.go
=== CONT Test/fixedbugs/issue44732.go
testdir_test.go:147: exit status 1
go: error obtaining buildID for go tool compile: signal: segmentation fault (core dumped)
--- FAIL: Test/fixedbugs/issue44732.go (2.57s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,564,984,730 | flutter | Remove packages not intended to be user facing from api.flutter.dev | Some packages listed on the left side of api.flutter.dev are not intended to be user facing, so the current positioning is misleading. From a quick skim, at least boolean_selector, source_span, stack_trace, string_scanner, term_glyph, and test_api are packages which are explicitly developed by us for our own tooling. We don't block external use, but we don't allocate resources to supporting external use. There may be other packages that aren't helpful for Flutter users to have promoted.
My guess is most of these are pulled because of transitive flutter deps.
Can we filter these packages? | framework,d: api docs,c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,565,010,149 | flutter | Local engine `gen_snapshot` path is incorrect, gen_snapshot_arm64 not found | This is breaking running with a locally built engine in profile.
## reproduction steps
```
flutter --local-engine=ios_profile --local-engine-host=host_profile_arm64 --local-engine-src-path=/Users/aaclarke/dev/engine/src/ run --profile
```
## seen results
```
Launching lib/main.dart on iPhone (5) in profile mode...
Warning: Missing build name (CFBundleShortVersionString).
Warning: Missing build number (CFBundleVersion).
Action Required: You must set a build name and number in the pubspec.yaml file version field before submitting to the App Store.
Automatically signing iOS for device deployment using specified development team in Xcode project: S8QB4VV633
Running Xcode build...
Xcode build done. 21.4s
Failed to build iOS app
Could not build the precompiled application for the device.
Error (Xcode): Target aot_assembly_profile failed: ProcessException: Failed to find
"/Users/aaclarke/dev/engine/src/out/ios_profile/clang_x64/gen_snapshot_arm64" in the search path.
```
## expected results
I believe the correct path should be `/Users/aaclarke/dev/engine/src/out/ios_profile/clang_arm64/gen_snapshot`
## doctor
```
[!] Flutter (Channel [user-branch], 3.26.0-1.0.pre.350, on macOS 14.7 23H124 darwin-arm64, locale en)
! Flutter version 3.26.0-1.0.pre.350 on channel [user-branch] at /Users/aaclarke/dev/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision 59e57437db (61 minutes ago), 2024-10-03 13:09:19 -0700
• Engine revision de1762dbc5
• Dart version 3.6.0 (build 3.6.0-316.0.dev)
• DevTools version 2.40.0-dev.2
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform
update checks and upgrades.
```
| engine,P2,team-ios,triaged-ios | low | Critical |
2,565,037,290 | go | cmd/compile: segmentation fault on trivial tests (openbsd-ppc64) | ```
#!watchflakes
default <- pkg == "cmd/internal/testdir" && test == "Test/ken/robfor.go"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735081729835003585)):
=== RUN Test/ken/robfor.go
=== PAUSE Test/ken/robfor.go
=== CONT Test/ken/robfor.go
testdir_test.go:147: signal: segmentation fault (core dumped)
--- FAIL: Test/ken/robfor.go (0.90s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,565,065,786 | rust | private_interfaces reports the visibility of items in pseudocode | ### Code
```rs
#![allow(dead_code)]
mod a {
pub mod b {
pub(in crate::a) struct Huh;
pub fn eee() -> Huh { Huh }
}
}
```
### Current output
```
warning: type `Huh` is more private than the item `eee`
--> src/lib.rs:6:9
|
6 | pub fn eee() -> Huh { Huh }
| ^^^^^^^^^^^^^^^^^^^ function `eee` is reachable at visibility `pub(crate)`
|
note: but type `Huh` is only usable at visibility `pub(a)`
--> src/lib.rs:5:9
|
5 | pub(in crate::a) struct Huh;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
= note: `#[warn(private_interfaces)]` on by default
```
### Desired output
```
warning: type `Huh` is more private than the item `eee`
--> src/lib.rs:6:9
|
6 | pub fn eee() -> Huh { Huh }
| ^^^^^^^^^^^^^^^^^^^ function `eee` is reachable at visibility `pub(crate)`
|
note: but type `Huh` is only usable at visibility `pub(in crate::a)`
--> src/lib.rs:5:9
|
5 | pub(in crate::a) struct Huh;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
= note: `#[warn(private_interfaces)]` on by default
```
### Rationale and extra context
`pub(a)` is not valid Rust code in any edition.
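For contrast, a minimal sketch of the restricted-visibility forms that actually parse (names are illustrative); the path-restricted form always requires the full `pub(in path)` syntax that the desired output prints:

```rust
#![allow(dead_code)]
// `pub(a)` would be rejected by the parser; these are the accepted forms.
mod a {
    pub(crate) struct InCrate;       // visible anywhere in the crate
    pub(in crate::a) struct OnlyInA; // visible only inside `a`: path form required
}

fn describe() -> &'static str {
    let _ = a::InCrate; // fine: we are in the same crate
    "pub(in crate::a) is the accepted path-restricted form"
}

fn main() {
    println!("{}", describe());
}
```

So a diagnostic rendering the visibility as `pub(a)` suggests syntax the compiler itself would refuse.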
### Other cases
_No response_
### Rust Version
rustc 1.83.0-nightly (2bd1e894e 2024-09-26)
binary: rustc
commit-hash: 2bd1e894efde3b6be857ad345914a3b1cea51def
commit-date: 2024-09-26
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
### Anything else?
_No response_ | A-diagnostics,T-compiler,D-papercut | low | Minor |
2,565,104,187 | go | net: TestAllocs failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestAllocs"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735065407922958673)):
=== RUN TestAllocs
udpsock_test.go:540: write udp4 127.0.0.1:53085->127.0.0.1:53085: sendto: no buffer space available
--- FAIL: TestAllocs (0.40s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,565,151,250 | flutter | Add Tajik Language (tg_TJ) Support to Flutter Localization (l10n) | ### Use case
As a developer, I want to create a Flutter application that supports multiple languages, including Tajik, so that I can provide a localized experience for Tajik-speaking users.
User Story:
As a Tajik-speaking user, I should be able to interact with mobile applications in my native language, making it easier to navigate, understand, and use the app’s features.
Currently, Flutter does not natively support Tajik in its flutter_localizations package, limiting developers' ability to create fully localized apps for Tajik-speaking communities.
Problem:
The lack of Tajik language support in Flutter localization forces developers to rely on custom implementations or external libraries, which may not be as robust or future-proof as native Flutter localization support.
Proposed Solution:
Implement and integrate the Tajik language (ISO code: tg) into the flutter_localizations package.
Add Tajik translations for date, time, and number formatting, as well as standard UI elements such as buttons, dialogs, alerts, etc.
Provide Tajik language support for Material components and widgets where localization is required.
Refs:
https://api.flutter.dev/flutter/flutter_localizations/GlobalMaterialLocalizations-class.html
https://github.com/flutter/flutter/blob/master/packages/flutter_localizations/lib/src/l10n/README.md
### Proposal
We propose adding support for the Tajik language (ISO 639-1 code: tg) to the Flutter localization (flutter_localizations) package. Tajik is a variant of Persian spoken by over 9 million people primarily in Tajikistan and surrounding regions. Adding support for Tajik would enable developers to create localized Flutter applications that cater to Tajik-speaking users, ensuring wider accessibility and inclusivity in the global market.
| c: new feature,tool,framework,a: internationalization,c: proposal,P2,team-framework,triaged-framework | low | Major |
2,565,153,967 | go | net: TestIPv6WriteMsgUDPAddrPortTargetAddrIPVersion failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestIPv6WriteMsgUDPAddrPortTargetAddrIPVersion"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735065407922958673)):
=== RUN TestIPv6WriteMsgUDPAddrPortTargetAddrIPVersion
udpsock_test.go:691: write udp [::]:57774->127.0.0.1:12345: sendmsg: no buffer space available
--- FAIL: TestIPv6WriteMsgUDPAddrPortTargetAddrIPVersion (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,565,154,294 | flutter | [iOS] Pressing `´` immediately deletes the character on iPad Air, iOS 17.5 with Portuguese (Portugal) keyboard | ### Steps to reproduce
1. Run the sample app in the iPad Air 13-inch emulator with iOS 17.5 with a Portuguese (Portugal) keyboard
2. Press `´` and then `a`
### Expected results
`Á` is inserted
### Actual results
`A` is inserted
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main() {
runApp(
const MaterialApp(
home: Scaffold(
body: TextInputExample(),
),
),
);
}
class TextInputExample extends StatefulWidget {
const TextInputExample({super.key});
@override
State<TextInputExample> createState() => _TextInputExampleState();
}
class _TextInputExampleState extends State<TextInputExample> implements DeltaTextInputClient {
final FocusNode _focusNode = FocusNode();
TextInputConnection? _inputConnection;
TextEditingValue _currentEditingValue = const TextEditingValue(
text: '',
selection: TextSelection.collapsed(offset: 0),
);
@override
void initState() {
super.initState();
WidgetsBinding.instance.addPostFrameCallback((timeStamp) {
_attach();
});
}
@override
void dispose() {
_focusNode.dispose();
super.dispose();
}
@override
void updateEditingValueWithDeltas(List<TextEditingDelta> textEditingDeltas) {
TextEditingValue newEditingValue = _currentEditingValue;
for (final delta in textEditingDeltas) {
print(delta);
newEditingValue = delta.apply(newEditingValue);
}
setState(() {
_currentEditingValue = newEditingValue;
});
}
void _attach() {
if (_inputConnection != null) {
return;
}
_inputConnection = TextInput.attach(
this,
const TextInputConfiguration(
enableDeltaModel: true,
inputType: TextInputType.multiline,
textCapitalization: TextCapitalization.sentences,
inputAction: TextInputAction.newline,
keyboardAppearance: Brightness.light,
),
);
_inputConnection!.show();
_inputConnection!.setEditingState(_currentEditingValue);
}
@override
Widget build(BuildContext context) {
return Center(
child: Focus(
focusNode: _focusNode,
autofocus: true,
child: DecoratedBox(
decoration: BoxDecoration(border: Border.all()),
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Text(
_currentEditingValue.text,
style: const TextStyle(
fontSize: 24,
),
),
),
),
),
);
}
@override
void connectionClosed() {}
@override
AutofillScope? get currentAutofillScope => null;
@override
TextEditingValue? get currentTextEditingValue => null;
@override
void insertTextPlaceholder(Size size) {}
@override
void performPrivateCommand(String action, Map<String, dynamic> data) {}
@override
void removeTextPlaceholder() {}
@override
void showAutocorrectionPromptRect(int start, int end) {}
@override
void showToolbar() {}
@override
void updateEditingValue(TextEditingValue value) {}
@override
void updateFloatingCursor(RawFloatingCursorPoint point) {}
@override
void didChangeInputControl(TextInputControl? oldControl, TextInputControl? newControl) {}
@override
void insertContent(KeyboardInsertedContent content) {}
@override
void performSelector(String selectorName) {}
@override
void performAction(TextInputAction action) {}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/ab0f1e2b-fb27-4ecf-9075-0e03f645e9bc
</details>
### Logs
<details open><summary>Logs</summary>
We can see that, right after the insertion delta we receive a deletion delta.
```console
flutter: TextEditingDeltaInsertion#aeba0(oldText, textInserted: ´, insertionOffset: 0, selection: TextSelection.collapsed(offset: 1, affinity: TextAffinity.upstream, isDirectional: false), composing: TextRange(start: 0, end: 1))
flutter: TextEditingDeltaDeletion#34156(oldText: ´, textDeleted: ´, deletedRange: TextRange(start: 0, end: 1), selection: TextSelection.collapsed(offset: 0, affinity: TextAffinity.upstream, isDirectional: false), composing: TextRange(start: -1, end: -1))
flutter: TextEditingDeltaInsertion#b979a(oldText, textInserted: A, insertionOffset: 0, selection: TextSelection.collapsed(offset: 1, affinity: TextAffinity.downstream, isDirectional: false), composing: TextRange(start: -1, end: -1))
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ flutter doctor -v
[✓] Flutter (Channel master, 3.25.0-1.0.pre.153, on macOS 15.0 24A335 darwin-arm64, locale en-BR)
• Flutter version 3.25.0-1.0.pre.153 on channel master at /Users/angelosilvestre/dev/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17f63272a0 (5 weeks ago), 2024-08-27 19:30:17 +0000
• Engine revision 7d751acc81
• Dart version 3.6.0 (build 3.6.0-175.0.dev)
• DevTools version 2.39.0-dev.15
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/angelosilvestre/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (4 available)
• iPad Air 13-inch (M2) (mobile) • 9D3D13AF-EA1C-47A8-AADF-DEBF72D2C99E • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.90
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| a: text input,platform-ios,a: tablet,a: internationalization,e: OS-version specific,has reproducible steps,P3,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.26 | low | Major |
2,565,160,909 | flutter | [iOS] Portuguese (Portugal) compound character entry creates extra character | ### Steps to reproduce
1. Run the sample app in the iPad Air 13-inch emulator with iOS 17.5 with a Portuguese (Portugal) keyboard
2. Press `´` and then `a`
### Expected results
`Á` is inserted
### Actual results
`´á` is inserted
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(
const MaterialApp(
home: Scaffold(
body: TextInputExample(),
),
),
);
}
class TextInputExample extends StatelessWidget {
const TextInputExample({super.key});
@override
Widget build(BuildContext context) {
return const Center(
child: Padding(
padding: EdgeInsets.all(8.0),
child: TextField(
autofocus: true,
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/9fe51c8e-298a-419e-aef2-09d098d57a3c
</details>
### Logs
<details open><summary>Logs</summary>
No relevant logs.
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ flutter doctor -v
[✓] Flutter (Channel master, 3.25.0-1.0.pre.153, on macOS 15.0 24A335 darwin-arm64, locale en-BR)
• Flutter version 3.25.0-1.0.pre.153 on channel master at /Users/angelosilvestre/dev/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17f63272a0 (5 weeks ago), 2024-08-27 19:30:17 +0000
• Engine revision 7d751acc81
• Dart version 3.6.0 (build 3.6.0-175.0.dev)
• DevTools version 2.39.0-dev.15
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/angelosilvestre/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (4 available)
• iPad Air 13-inch (M2) (mobile) • 9D3D13AF-EA1C-47A8-AADF-DEBF72D2C99E • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.90
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| a: text input,platform-ios,framework,a: tablet,a: internationalization,has reproducible steps,P3,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.26 | low | Minor |
2,565,165,702 | angular | Standalone components not respecting NO_ERRORS_SCHEMA or stubbed component | ### Which @angular/* package(s) are the source of the bug?
Don't know / other
### Is this a regression?
No
### Description
According to the [Nested Component Tests](https://angular.dev/guide/testing/components-scenarios#nested-component-tests) documentation, we should be able to use `schemas: [NO_ERRORS_SCHEMA]` or a stubbed component, but I'm seeing that the nested component is compiled and available on the DOM.
*Steps to repro*
1. `npm install`
2. `ng test`
*Expected result*
Tests pass
*Actual result*
Nested component (`app-child` in repo example) is compiled and therefore its injected services need to be mocked, so it's throwing an error `No provider for HttpClient`
### Please provide a link to a minimal reproduction of the bug
https://github.com/lcecil-uw/standalone-tests
### Please provide the exception or error you saw
NullInjectorError: R3InjectorError(DynamicTestModule)[ApiService -> HttpClient -> HttpClient]:
NullInjectorError: No provider for HttpClient!
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.2.7
Node: 20.15.0
Package Manager: npm 10.7.0
OS: win32 x64
Angular: 18.2.7
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.7
@angular-devkit/build-angular 18.2.7
@angular-devkit/core 18.2.7
@angular-devkit/schematics 18.2.7
@schematics/angular 18.2.7
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
### Anything else?
_No response_ | area: testing,area: docs | low | Critical |