| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,579,369,669 | rust | False positive for path statement with no effect; removing path would hide bug | ### Code
```rust
pub trait SameSize {
const CHECK_SIZE: ();
}
impl<Src: Sized, Dst: Sized> SameSize for (Src, Dst) {
const CHECK_SIZE: () = assert!(size_of::<Src>() == size_of::<Dst>());
}
pub fn cast_ptr<Src, Dst>(ptr: *const Src) -> *const Dst
where
(Src, Dst): SameSize,
{
// Force the size check to be evaluated at compile time
<(Src, Dst) as SameSize>::CHECK_SIZE;
ptr.cast()
}
fn main() {
let ptr: *const u32 = &1;
let _works = cast_ptr::<_, core::num::NonZero<u32>>(ptr);
let _fails = cast_ptr::<_, u64>(ptr); // Compile-time error
}
```
### Current output
```
Compiling playground v0.0.1 (/playground)
warning: path statement with no effect
--> src/main.rs:14:5
|
14 | <(Src, Dst) as SameSize>::CHECK_SIZE;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(path_statements)]` on by default
error[E0080]: evaluation of `<(u32, u64) as SameSize>::CHECK_SIZE` failed
--> src/main.rs:6:28
|
6 | const CHECK_SIZE: () = assert!(size_of::<Src>() == size_of::<Dst>());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: size_of::<Src>() == size_of::<Dst>()', src/main.rs:6:28
|
= note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/main.rs:14:5
|
14 | <(Src, Dst) as SameSize>::CHECK_SIZE;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: the above error was encountered while instantiating `fn cast_ptr::<u32, u64>`
--> src/main.rs:24:18
|
24 | let _fails = cast_ptr::<_, u64>(ptr); // Compile-time error
| ^^^^^^^^^^^^^^^^^^^^^^^
For more information about this error, try `rustc --explain E0080`.
warning: `playground` (bin "playground") generated 1 warning
error: could not compile `playground` (bin "playground") due to 1 previous error; 1 warning emitted
```
### Desired output
```
Compiling playground v0.0.1 (/playground)
error[E0080]: evaluation of `<(u32, u64) as SameSize>::CHECK_SIZE` failed
--> src/main.rs:6:28
|
6 | const CHECK_SIZE: () = assert!(size_of::<Src>() == size_of::<Dst>());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: size_of::<Src>() == size_of::<Dst>()', src/main.rs:6:28
|
= note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/main.rs:14:5
|
14 | <(Src, Dst) as SameSize>::CHECK_SIZE;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: the above error was encountered while instantiating `fn cast_ptr::<u32, u64>`
--> src/main.rs:24:18
|
24 | let _fails = cast_ptr::<_, u64>(ptr); // Compile-time error
| ^^^^^^^^^^^^^^^^^^^^^^^
For more information about this error, try `rustc --explain E0080`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Rationale and extra context
The code here checks that the source pointee and destination pointee have the same size. Thus the `_works` line should be fine as `u32` and `NonZero<u32>` have the same size, but the `_fails` line should cause a compile-time error as `u32` and `u64` have different sizes.
The path statement `<(Src, Dst) as SameSize>::CHECK_SIZE;` is required for the constant `CHECK_SIZE` to be evaluated; otherwise there is no compile-time check at all, and the `_fails` line would not catch the mismatch. Removing the path statement would cause the compilation to erroneously succeed.
### Other cases
_No response_
### Rust Version
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,579,406,075 | pytorch | NJT + Flex Attention | # Issues
I'm working on this support now in #136792. Here's a list of the current issues:
1. The [notebook](https://www.internalfb.com/intern/anp/view/?id=5581056) (internal only) demonstrates the usage of a `(1, 1, sum(seqlen), sum(seqlen))` block mask for NJT. Is this inefficient? I'd expect `(1, 1, max_seqlen, max_seqlen)` as an analogue to what is done for dense, but it's a bit tricky to implement the NJT adapter for this.
* From offline discussion: if `_compile=True` is used for `create_block_mask()`, the full `(1, 1, sum(seqlen), sum(seqlen))` mask_tensor isn't materialized; this is good and recommended
* Still some exploration to be done to see if this is the most efficient way to handle NJTs
2. #137255 adds some logic that assumes a constant seqlen. This will have to be hacked around some for NJT.
3. The notebook example builds a `seq_idx` to map an index within `sum(seqlen)` -> the associated beginning offset. It doesn't account for the fact that `Q_LEN` / `KV_LEN` are rounded up to the nearest block size multiple, so out-of-bounds access occurs if `sum(seqlen)` is not a multiple of the block size.
* Turns out this is a real bug in `create_block_mask()`; should do the mod trick to avoid this (#137801)
4. ~~`create_block_mask(..., _compile=True)` throws `torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder torch.Tensor` (investigating)~~
    * Fixed this by changing the NJT wrapper generator to close over `seq_idx` implicitly instead of explicitly.
5. The way `seq_idx` is built assumes `offsets[0] == 0`. This may not be the case for some non-standard NJT views. | triaged,module: nestedtensor,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
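The `seq_idx` construction and the out-of-bounds issue in item 3 can be illustrated with a small sketch (the function name and the scalar loop are mine; the real implementation is tensorized in the PRs above): positions past `sum(seqlen)`, introduced by rounding `Q_LEN`/`KV_LEN` up to a block-size multiple, are wrapped back into range with a mod instead of indexing out of bounds.

```python
def build_seq_idx(offsets: list[int], padded_len: int) -> list[int]:
    """Map each position in the packed (sum of seqlens) dimension to the
    index of the sequence it belongs to.

    offsets are cumulative sequence lengths, e.g. [0, 3, 5, 9] for
    seqlens (3, 2, 4); padded_len is sum(seqlen) rounded up to the block
    size. Hypothetical scalar version of the "mod trick".
    """
    seq_idx = []
    for pos in range(padded_len):
        p = pos % offsets[-1]  # wrap the padded tail back into range
        bucket = 0
        while offsets[bucket + 1] <= p:  # find the [offsets[i], offsets[i+1]) bucket
            bucket += 1
        seq_idx.append(bucket)
    return seq_idx

# seqlens (3, 2, 4), block size 4 -> padded_len 12
print(build_seq_idx([0, 3, 5, 9], 12))  # [0, 0, 0, 1, 1, 2, 2, 2, 2, 0, 0, 0]
```

Without the mod, positions 9 through 11 in this example would read past the end of any lookup built over `sum(seqlen)` entries.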
2,579,426,345 | deno | deno install --prod | I have an npm monorepo:
- package.json
- deno.lock
- apps/sveltekit-front
  - package.json
- apps/deno-server
  - package.json
- packages/shared
  - package.json
sveltekit-front only has devDependencies because it is built as a static website
deno-server only has "normal" dependencies, no devDependencies.
sveltekit-front has the shared package in devDependencies
deno-server has the shared package in dependencies
I use Docker to deploy both apps.
To create the deno-server Docker image:
I copy the root package.json, the lockfile, apps/deno-server, and packages/shared, then I run `deno install` inside the root directory (I cannot just run `deno install` inside apps/deno-server because, in a monorepo, I need the packages/shared symlink).
The problem is that `deno install` installs all dependencies, even devDependencies that are not needed for production.
Maybe I am missing something.
Thanks
| feat,install | low | Minor |
2,579,434,101 | Python | Wildcard pattern matching with FFT in O((n+m) log (n+m)) | ### Feature description
[Reference](https://www.informatika.bg/resources/StringMatchingWithFFT.pdf) | enhancement | medium | Minor |
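The linked technique reduces wildcard matching to cross-correlations. Below is a minimal Python sketch of the underlying identity (the function name and the direct inner loop are illustrative only; an actual O((n+m) log (n+m)) submission would evaluate the three expanded terms with FFT-based convolutions, e.g. via `numpy.fft`):

```python
def wildcard_match(text: str, pattern: str, wildcard: str = "?") -> list[int]:
    """Return all shifts j where pattern matches text, treating `wildcard`
    as matching any character.

    With the wildcard encoded as 0, pattern p matches at shift j iff
        sum_i p[i] * t[i+j] * (p[i] - t[i+j])**2 == 0,
    since every term is non-negative and vanishes exactly when the symbols
    agree or either one is a wildcard.
    """
    def encode(s: str) -> list[int]:
        return [0 if c == wildcard else ord(c) for c in s]

    t, p = encode(text), encode(pattern)
    n, m = len(t), len(p)
    matches = []
    for j in range(n - m + 1):
        # Direct evaluation for clarity; expanding p*t*(p-t)^2 gives three
        # cross-correlations of fixed powers, each computable with FFT.
        score = sum(p[i] * t[i + j] * (p[i] - t[i + j]) ** 2 for i in range(m))
        if score == 0:
            matches.append(j)
    return matches

print(wildcard_match("aabbab", "a?b"))  # [0, 1]
```

The FFT speedup comes entirely from replacing the inner loop: each of the three expanded terms is a correlation between powers of the encoded pattern and text.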
2,579,460,445 | PowerToys | TEXT EXTRACTOR OCR NOT WORKING | ### Microsoft PowerToys version
v0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
TextExtractor
### Steps to reproduce
Press Win + Shift + T to launch Text Extractor.
### ✔️ Expected Behavior
Pressing Win + Shift + T should open Text Extractor automatically.
### ❌ Actual Behavior
Pressing Win + Shift + T does nothing; Text Extractor does not open. [PowerToysReport_2024-10-10-23-09-38.zip](https://github.com/user-attachments/files/17332051/PowerToysReport_2024-10-10-23-09-38.zip)
I am using Windows 11. Text Extractor OCR is not working: as soon as I click it, PowerToys exits automatically, while other tools (Resize, Measuring Tool, etc.) work fine.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,579,491,539 | bitcoin | Consider making 27.x Long-Term Support (LTS) | As mentioned in the patch notes for 28.x, since this release drops support for glibc older than 2.31, it will not run on several widely used Linux distros, including RHEL 8 and the RHELatives that base off of this, such as AlmaLinux 8 and Rocky Linux 8.
In case anyone is unaware, RHEL 8 and its derivatives are still in active support, and will in fact not be EOL until 2029. While I don't have exact install numbers, I would not be surprised if more than half of Bitcoin Core installations that run on dedicated Linux servers will not be able to update to 28.x as a result of this, without first updating to a new OS.
As such, while I'm aware that older release branches usually have important updates backported for some time, I feel it would make sense to "officially" designate 27.x as a LTS with an extended window of backported updates - maybe not for the full four years and change until these OSes are EOL, but at least for a good part of it. Otherwise, considering the new security disclosure policy, I fear there will be a lot of clients with published exploits still running when 27.x advisories start getting published. | Feature | low | Major |
2,579,527,949 | langchain | KeyError '*' in parse_result when using `with_structured_output` on ChatAnthropic | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, ConfigDict, Field
from langchain_core.messages import SystemMessage, HumanMessage
class RequestAssessmentResponse(BaseModel):
"""
Respond to the reviewer's feedback with an assessment of the requested changes.
"""
model_config = ConfigDict(title="request_assessment")
request_for_changes: bool = Field(description="Set to True if the reviewer requested changes; otherwise, False.")
justification: str = Field(description="Justify why you think it's a change request.")
model = ChatAnthropic(model="claude-3-5-sonnet-20240620").with_structured_output(RequestAssessmentResponse, method="json_schema")
model.invoke([SystemMessage("Your goal is to determine whether the comment is a direct request for changes or not"), HumanMessage("Change the name of the file foo.py.")])
```
### Error Message and Stack Trace (if applicable)
```
KeyError Traceback (most recent call last)
Cell In[1], line 17
13 justification: str = Field(description="Justify why you think it's a change request.")
16 model = ChatAnthropic(model="claude-3-5-sonnet-20240620").with_structured_output(RequestAssessmentResponse, method="json_schema")
---> 17 model.invoke([SystemMessage("Your goal is to determine whether the comment is a direct request for changes or not"), HumanMessage("Change the name of the file foo.py.")])
File ~/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
3022 input = context.run(step.invoke, input, config, **kwargs)
3023 else:
-> 3024 input = context.run(step.invoke, input, config)
3025 # finish the root run
3026 except BaseException as e:
File ~/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py:193, in BaseOutputParser.invoke(self, input, config, **kwargs)
186 def invoke(
187 self,
188 input: Union[str, BaseMessage],
189 config: Optional[RunnableConfig] = None,
190 **kwargs: Any,
191 ) -> T:
192 if isinstance(input, BaseMessage):
--> 193 return self._call_with_config(
194 lambda inner_input: self.parse_result(
195 [ChatGeneration(message=inner_input)]
196 ),
197 input,
198 config,
199 run_type="parser",
200 )
201 else:
202 return self._call_with_config(
203 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
204 input,
205 config,
206 run_type="parser",
207 )
File ~/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py:1927, in Runnable._call_with_config(self, func, input, config, run_type, serialized, **kwargs)
1923 context = copy_context()
1924 context.run(_set_config_context, child_config)
1925 output = cast(
1926 Output,
-> 1927 context.run(
1928 call_func_with_variable_args, # type: ignore[arg-type]
1929 func, # type: ignore[arg-type]
1930 input, # type: ignore[arg-type]
1931 config,
1932 run_manager,
1933 **kwargs,
1934 ),
1935 )
1936 except BaseException as e:
1937 run_manager.on_chain_error(e)
File ~/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py:396, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
394 if run_manager is not None and accepts_run_manager(func):
395 kwargs["run_manager"] = run_manager
--> 396 return func(input, **kwargs)
File ~/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py:194, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
186 def invoke(
187 self,
188 input: Union[str, BaseMessage],
189 config: Optional[RunnableConfig] = None,
190 **kwargs: Any,
191 ) -> T:
192 if isinstance(input, BaseMessage):
193 return self._call_with_config(
--> 194 lambda inner_input: self.parse_result(
195 [ChatGeneration(message=inner_input)]
196 ),
197 input,
198 config,
199 run_type="parser",
200 )
201 else:
202 return self._call_with_config(
203 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
204 input,
205 config,
206 run_type="parser",
207 )
File ~/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/openai_tools.py:293, in PydanticToolsParser.parse_result(self, result, partial)
288 msg = (
289 f"Tool arguments must be specified as a dict, received: "
290 f"{res['args']}"
291 )
292 raise ValueError(msg)
--> 293 pydantic_objects.append(name_dict[res["type"]](**res["args"]))
294 except (ValidationError, ValueError) as e:
295 if partial:
KeyError: 'request_assessment'
```
### Description
When I use Pydantic models with `model_config = ConfigDict(title="request_assessment")`, the exception `KeyError: 'request_assessment'` is raised when using `ChatAnthropic`. With `ChatOpenAI`, there are no problems.
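The failure mode can be reproduced in miniature without LangChain. This is a simplified sketch, not LangChain's actual code path: it assumes the parser keys its lookup by the Pydantic class name while the tool call returned by Anthropic carries the schema title, so a custom `title` makes the lookup miss.

```python
# Stand-in for the Pydantic model whose schema title was overridden.
class RequestAssessmentResponse:
    schema_title = "request_assessment"  # from ConfigDict(title=...)

# The parser keys its lookup by the class *name* ...
name_dict = {RequestAssessmentResponse.__name__: RequestAssessmentResponse}

# ... while the tool call comes back named after the schema *title*.
tool_call = {"type": RequestAssessmentResponse.schema_title, "args": {}}

try:
    name_dict[tool_call["type"]](**tool_call["args"])
except KeyError as err:
    print("lookup failed:", err)  # lookup failed: 'request_assessment'
```

If that assumption holds, either the lookup would need to account for a custom schema `title`, or the tool name sent to the provider would need to stay in sync with the class name.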
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #45-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 30 12:02:04 UTC 2024
> Python Version: 3.12.7 (main, Oct 1 2024, 22:28:49) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.133
> langchain_anthropic: 0.2.3
> langchain_chroma: 0.1.4
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.35
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic: 0.36.0
> async-timeout: Installed. No version info available.
> chromadb: 0.5.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.112.2
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.0
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.32
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
``` | ๐ค:bug,investigate | low | Critical |
2,579,554,439 | rust | ICE: `invalid immediate for given destination place: value ScalarPair .. does not match ABI` | <!--
[31mICE[0m: Rustc ./a.rs '-Zmir-opt-level=5 -Zvalidate-mir -Zcrate-attr=feature(non_lifetime_binders) -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: compiler/rustc_const_eval/src/interpret/operand.rs:157:17: invalid immediate for given destination place: value ScalarPair(alloc5<imm>, 0x0000000000000001) does not match ABI Scalar(Initialized { value: Pointer(AddressSpace(0)), valid_range: 1..=18446744073709551615 }))', 'error: internal compiler error: compiler/rustc_const_eval/src/interpret/operand.rs:157:17: invalid immediate for given destination place: value ScalarPair(alloc5<imm>, 0x0000000000000001) does not match ABI Scalar(Initialized { value: Pointer(AddressSpace(0)), valid_range: 1..=18446744073709551615 }))'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
#![feature(non_lifetime_binders)]
fn Brick()
where
for<T> T: Copy,
{
let mut foo: Option<Box<_>> = Some(Box::new(8));
let f = move || {
println!("'{}'", foo.unwrap());
};
f();
}
````
original:
````rust
#![feature(attr_literals)]
fn Brick() where for<T> T: Copy {
let mut foo: Option<Box<_>> = Some(Box::new(8));
let f = move|| {
match foo {
None => {},
Some(x) => {
foo = Some(x);
}
}
println!("'{}'", foo.unwrap());
};
f();
}
fn main() {
foo();
}
````
Version information
````
rustc 1.83.0-nightly (8d94e06ec 2024-10-10)
binary: rustc
commit-hash: 8d94e06ec9758b5c03ea77bb5dab22a1a76bc261
commit-date: 2024-10-10
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir -Zcrate-attr=feature(non_lifetime_binders)`
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
warning: the feature `non_lifetime_binders` is incomplete and may not be safe to use and/or cause compiler crashes
--> <crate attribute>:1:9
|
1 | feature(non_lifetime_binders)
| ^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #108185 <https://github.com/rust-lang/rust/issues/108185> for more information
= note: `#[warn(incomplete_features)]` on by default
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.UkDLZIiGWuhn/rustc_testrunner_tmpdir_reporting.sOtl1K9lpDK5/mvce.rs:10:2
|
10 | }
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.UkDLZIiGWuhn/rustc_testrunner_tmpdir_reporting.sOtl1K9lpDK5/mvce.rs`
warning: variable does not need to be mutable
--> /tmp/icemaker_global_tempdir.UkDLZIiGWuhn/rustc_testrunner_tmpdir_reporting.sOtl1K9lpDK5/mvce.rs:5:9
|
5 | let mut foo: Option<Box<_>> = Some(Box::new(8));
| ----^^^
| |
| help: remove this `mut`
|
= note: `#[warn(unused_mut)]` on by default
error: internal compiler error: compiler/rustc_const_eval/src/interpret/operand.rs:157:17: invalid immediate for given destination place: value ScalarPair(alloc5<imm>, 0x0000000000000001) does not match ABI Scalar(Initialized { value: Pointer(AddressSpace(0)), valid_range: 1..=18446744073709551615 }))
thread 'rustc' panicked at compiler/rustc_const_eval/src/interpret/operand.rs:157:17:
Box<dyn Any>
stack backtrace:
0: 0x71a61e1d6a5a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::ha20e66cea8d40da6
1: 0x71a61ea034a6 - core::fmt::write::hf2daaa73a1dc1c93
2: 0x71a61fc02e51 - std::io::Write::write_fmt::h378271f9be5fe2f8
3: 0x71a61e1d68b2 - std::sys::backtrace::BacktraceLock::print::hc64e8d2bde51fd49
4: 0x71a61e1d8d86 - std::panicking::default_hook::{{closure}}::h0d83bed2db8cab1c
5: 0x71a61e1d8bd0 - std::panicking::default_hook::h128ef1b79513126f
6: 0x71a61d22df6f - std[a9199b77cdff7b9b]::panicking::update_hook::<alloc[226919ebccef61a6]::boxed::Box<rustc_driver_impl[92ba7bf89b2038aa]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x71a61e1d9498 - std::panicking::rust_panic_with_hook::h5cd591a4b9cf7e7b
8: 0x71a61d267a51 - std[a9199b77cdff7b9b]::panicking::begin_panic::<rustc_errors[62337987763b3873]::ExplicitBug>::{closure#0}
9: 0x71a61d25aaf6 - std[a9199b77cdff7b9b]::sys::backtrace::__rust_end_short_backtrace::<std[a9199b77cdff7b9b]::panicking::begin_panic<rustc_errors[62337987763b3873]::ExplicitBug>::{closure#0}, !>
10: 0x71a61d25aab3 - std[a9199b77cdff7b9b]::panicking::begin_panic::<rustc_errors[62337987763b3873]::ExplicitBug>
11: 0x71a61d2712e1 - <rustc_errors[62337987763b3873]::diagnostic::BugAbort as rustc_errors[62337987763b3873]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x71a61d8ca144 - rustc_middle[d89ba7dfe97f8d9e]::util::bug::opt_span_bug_fmt::<rustc_span[80b8a8e8ce77617b]::span_encoding::Span>::{closure#0}
13: 0x71a61d8affca - rustc_middle[d89ba7dfe97f8d9e]::ty::context::tls::with_opt::<rustc_middle[d89ba7dfe97f8d9e]::util::bug::opt_span_bug_fmt<rustc_span[80b8a8e8ce77617b]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x71a61d8afe5b - rustc_middle[d89ba7dfe97f8d9e]::ty::context::tls::with_context_opt::<rustc_middle[d89ba7dfe97f8d9e]::ty::context::tls::with_opt<rustc_middle[d89ba7dfe97f8d9e]::util::bug::opt_span_bug_fmt<rustc_span[80b8a8e8ce77617b]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x71a61adf99a0 - rustc_middle[d89ba7dfe97f8d9e]::util::bug::bug_fmt
16: 0x71a61c1dbf23 - <rustc_const_eval[71c2be5c4d1e6846]::interpret::eval_context::InterpCx<rustc_const_eval[71c2be5c4d1e6846]::const_eval::machine::CompileTimeMachine>>::eval_rvalue_into_place
17: 0x71a61c173403 - rustc_const_eval[71c2be5c4d1e6846]::const_eval::eval_queries::eval_to_allocation_raw_provider
18: 0x71a61f11f2f6 - rustc_query_impl[121dfa8c420f6771]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[121dfa8c420f6771]::query_impl::eval_to_allocation_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 24usize]>>
19: 0x71a61f11eb1a - rustc_query_system[f00ba4d010ca6a7e]::query::plumbing::try_execute_query::<rustc_query_impl[121dfa8c420f6771]::DynamicConfig<rustc_query_system[f00ba4d010ca6a7e]::query::caches::DefaultCache<rustc_middle[d89ba7dfe97f8d9e]::ty::ParamEnvAnd<rustc_middle[d89ba7dfe97f8d9e]::mir::interpret::GlobalId>, rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[121dfa8c420f6771]::plumbing::QueryCtxt, false>
20: 0x71a61f11e6ef - rustc_query_impl[121dfa8c420f6771]::query_impl::eval_to_allocation_raw::get_query_non_incr::__rust_end_short_backtrace
21: 0x71a61f12055f - rustc_const_eval[71c2be5c4d1e6846]::const_eval::eval_queries::eval_to_const_value_raw_provider
22: 0x71a61f120376 - rustc_query_impl[121dfa8c420f6771]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[121dfa8c420f6771]::query_impl::eval_to_const_value_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 24usize]>>
23: 0x71a61f11eadd - rustc_query_system[f00ba4d010ca6a7e]::query::plumbing::try_execute_query::<rustc_query_impl[121dfa8c420f6771]::DynamicConfig<rustc_query_system[f00ba4d010ca6a7e]::query::caches::DefaultCache<rustc_middle[d89ba7dfe97f8d9e]::ty::ParamEnvAnd<rustc_middle[d89ba7dfe97f8d9e]::mir::interpret::GlobalId>, rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[121dfa8c420f6771]::plumbing::QueryCtxt, false>
24: 0x71a61f11e5f3 - rustc_query_impl[121dfa8c420f6771]::query_impl::eval_to_const_value_raw::get_query_non_incr::__rust_end_short_backtrace
25: 0x71a61b620a96 - <rustc_middle[d89ba7dfe97f8d9e]::ty::context::TyCtxt>::const_eval_resolve
26: 0x71a61f4d7074 - <rustc_mir_transform[494498406daaddfd]::gvn::VnState>::insert
27: 0x71a61f4d2d4d - <rustc_mir_transform[494498406daaddfd]::gvn::VnState>::simplify_operand
28: 0x71a61be885f4 - <rustc_mir_transform[494498406daaddfd]::gvn::GVN as rustc_mir_transform[494498406daaddfd]::pass_manager::MirPass>::run_pass
29: 0x71a61ea0b70d - rustc_mir_transform[494498406daaddfd]::pass_manager::run_passes_inner
30: 0x71a61ed082e2 - rustc_mir_transform[494498406daaddfd]::optimized_mir
31: 0x71a61ed06ba1 - rustc_query_impl[121dfa8c420f6771]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[121dfa8c420f6771]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 8usize]>>
32: 0x71a61ed58778 - rustc_query_system[f00ba4d010ca6a7e]::query::plumbing::try_execute_query::<rustc_query_impl[121dfa8c420f6771]::DynamicConfig<rustc_query_system[f00ba4d010ca6a7e]::query::caches::DefIdCache<rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[121dfa8c420f6771]::plumbing::QueryCtxt, false>
33: 0x71a61ed57d33 - rustc_query_impl[121dfa8c420f6771]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
34: 0x71a61bcd0cc4 - <rustc_middle[d89ba7dfe97f8d9e]::ty::context::TyCtxt>::instance_mir
35: 0x71a61ee7b30a - rustc_interface[70865a4857b6d971]::passes::run_required_analyses
36: 0x71a61f75c35e - rustc_interface[70865a4857b6d971]::passes::analysis
37: 0x71a61f75c331 - rustc_query_impl[121dfa8c420f6771]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[121dfa8c420f6771]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 1usize]>>
38: 0x71a61f8fc62e - rustc_query_system[f00ba4d010ca6a7e]::query::plumbing::try_execute_query::<rustc_query_impl[121dfa8c420f6771]::DynamicConfig<rustc_query_system[f00ba4d010ca6a7e]::query::caches::SingleCache<rustc_middle[d89ba7dfe97f8d9e]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[121dfa8c420f6771]::plumbing::QueryCtxt, false>
39: 0x71a61f8fc30f - rustc_query_impl[121dfa8c420f6771]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
40: 0x71a61f753b5e - rustc_interface[70865a4857b6d971]::interface::run_compiler::<core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>, rustc_driver_impl[92ba7bf89b2038aa]::run_compiler::{closure#0}>::{closure#1}
41: 0x71a61f7d9450 - std[a9199b77cdff7b9b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[70865a4857b6d971]::util::run_in_thread_with_globals<rustc_interface[70865a4857b6d971]::util::run_in_thread_pool_with_globals<rustc_interface[70865a4857b6d971]::interface::run_compiler<core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>, rustc_driver_impl[92ba7bf89b2038aa]::run_compiler::{closure#0}>::{closure#1}, core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>>::{closure#0}, core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>>
42: 0x71a61f7d9b17 - <<std[a9199b77cdff7b9b]::thread::Builder>::spawn_unchecked_<rustc_interface[70865a4857b6d971]::util::run_in_thread_with_globals<rustc_interface[70865a4857b6d971]::util::run_in_thread_pool_with_globals<rustc_interface[70865a4857b6d971]::interface::run_compiler<core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>, rustc_driver_impl[92ba7bf89b2038aa]::run_compiler::{closure#0}>::{closure#1}, core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>>::{closure#0}, core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[a4436d16a4783843]::result::Result<(), rustc_span[80b8a8e8ce77617b]::ErrorGuaranteed>>::{closure#1} as core[a4436d16a4783843]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
43: 0x71a61f7daa01 - std::sys::pal::unix::thread::Thread::new::thread_start::h1576647e75736287
44: 0x71a620e6139d - <unknown>
45: 0x71a620ee649c - <unknown>
46: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.83.0-nightly (8d94e06ec 2024-10-10) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z crate-attr=feature(non_lifetime_binders) -Z dump-mir-dir=dir
query stack during panic:
#0 [eval_to_allocation_raw] const-evaluating + checking `Brick::{closure#0}::promoted[0]`
#1 [eval_to_const_value_raw] simplifying constant for the type system `Brick::{closure#0}::promoted[0]`
end of query stack
error: aborting due to 2 previous errors; 2 warnings emitted
For more information about this error, try `rustc --explain E0601`.
```
</p>
</details>
<!--
query stack:
#0 [eval_to_allocation_raw] const-evaluating + checking `Brick::{closure#0}::promoted[0]`
#1 [eval_to_const_value_raw] simplifying constant for the type system `Brick::{closure#0}::promoted[0]`
-->
@rustbot label +F-non_lifetime_binders | I-ICE,T-compiler,C-bug,A-mir-opt-inlining,S-bug-has-test,F-non_lifetime_binders,A-mir-opt-GVN | low | Critical |
2,579,555,672 | next.js | Intercepting Routes doesn't works through rewrites | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Next.js 14.2.15
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.7.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 14.2.15 // Latest available version is detected (14.2.15).
eslint-config-next: 14.2.13
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
```
### Which example does this report relate to?
https://github.com/vercel/nextgram
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
The intercepting route does not work because the route goes through a rewrite.
### Expected Behavior
The intercepting route should work as specified in the documentation, even when the route goes through a rewrite.
### To Reproduce
```
// next.config.js
async rewrites() {
return [
{
source: "/:locale/:vacancy/:id",
destination: "/:locale/vacancy/:id",
}
]
}
```
```
app
-- [locale]
-- @modal
-- (.)vacancy
-- [id]
-- page.tsx
-- vacancy
-- [id]
-- page.tsx
``` | Parallel & Intercepting Routes | low | Critical |
2,579,605,637 | deno | 'ArrayBufferView' is not assignable to parameter of type 'string | ArrayBufferView | Version: Deno 2.0
While trying to build an Astro site with Deno 2.0 I hit an error about mismatched types. I have managed to create a reproducible snippet that should help track this down:
```typescript
import { writeFileSync } from "node:fs";
writeFileSync("buffer.txt", returnBuffer());
function returnBuffer(): ArrayBufferView {
return new Uint8Array([1, 2, 3, 4, 5]);
}
```
This seems to be valid typescript, but `deno check` fails with:
```
error: TS2345 [ERROR]: Argument of type 'ArrayBufferView' is not assignable to parameter of type 'string | ArrayBufferView'.
Type 'ArrayBufferView' is missing the following properties from type 'DataView': getFloat32, getFloat64, getInt8, getInt16, and 19 more.
writeFileSync("buffer.txt", returnBuffer());
```
This could be a duplicate of https://github.com/denoland/deno/issues/22381
| needs investigation,types | low | Critical |
2,579,617,456 | svelte | Svelte 5: structuredClone tries to clone proxy object instead of its contents | ### Describe the bug
Attempting to use `structuredClone` on any stateful object results in an error, with zero indication of a problem before running the code, in both JavaScript and TypeScript.
### Reproduction
https://svelte-5-preview.vercel.app/#H4sIAAAAAAAACm2QwUrEQAyGXyVEobtQ2nu3LojiwYvHPTgeujPpOjqblJmMq5S-uwwLHoq35OP7_0BmHH2ghN3rjDycCTu8nyasUX-msqQvCkpYY5IcbSF9stFPujdsNJBCThThDm6TDkqbuWCjpaoDg8_yzvAoZLDwZbszXAYrnCRQE-S0KfntDtoWDhI_E4ye6T-nKZ1rEYYEFwphHUgas9UcyT0EYboeuYafBh_IgQrQN9msBNXKrkAYqoNnJ5eqg5v-5fhBVvdgJQcHLApHAltU1xju27-PYI1ncX705LDTmGl5W34BHhXdL2IBAAA=
### Logs
_No response_
### System Info
```shell
System:
OS: Windows 11 10.0.22631
CPU: (20) x64 12th Gen Intel(R) Core(TM) i7-12700KF
Memory: 48.76 GB / 63.85 GB
Binaries:
Node: 20.17.0 - C:\Program Files\nodejs\node.EXE
npm: 10.8.2 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.3.0 - ~\AppData\Local\pnpm\pnpm.EXE
bun: 1.1.30 - ~\.bun\bin\bun.EXE
Browsers:
Edge: Chromium (127.0.2651.74)
npmPackages:
svelte: ^5.0.0-next.264 => 5.0.0-next.264
```
### Severity
annoyance | needs discussion | medium | Critical |
2,579,657,021 | electron | [Bug]: HTML select tag not rendering the dropdown | I'm reopening as this is consistently still an issue and I know adding comments to old closed issues (https://github.com/electron/electron/issues/29665) isn't very effective in exposing issues that have reoccurred โ more details coming shortly.
### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Electron Version
31.x
### What operating system are you using?
Mac
### What arch are you using?
x64
### Last Known Working Electron version
30.*.* (See [comment](https://github.com/electron/electron/issues/29665#issuecomment-2250003376))
### Expected behavior
When you add a `select` tag to your HTML and click on it, a dropdown with the options should open.
### Actual behavior
When you click the select, no dropdown appears. Reported and fixed previously in https://github.com/electron/electron/issues/29665 back in 2021. Some users have reported that disabling and then re-enabling hardware acceleration fixed it for them. | platform/macOS,31-x-y | low | Critical |
2,579,672,126 | PowerToys | Shortcut to Correct Misspelled Words in Bulk Using Spellcheck Suggestions | ### Description of the new feature / enhancement
I'd like to suggest a new feature that integrates a faster way to correct multiple misspelled words in a paragraph using a keyboard shortcut. When typing quickly, users often make minor spelling mistakes which are highlighted with red underlines by the spellchecker.
The idea is:
1. After writing a paragraph with several spelling errors (underlined in red), the user can use CTRL + ALT + Arrow Keys to quickly select a block of text (sentence, paragraph, or the entire document).
2. A new customizable shortcut (to be defined) would then allow PowerToys to automatically correct all misspelled words in the selected text, using the top suggestions from the spellchecker in one go.
### Scenario when this would be used?
This feature would be especially useful when dealing with long documents or emails, where manually correcting each individual mistake is time-consuming. I think this would be a really useful and cool feature. It came to mind because it would help me a lot in my work, and I hope it can be implemented. I'm not a developer, so I'm not sure how difficult it would be to create, but I trust in the strength of GitHub developers!
### Supporting information
1. High Frequency of Small Errors: Research shows that small, non-contextual errors such as typos are among the most common mistakes made by typists. According to a study analyzing over 136 million keystrokes, fast typists often make fewer large mistakes but still produce small errors like extra or missing letters due to the speed of typing. These small errors require quick correction, which currently takes significant manual effort.
https://www.sciencedaily.com/releases/2018/04/180405101720.htm
2. Time Wasted on Corrections: Minor errors like typos account for up to 58% of data inaccuracies in business settings. These mistakes can lead to wasted time and effort when trying to manually correct them. In fact, inefficiencies related to manual corrections cost businesses significant resources every year. For example, companies lose 20-30% of their revenue due to time spent on correcting such small issues.
https://www.datasciencecentral.com/are-typos-and-small-mistakes-making-your-business-data-inaccurate/
3. Cognitive Load and Proofreading Efficiency: Studies show that certain proofreading methods can improve the detection of minor errors, but many existing tools or processes don't fully capitalize on these findings. Reading aloud is one proven technique to catch small mistakes more effectively, yet it's time-consuming and impractical for many. This highlights the need for more automated, efficient solutions like the bulk correction shortcut you're suggesting.
https://www.psychologytoday.com/intl/blog/your-future-self/202204/simple-and-effective-cognitive-method-catch-typos-and-other-errors
In summary, small spelling mistakes are frequent, consume a disproportionate amount of time to fix, and impact productivity, especially in professional environments. A feature that allows users to correct multiple misspelled words at once would address a widespread issue of wasted time and improve efficiency. | Needs-Triage | low | Critical |
2,579,700,748 | rust | `rustdoc` lint to flag potential intra-doc links | One of the common review comments in the Linux kernel is that there is a missed intra-doc link. It would be nice to have a lint (or two) to flag these cases automatically. In addition, such a lint would help new Rust developers become aware of what intra-doc links are.
There are two cases: the potential intra-doc link has backticks already, or it does not even have backticks (i.e. it was not even formatted properly using Markdown). Examples taken from LKML reviews:
```rust
/// Flags passed to `XArray::new` to configure the `XArray`.
/// Flags passed to [`XArray::new`] to configure the [`XArray`].
```
```rust
/// Converts the whole CString to lowercase.
/// Converts the whole [`CString`] to lowercase.
```
Both cases will probably have false positives and thus projects may want to trigger them as an opt-in build mode (e.g. in the kernel we may run with a bigger set of warnings enabled to spot potential cleanups).
Due to the false positives, it may make sense to allow projects to customize what kind of "kinds" of entities to run it for, especially for the no-backticks case. For instance, for types or constants with multi-word names in projects following the usual naming conventions (e.g. `MyType` or `MY_CONSTANT`), false positives may be unlikely. If someone has written "Returns MyType", then it is likely the lint applies; however, if they wrote "This item may run something" it is likely they didn't mean to link to a function called `run`.
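As an illustration only (this is a hypothetical sketch, not a proposal for the actual `rustdoc` implementation), the naming-convention heuristic could look roughly like this in Python: flag words outside backticks that follow multi-capital or SCREAMING_SNAKE naming conventions.

```python
import re

def candidates(doc_line: str) -> list[str]:
    # drop spans that are already formatted with backticks
    stripped = re.sub(r"`[^`]*`", "", doc_line)
    words = re.findall(r"\b[A-Za-z][A-Za-z0-9_]*\b", stripped)
    # heuristic: names with >= 2 capital letters (CamelCase like CString)
    # or an underscore (MY_CONSTANT) are unlikely to be ordinary prose
    return [w for w in words
            if sum(c.isupper() for c in w) >= 2 or "_" in w]

assert candidates("Converts the whole CString to lowercase.") == ["CString"]
assert candidates("Flags passed to `XArray::new` to configure it.") == []
```

A real lint would of course cross-check the candidates against items actually in scope, which is exactly the information `rustdoc` has and Clippy lacks.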
Additionally, users may want to run the no-backtick lint even if they do not want to actually add intra-doc links, since it could spot missing backticks (i.e. bad formatting) with less false positives (since the compiler has more information). Clippy has a related lint, [`doc_markdown`](https://rust-lang.github.io/rust-clippy/master/index.html#/doc_markdown), which helps identifying missing backticks.
These lints could be part of Clippy, but `rustdoc` already knows what could be an intra-doc link or not.
Cc @GuillaumeGomez
@rustbot label T-rustdoc
@rustbot label A-intra-doc-links
@rustbot label A-rust-for-linux | T-rustdoc,A-lints,C-feature-request,A-intra-doc-links,A-rust-for-linux | low | Minor |
2,579,763,099 | rust | Tracking Issue for GPU-offload | <!--
Remember to add team labels to the tracking issue.
For a language team feature, this would e.g., be `T-lang`.
Such a feature should also be labeled with e.g., `F-my_feature`.
This label is used to associate issues (e.g., bugs and design questions) to the feature.
-->
This is a tracking issue for the GPU offload experiment.
The feature gate for the issue will be `#![feature(gpu_offload)]`.
This work is part of the [project goal](https://rust-lang.github.io/rust-project-goals/2024h2/Rust-for-SciComp.html) for Scientific Computing in Rust.
This project will use LLVM's offload infrastructure, which is already used to support OpenMP Offload in other languages like C++.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
<!--
Include each step required to complete the feature. Typically this is a PR
implementing a feature, followed by a PR that stabilises the feature. However
for larger features an implementation could be broken up into multiple PRs.
-->
- [x] Get lang experiment approved.
- We approved this in the lang triage meeting on 2024-05-01.
- [ ] Land the experimental implementation in nightly.
- https://github.com/rust-lang/rust/pull/131527
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Formatting for new syntax has been added to the [Style Guide] ([nightly-style-procedure])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Implementation history
<!--
Include a list of all the PRs that were involved in implementing the feature.
-->
| T-lang,T-compiler,T-bootstrap,C-tracking-issue,F-gpu_offload | low | Critical |
2,579,797,833 | ollama | Qwen 2.5 72B missing stop parameter | ### What is the issue?
Often doesn't stop generating...

PARAMETER stop <|endoftext|>
seems to be missing in the model configuration. Adding it solved the problem.
### OS
Ubuntu 22.04.5 LTS (but ollama runs in official docker container)
### GPU
2 x RTX 4090
### CPU
AMD Ryzen 9 7950X
### Ollama version
0.3.12 | bug | low | Major |
2,579,814,800 | rust | Tracking Issue for `const_is_char_boundary` | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(const_is_char_boundary)]`
This is a tracking issue for using `str::is_char_boundary` in `const`, which allows checking whether the `index`-th byte is the first byte in a UTF-8 code point sequence or the end of the string during const-eval.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
```rust
// core::str
impl str {
pub const fn is_char_boundary(&self, index: usize) -> bool;
}
```
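For intuition, the check is roughly "index 0, index `len`, or a byte that is not a UTF-8 continuation byte". A rough Python model of that logic (illustrative only, not the actual implementation):

```python
def is_char_boundary(b: bytes, index: int) -> bool:
    # a UTF-8 continuation byte has the bit pattern 0b10xxxxxx
    if index == 0 or index == len(b):
        return True
    if index > len(b):
        return False
    return (b[index] & 0xC0) != 0x80

s = "héllo".encode("utf-8")  # 'é' occupies bytes 1..2
assert is_char_boundary(s, 0)
assert is_char_boundary(s, 1)
assert not is_char_boundary(s, 2)  # inside the 2-byte 'é'
assert is_char_boundary(s, len(s))
assert not is_char_boundary(s, len(s) + 1)  # out of bounds
```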
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] Implementation: #131520
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,579,818,572 | go | all: end support for macOS 11 in Go 1.25 | ### Proposal Details
Go added support for macOS 15, which was publicly released this September. After internal discussion, we believe the summer of 2025 will be a good time to drop support for macOS 11. The last security update it received was in [September 2023](https://support.apple.com/en-us/106365).
On behalf of @golang/release, I propose we announce the end of support in the Go 1.24 release notes, and disable the builder for Go 1.25. Go 1.24 (security patches until Feb 2026) will be the last release with macOS 11 support.
For context, https://github.com/golang/go/issues/64207#issuecomment-1843590873 contains some history as well as future patterns if we keep the same cadence.
CC @golang/darwin.
| Proposal,Proposal-Accepted,early-in-cycle | low | Major |
2,579,853,185 | rust | Tracking Issue for `const` `str::split_at*` | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(const_str_split_at)]`
This is a tracking issue for using `str::split_at`, `str::split_at_mut`, `str::split_at_checked`, `str::split_at_mut_checked` in `const`, which allows dividing one (mutable) string slice into two (fallibly) at an index during const-eval.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
```rust
// core::str
impl str {
pub const fn split_at(&self, mid: usize) -> (&str, &str);
pub const fn split_at_mut(&mut self, mid: usize) -> (&mut str, &mut str);
pub const fn split_at_checked(&self, mid: usize) -> Option<(&str, &str)>;
pub const fn split_at_mut_checked(&mut self, mid: usize) -> Option<(&mut str, &mut str)>;
}
```
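The checked variants return `None` when `mid` is out of bounds or not a character boundary, where the panicking variants would panic. A self-contained Python sketch of `split_at_checked`'s contract (illustrative, not the real implementation):

```python
def split_at_checked(s: str, mid: int):
    # operate on the UTF-8 encoding, as Rust's &str does;
    # mid must be 0, len, or a non-continuation byte (not 0b10xxxxxx)
    b = s.encode("utf-8")
    on_boundary = (
        mid == 0
        or mid == len(b)
        or (mid < len(b) and (b[mid] & 0xC0) != 0x80)
    )
    if not on_boundary:
        return None  # Rust would return Option::None
    return b[:mid].decode("utf-8"), b[mid:].decode("utf-8")

assert split_at_checked("héllo", 3) == ("hé", "llo")
assert split_at_checked("héllo", 2) is None   # inside the 2-byte 'é'
assert split_at_checked("héllo", 99) is None  # out of bounds
```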
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] Implementation: #131520
- [ ] Blocked on #131516
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,579,854,870 | godot | GPU particles won't emit subparticles at end when parent FPS is sufficiently higher than child FPS | ### Tested versions
Godot v4.4.dev (db66bd35a)
Godot v4.3.stable.mono
### System information
Godot v4.4.dev (76a135926) - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated AMD Radeon RX 6650 XT (Advanced Micro Devices, Inc.; 31.0.24033.1003) - AMD Ryzen 7 3800X 8-Core Processor (16 threads)
### Issue description
If a parent GPU particle node's FPS is significantly greater than its sub-emitter's, the sub-emitter will not emit particles. Doing a deep dive into the matter, I believe I have exposed the main culprit: `vkCmdCopyBuffer` calls will write to the sub-emitter buffer while a read on the next frame is required to reset and activate particles in the child.
As you can see here, the source/dest buffer is written to as 2 particles were set to be emitted by `emit_subparticle` within the compute shader.

In addition, you can tell that at this step in the frame another `vkCmdCopyBuffer` is queued, working over the same buffers as in EID 56 and 57, and as you can see on the next copy to buffer, the value is reset.

The compute shaders being dispatched at this point are the parent's `particles.glsl` shaders; for additional context, the child runs its particles compute and copy-particles compute before the parent dispatches. This is important because the child shader runs at the beginning of the compute step in the frame, and relies on the buffer not having a 0 value in order to reset particles.

As demonstrated above, the "flushing" of particles is done on the first part of the compute pass with the amount to reset set on the previous frame.
It is also possible that accumulation of fractional delta values, when using non-integer lifetimes, can allow particles to sometimes be emitted, since the write to the buffer saying new particles were created can happen at the end; it just isn't likely while using integer lifetime values for the parent. However, using fractional lifetime values and then changing the lifetime to an integer amount can retain enough fractional accumulation that the emission trigger occurs at the end of the compute stage, making it appear as if the bug is sometimes present and sometimes not.
As far as a fix goes, my thought is that in particles storage you'd need to read the particle count value from the emission buffer and only write to that buffer if the particle count is less than 0, although I am not sure if that is just a band-aid solution. As such, I think a fix may warrant a discussion (although I have tested locally that it does indeed work for the test case).
### Steps to reproduce
Create a GPU particles parent node
set its FPS to 60
Create a GPU particles child node
Set as the child
retain its FPS of 30.
See how no particles are generated
### Minimal reproduction project (MRP)
[particlebugreport.zip](https://github.com/user-attachments/files/17334214/particlebugreport.zip)
| bug,topic:particles | low | Critical |
2,579,871,530 | pytorch | _register_as_effectful_op_temporarily in torch/_ops.py appears to remove any previously registered effects | ### ๐ Describe the bug
I don't have simple reproduction steps for this problem but I'm trying to use a C++ custom op with a custom C++ class type argument (`torch._C.ScriptObject`) with the inductor.
I run into this assert initially:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <torch._higher_order_ops.effects.WithEffects object at 0x73dffa0f3bb0>
token = FakeTensor(..., size=(0,), dtype=torch.bool), op = <OpOverload(op='_C.gptq_marlin_24_gemm', overload='default')>
args = (FakeTensor(..., device='cuda:0', size=(1, 128), dtype=torch.float16), FakeTensor(..., device='cuda:0', size=(4, 1024)..., size=(256,), dtype=torch.int32), <torch._library.fake_class_registry.FakeScriptObject object at 0x73dfd8cbda50>, ...)
kwargs = {}
def __call__(
self,
token,
op: OpType,
*args: Tuple[Any, ...],
**kwargs: Dict[str, Any],
) -> Tuple[Any, ...]:
assert isinstance(op, (torch._ops.HigherOrderOperator, torch._ops.OpOverload))
assert not has_aliasing(op), "Ops with aliasing is not supported"
> assert has_effects(op, args, kwargs)
E AssertionError: While executing %with_effects : [num_users=2] = call_function[target=torch.ops.higher_order.with_effects](args = (%_make_token_default, _C.gptq_marlin_24_gemm.default, %arg1_1, %arg6_1, %arg5_1, %arg4_1, %arg3_1, %arg2_1, 1, 512, 128), kwargs = {})
E Original traceback:
E File "/home/bnell/nm-vllm-new/tests/kernels/test_marlin_gemm.py", line 267, in marlin_24_gemm_tester
E return ops.gptq_marlin_24_gemm(
E File "/home/bnell/nm-vllm-new/vllm/_custom_ops.py", line 34, in wrapper
E return fn(*args, **kwargs)
E File "/home/bnell/nm-vllm-new/vllm/_custom_ops.py", line 297, in gptq_marlin_24_gemm
E return torch.ops._C.gptq_marlin_24_gemm(a, b_q_weight, b_meta, b_scales,
../pt24/lib/python3.10/site-packages/torch/_higher_order_ops/effects.py:80: AssertionError
```
While debugging, I noticed that `has_effects()` returns `True` for this op until `_register_as_effectful_op_temporarily` is called on it at some point. It also appears that calling `has_effects()` can modify `SIDE_EFFECTS` via `get_effect_key()`.
Looking at the code for that function, it looks like it always removes the op from the `SIDE_EFFECTS` dict on exit, even if the op was in there before `_register_as_effectful_op_temporarily` was called. If I hack `_register_as_effectful_op_temporarily` to keep pre-existing entries in the `SIDE_EFFECTS` dict, I get past the error above but hit a later failure in the inductor.
```
@contextlib.contextmanager
def _register_as_effectful_op_temporarily(self):
    from torch._higher_order_ops.effects import (
        _EffectType,
        _register_effectful_op,
        SIDE_EFFECTS,
    )

    try:
        if self not in SIDE_EFFECTS:
            _register_effectful_op(self, _EffectType.ORDERED)
        yield
    finally:
        if self in SIDE_EFFECTS:  # <--- this seems to delete self even if it was registered already?
            del SIDE_EFFECTS[self]
```
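One possible shape for a fix (a sketch only, using a stand-in dict rather than the real torch internals) is to remember whether the op was already registered, and only remove it on exit if this context manager was the one that added it:

```python
import contextlib

SIDE_EFFECTS = {}  # stand-in for torch._higher_order_ops.effects.SIDE_EFFECTS

@contextlib.contextmanager
def register_temporarily(op):
    # only unregister on exit if *we* registered the op here,
    # so a pre-existing registration survives the context manager
    added = op not in SIDE_EFFECTS
    if added:
        SIDE_EFFECTS[op] = "ORDERED"
    try:
        yield
    finally:
        if added and op in SIDE_EFFECTS:
            del SIDE_EFFECTS[op]

SIDE_EFFECTS["my_op"] = "ORDERED"  # simulate a prior registration
with register_temporarily("my_op"):
    pass
assert "my_op" in SIDE_EFFECTS  # prior registration is preserved

with register_temporarily("other_op"):
    assert "other_op" in SIDE_EFFECTS
assert "other_op" not in SIDE_EFFECTS  # temporary one is cleaned up
```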
cc @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @laithsakka , @zou3519
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0+cu124
[pip3] torchaudio==2.5.0.dev20240919+cu121
[pip3] torchvision==0.20.0.dev20240919+cu121
[pip3] triton==3.0.0
[conda] Could not collect | triaged,module: custom-operators,oncall: pt2,module: higher order operators,module: pt2-dispatcher | low | Critical |
2,579,885,146 | deno | Installing a package with multiple bin entries | I'm installing `pyright`, and it works really well, but it's supposed to install two bin entries and Deno installs only first.
https://github.com/microsoft/pyright/blob/main/packages/pyright/package.json
```json
"bin": {
"pyright": "index.js",
"pyright-langserver": "langserver.index.js"
}
```
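For reference, npm's `bin` field is either a single string (installed under the package name) or a map where every key should become an executable. A small Python sketch of that expansion (a hypothetical helper, not Deno's actual code):

```python
import json

def bin_entries(package_json: str) -> dict:
    # "bin" may be a string (one executable named after the package)
    # or a map; every key in the map should become an executable shim
    pkg = json.loads(package_json)
    bin_field = pkg.get("bin", {})
    if isinstance(bin_field, str):
        return {pkg["name"]: bin_field}
    return dict(bin_field)

pkg = ('{"name": "pyright", "bin": {"pyright": "index.js", '
       '"pyright-langserver": "langserver.index.js"}}')
assert bin_entries(pkg) == {
    "pyright": "index.js",
    "pyright-langserver": "langserver.index.js",
}
```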
Workaround is to create `bin/pyright-langserver` manually with
```sh
#!/bin/sh
exec deno run --allow-all --no-config --no-lock 'npm:pyright/langserver.index.js' "$@"
```
Can this be fixed?
Version: Deno 2.0.0 | bug,install | low | Minor |
2,579,917,469 | ollama | Fine-tuned Llama 3.2 1B safe_serialized: Error: json: cannot unmarshal array into Go struct field .model.merges of type string | ### What is the issue?
Modelfile:
```
FROM ./model
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.2
PARAMETER top_p 0.9
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
```
./model:
```
model-00001-of-00003.safetensors
config.json
generation_config.json
model-00002-of-00003.safetensors
model-00003-of-00003.safetensors
model.safetensors.index.json
special_tokens_map.json
tokenizer_config.json
tokenizer.json
```
Command:
```
ollama create llama3.2-1B -f ./Modelfile
```
Error:
```
transferring model data 100%
converting model
Error: json: cannot unmarshal array into Go struct field .model.merges of type string
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12 | bug,create | medium | Critical |
2,579,923,155 | flutter | Remove software rendering toggles on iOS. | AFAICT, there is no PList flag but there is a command line flag used thats read on iOS. From @jonahwilliams, we exposed software rendering as a faster way to render on simulators when we only had an OpenGL backend and OpenGL on the simulators was a software implementation on macOS. We have since switched to Metal and the minimum requirements specify Metal on macOS.
Since there never was a PList flag, its hard to so how or if this is being used. We should clean up the entries. | P2,c: tech-debt,team-engine,triaged-engine | low | Minor |
2,579,933,457 | flutter | MaterialPage and Page Equality Broken in Navigator Comparison | ### Steps to reproduce
The Navigator widget's `pages` property accepts a list of pages. The issue is that the internal logic doesn't compare Page equality correctly.
This results in all the pages in the navigation stack rebuilding, which is bad for performance.
**Example**

In this example, you can toggle between recreating the page list versus maintaining the instances of the page models. When the pages are recreated, every route is rebuilt when navigating. Watch the console for duplicate build print statements.
**Console Log**
> Add New Page 3
>
> Rebuild Page 3
> Rebuild Page 2
> Rebuild Page 1
> Rebuild Page 0
**Page Equality Tests**
Tests demonstrating that Page equality is only preserved by using the same instance. Even Pages with the exact same properties are not equal.
<details><summary>Code</summary>
<p>
```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
void main() {
group('MaterialPage Equality Tests', () {
test('MaterialPages with same properties should be equal', () {
final page1 = MaterialPage(
key: const ValueKey('test'),
name: 'TestPage',
arguments: {'id': 1},
child: const Text('Test'),
);
final page2 = MaterialPage(
key: const ValueKey('test'),
name: 'TestPage',
arguments: {'id': 1},
child: const Text('Test'),
);
expect(page1 == page2, isTrue,
reason:
'MaterialPages with identical properties should be considered equal');
});
test('List of MaterialPages considers new instances as different', () {
final List<Page> originalPages = [
MaterialPage(
key: const ValueKey('page1'),
name: 'Page1',
child: const Text('Page 1'),
),
MaterialPage(
key: const ValueKey('page2'),
name: 'Page2',
child: const Text('Page 2'),
),
];
final List<Page> newPages = [
MaterialPage(
key: const ValueKey('page1'),
name: 'Page1',
child: const Text('Page 1'),
),
MaterialPage(
key: const ValueKey('page2'),
name: 'Page2',
child: const Text('Page 2'),
),
];
expect(listEquals(originalPages, newPages), isTrue,
reason:
'Lists with new instances of MaterialPages are incorrectly considered different');
});
});
}
```
</p>
</details>
### Expected results
The key expectation is that MaterialPage instances should be considered equal if they have the same key and essential properties, even if they are separate instances. This expectation aligns with how Flutter handles widget rebuilding in other contexts. For example, when a StatelessWidget or StatefulWidget is rebuilt with the same key and type, Flutter optimizes the rebuild process, understanding that the underlying structure remains the same.
In the context of navigation, this behavior is particularly important. When updating the navigation stack, developers often recreate the list of pages, perhaps adding or removing pages based on the application's state. The expectation is that if a MaterialPage in the new list has the same key and essential properties (like name, arguments, and the type of the child widget) as a page in the existing list, it should be treated as the same page. This would allow the Navigator to optimize its update process, avoiding unnecessary rebuilds of routes that haven't fundamentally changed.
The use of keys in Flutter is generally a signal for maintaining identity across rebuilds. When a developer assigns a key to a MaterialPage, they are essentially saying, "This page represents the same logical screen or route, even if I'm creating a new instance of it." The navigation system should respect this intent, using the key as the primary means of identifying and comparing pages.
I expect the Navigator to treat these two MaterialPages as the same.
```dart
MaterialPage(
key: ValueKey('page_1'),
child: TestPage(title: 'Page 1'),
),
```
should be treated as the same page by the Navigator as
```dart
MaterialPage(
key: ValueKey('page_1'),
child: TestPage(title: 'Page 1'),
),
```
The keys are the same!
### Actual results
The internal logic of the Navigator does not respect the Page's key for equality. This happens because when a new MaterialPage is created, even with the same properties, it gets a new identity in memory; if the Navigator does not find a matching LocalKey, this new identity causes it to be treated as a different page.
This means that even though two MaterialPages are equal, the Navigator might still treat them as different due to their memory identity and no matching LocalKey being found.
This doesn't match the documented or expected behavior that says pages are compared via Keys.
`navigator.dart`
https://github.com/flutter/flutter/blob/82ebb74c6411afc55a8b0937535d6baeb03ea826/packages/flutter/lib/src/widgets/navigator.dart#L4173
```dart
if (
nextPage.key == null ||
!pageKeyToOldEntry.containsKey(nextPage.key) ||
!pageKeyToOldEntry[nextPage.key]!.canUpdateFrom(nextPage)
) {
// Create a new entry
} else {
// Update existing entry
final _RouteEntry matchingEntry = pageKeyToOldEntry.remove(nextPage.key)!;
matchingEntry.route._updateSettings(nextPage);
newHistory.add(matchingEntry);
}
```
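The intent of that snippet can be sketched outside Dart. This is a hedged Python illustration of key-based page diffing (function and variable names are mine, not Flutter's): a page whose key matches an old entry reuses that entry, while a keyless or unmatched page creates a new one.

```python
# Hypothetical sketch of the Navigator's key-based page matching.
# Names like diff_pages/old_entries are illustrative, not Flutter API.

def diff_pages(old_entries, new_pages):
    """old_entries: dict key -> entry; new_pages: list of (key, name)."""
    key_to_old = dict(old_entries)  # copy: mimics pageKeyToOldEntry
    history = []
    for key, name in new_pages:
        if key is not None and key in key_to_old:
            # Reuse the existing route entry, just update its settings.
            entry = key_to_old.pop(key)
            entry["name"] = name
            history.append(("reused", key))
        else:
            # No key match -> a brand-new route entry (and a rebuild).
            history.append(("created", key))
    return history

old = {"page1": {"name": "Page1"}}
print(diff_pages(old, [("page1", "Page1"), ("page2", "Page2")]))
# -> [('reused', 'page1'), ('created', 'page2')]
```

If this were the whole story, two MaterialPages sharing a ValueKey would be matched; the report suggests the additional `canUpdateFrom`/identity checks are where the match gets rejected.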
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Navigation Rebuild Test',
home: const NavigationRebuildTest(),
);
}
}
class NavigationRebuildTest extends StatefulWidget {
const NavigationRebuildTest({super.key});
@override
State<NavigationRebuildTest> createState() => _NavigationRebuildTestState();
}
class _NavigationRebuildTestState extends State<NavigationRebuildTest> {
List<Page<dynamic>> currentPages = [];
bool recreateList = true;
@override
void initState() {
super.initState();
currentPages = [
MaterialPage(
key: const ValueKey('page1'),
child: const TestPage(title: 'Page 1'),
),
];
}
void _addPage() {
setState(() {
if (recreateList) {
currentPages = [
...currentPages.map((page) {
if (page is MaterialPage) {
return MaterialPage(
key: page.key,
name: page.name,
arguments: page.arguments,
fullscreenDialog: page.fullscreenDialog,
child: page.child,
);
}
return page;
}),
MaterialPage(
key: ValueKey('page${currentPages.length + 1}'),
child: TestPage(title: 'Page ${currentPages.length + 1}'),
),
];
} else {
currentPages = [
...currentPages,
MaterialPage(
key: ValueKey('page${currentPages.length + 1}'),
child: TestPage(title: 'Page ${currentPages.length + 1}'),
),
];
}
});
}
void _toggleRecreateList(bool value) {
setState(() {
recreateList = value;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Navigation Rebuild Test'),
actions: [
Row(
children: [
const Text('Recreate List'),
Switch(
value: recreateList,
onChanged: _toggleRecreateList,
),
],
),
],
),
body: Navigator(
pages: currentPages,
onPopPage: (route, result) {
if (!route.didPop(result)) {
return false;
}
setState(() {
currentPages.removeLast();
});
return true;
},
),
floatingActionButton: FloatingActionButton(
onPressed: _addPage,
child: const Icon(Icons.add),
),
);
}
}
class TestPage extends StatelessWidget {
final String title;
const TestPage({super.key, required this.title});
@override
Widget build(BuildContext context) {
debugPrint('Building $title'); // This print statement helps track rebuilds
return Scaffold(
appBar: AppBar(title: Text(title)),
body: Center(child: Text(title)),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4317], locale en-US)
• Flutter version 3.24.3 on channel stable at D:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\searc\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[X] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.9.1)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.9.34616.47
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[√] VS Code (version 1.91.0)
• VS Code at C:\Users\searc\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.98.0
[√] Connected device (3 available)
• ONEPLUS A6013 (mobile) • 26bd994b • android-arm64 • Android 11 (API 30)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4317]
• Edge (web) • edge • web-javascript • Microsoft Edge 129.0.2792.79
[√] Network resources
• All expected network resources are available.
```
</details>
| framework,f: material design,f: routes,P3,team-framework,triaged-framework | medium | Critical |
2,579,934,807 | PowerToys | Advanced Paste Suggestion | ### Description of the new feature / enhancement
Much like Plain Paste, I would love it if Advanced Paste also had a Paste as Keystrokes option.
### Scenario when this would be used?
Pasting into forms/locations where pasting is disabled.
### Supporting information
Code I currently use via AutoHotkey:
```ahk
^#Numpad1::
SendRaw %Clipboard%
``` | Needs-Triage | low | Minor |
2,579,937,566 | PowerToys | powertoys window title background | ### Description of the new feature / enhancement
The configuration window title bar always has a black background for me, as if it were preset/hardcoded :/ It would look better and be easier to drag if it followed the color set in the Windows theme/config.
### Scenario when this would be used?
It is hard to pick out the PowerToys config window to drag it around the screen when using Windows dark mode.
### Supporting information

| Needs-Triage | low | Minor |
2,579,957,002 | vscode | mergetool conflict resolution auto applying combination as manual resolution |
Type: <b>Bug</b>
Somewhat recently (I'd guess in the last 1-2 weeks; I've been doing a lot of history rewriting lately), VSCode's mergetool started auto-applying what I think are combinations as manual resolutions.
Data prep:
- Create a simple PowerShell script file (other files likely work โ I just noticed the problem with PowerShell)
- In the PowerShell script, create a function and call that function
- Add and commit the new file. I used the git cli.
- In the PowerShell script, create a second function and call that function, adding the new function call after the first
- Add and commit the change
- Start an interactive rebase
- Edit the first commit
- In the PowerShell script, add a parameter to the first (only) function
- In the PowerShell script, use the new parameter in the first function
- Commit this change by amending it to the first commit
- Continue the rebase
- You should get a merge conflict
Minimal-ish example:
- Open VSCode as a mergetool (I have it specified in my global git config)
- The first conflict doesn't make sense to me. There seems to be a conflict between the changes to the body of the first function (LOCAL) and the entire second function (REMOTE).
- The second conflict is between the second function (REMOTE) and the lack of a second function (LOCAL). This one makes sense to me.
- The third conflict is between the new parameter to the first function (LOCAL) and the call to the second function (REMOTE). This one also makes sense to me.
- In the result pane, the first conflict shows as manually resolved when I haven't made any changes.
- Clicking accept combination for the second conflict on the REMOTE side duplicates the second function.
To help with the example, here's what the code should end up looking like (I used kdiff to resolve the merge conflicts):
```
function Invoke-TestWrite {
param(
[string]$testParam
)
Write-Output "Hello, World!"
Write-Output "Hello, $testParam!"
}
function Invoke-SecondFunction{
Read-Host -Prompt "Hit enter to continue"
}
Invoke-TestWrite -testParam "VSCode"
Invoke-SecondFunction
```
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-10900 CPU @ 2.80GHz (20 x 2808)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.72GB (8.39GB free)|
|Process Argv|. --crash-reporter-id c583746a-5e00-4137-9bf9-b978f05fe45c|
|Screen Reader|no|
|VM|50%|
</details><details><summary>Extensions (19)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
gitlens|eam|15.6.0
EditorConfig|Edi|0.16.4
copilot|Git|1.238.0
copilot-chat|Git|0.21.1
csdevkit|ms-|1.11.14
csharp|ms-|2.50.25
vscode-dotnet-runtime|ms-|2.2.0
vscodeintellicode-csharp|ms-|2.1.11
cmake-tools|ms-|1.19.52
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
powershell|ms-|2024.2.2
psi-header|psi|1.23.1
markdown-preview-enhanced|shd|0.8.14
rewrap|stk|1.16.3
code-spell-checker|str|3.0.1
cmake|twx|0.0.17
material-theme|zhu|3.17.5
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyonecf:30548226
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
nativeloc2:31134642
wkspc-ranged-t:31151552
cf971741:31144450
defaultse:31146405
iacca1:31156133
notype1cf:31151524
showchatpanel:31153267
5fd0e150:31155592
```
</details>
<!-- generated by issue reporter --> | bug,merge-editor | low | Critical |
2,579,964,434 | ollama | ollama process uses 1 gb of memory when it's idle due to embedded runners | ### What is the issue?
```
user@magicbook14:~$ ollama --version
ollama version is 0.3.12
user@magicbook14:~$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
```
[Screenshot: process monitor showing the idle ollama process using ~1 GB of memory]
I'm pretty sure it's some kind of regression, because on previous versions I didn't notice such high memory usage when no models were loaded. The system it's running on is a laptop with an AMD Ryzen 7 APU - 5700; there is no discrete GPU, just the one integrated in the CPU.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.12 | feature request,linux | low | Minor |
2,580,009,503 | kubernetes | `system:monitoring` lacks access to kubelet /metrics endpoint | ### What happened?
Discussion began in https://github.com/kubernetes/enhancements/pull/4830#discussion_r1794149005 where it was identified that the [`system:monitoring` cluster role](https://github.com/kubernetes/kubernetes/blob/release-1.31/staging/src/k8s.io/apiserver/pkg/authentication/user/user.go#L73) does not allow access to the kubelet's /metrics and /metrics/slis endpoints
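For context, the kubelet authorizes requests to its /metrics endpoints through the `nodes/metrics` subresource, so the grant that `system:monitoring` lacks is roughly of this shape (a sketch for illustration, not the exact rule proposed in the KEP):

```yaml
# Sketch of the RBAC a monitoring identity needs for kubelet /metrics;
# illustrative only, not the exact change proposed for system:monitoring.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-metrics-reader
rules:
  - apiGroups: [""]
    resources: ["nodes/metrics"]
    verbs: ["get"]
```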
### What did you expect to happen?
Would have expected the test described [here](https://github.com/kubernetes/enhancements/pull/4830#discussion_r1794426008) to pass
### How can we reproduce it (as minimally and precisely as possible)?
Summarized [here](https://github.com/kubernetes/enhancements/pull/4830#discussion_r1794426008)
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# v1.31.0
```
</details>
### Cloud provider
<details>
GCP
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,priority/important-longterm,triage/accepted | low | Major |
2,580,029,981 | react-native | Inconsistent separator lines | ### Description
The thickness of the separator lines in FlatList is inconsistent on Android. Here's another issue related to the separator: [GitHub Issue](https://github.com/facebook/react-native/issues/36408).
<img width="261" alt="Captura de Tela 2024-10-10 às 20 08 25" src="https://github.com/user-attachments/assets/b8cae2cb-d774-49d2-a145-f05b08d6b428">
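One plausible explanation (my assumption, not confirmed in this issue) is sub-pixel snapping: a 1 dp (or hairline) separator rarely lands exactly on physical pixel boundaries, so its rendered thickness depends on where it falls in the list. A rough Python sketch of the effect, with hypothetical numbers:

```python
# Why a thin separator can render at inconsistent thicknesses: its top and
# bottom edges snap to physical pixels independently, so the rounded height
# varies with position. All numbers here are hypothetical, not from RN.

def rendered_px(offset_dp, height_dp, dpr):
    top = round(offset_dp * dpr)
    bottom = round((offset_dp + height_dp) * dpr)
    return bottom - top

dpr = 2.625          # a common Android devicePixelRatio
item_height = 44.3   # hypothetical row height in dp

thicknesses = {rendered_px(i * item_height, 1, dpr) for i in range(50)}
print(sorted(thicknesses))  # more than one distinct physical thickness
```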
### Steps to reproduce
Add a separator to a FlatList with hairlineWidth or 1.
### React Native Version
0.73.6
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
Expo 51 from snack.
```
### Stacktrace or Logs
```text
import React from 'react';
import {
SafeAreaView,
View,
FlatList,
StyleSheet,
Text,
StatusBar,
} from 'react-native';
const DATA = [...Array(100).keys()]
const Separator = () => <View style={{ height: StyleSheet.hairlineWidth, backgroundColor: '#000' }} />
const Item = ({title}) => (
<View style={styles.item}>
<Text style={styles.title}>{title}</Text>
</View>
);
const App = () => {
return (
<SafeAreaView style={styles.container}>
<FlatList
data={DATA}
renderItem={({item}) => <Item title={item} />}
ItemSeparatorComponent={Separator}
/>
</SafeAreaView>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
marginTop: StatusBar.currentHeight || 0,
},
item: {
backgroundColor: 'white',
padding: 12,
},
title: {
fontSize: 20,
},
});
export default App;
```
### Reproducer
https://snack.expo.dev/@diegolmello/f6c27a
### Screenshots and Videos
<img width="261" alt="Captura de Tela 2024-10-10 às 20 08 25" src="https://github.com/user-attachments/assets/5058dc6b-4e4c-4894-afa6-8517ffcbbb67">
| Issue: Author Provided Repro,Needs: Author Feedback,Newer Patch Available | low | Minor |
2,580,034,698 | deno | Adding WebPack to NestJs App - Service ran out of memory and aborted && Uncaught TypeError: Cannot read properties of undefined (reading 'WORKER_DATA') | Version: Deno 2.0.0
I wanted to use webpack HMR for my NestJS app:
```js
// webpack-hmr.config.js
// eslint-disable-next-line @typescript-eslint/no-require-imports
const nodeExternals = require('webpack-node-externals');
// eslint-disable-next-line @typescript-eslint/no-require-imports
const { RunScriptWebpackPlugin } = require('run-script-webpack-plugin');
// eslint-disable-next-line @typescript-eslint/no-require-imports
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');
module.exports = function (options, webpack) {
return {
...options,
entry: [
'webpack/hot/poll?100',
...(Array.isArray(options.entry) ? options.entry : [options.entry]),
],
externals: [
nodeExternals({
allowlist: ['webpack/hot/poll?100'],
}),
],
plugins: [
...options.plugins,
new webpack.HotModuleReplacementPlugin(),
new webpack.WatchIgnorePlugin({
paths: [/\.js$/, /\.d\.ts$/],
}),
new RunScriptWebpackPlugin({
name: options.output.filename,
autoRestart: false,
}),
new ForkTsCheckerWebpackPlugin({
async: true,
typescript: {
configFile: 'tsconfig.build.json',
mode: 'write-references',
},
}),
],
};
};
```
Then, when I run `Task start:dev nest build --webpack --webpackPath webpack-hmr.config.js --watch`:
The process starts normally: ` Info Webpack is building your sources...`
Then it logs several errors:
```
error: Uncaught TypeError: Cannot read properties of undefined (reading 'WORKER_DATA')
at getRpcWorkerData (file:///Users/thehoracle/Documents/webdev/nestjs/nest-fundamentals/node_modules/.pnpm/fork-ts-checker-webpack-plugin@9.0.2_typescript@5.3.3_webpack@5.94.0/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-worker.js:77:34)
at Object.<anonymous> (file:///Users/thehoracle/Documents/webdev/nestjs/nest-fundamentals/node_modules/.pnpm/fork-ts-checker-webpack-plugin@9.0.2_typescript@5.3.3_webpack@5.94.0/node_modules/fork-ts-checker-webpack-plugin/lib/typescript/worker/lib/worker-config.js:5:45)
at Object.<anonymous> (file:///Users/thehoracle/Documents/webdev/nestjs/nest-fundamentals/node_modules/.pnpm/fork-ts-checker-webpack-plugin@9.0.2_typescript@5.3.3_webpack@5.94.0/node_modules/fork-ts-checker-webpack-plugin/lib/typescript/worker/lib/worker-config.js:7:4)
at Module._compile (node:module:748:34)
at Object.Module._extensions..js (node:module:767:10)
at Module.load (node:module:665:32)
at Function.Module._load (node:module:537:12)
at Module.require (node:module:684:19)
at require (node:module:808:16)
at Object.<anonymous> (file:///Users/thehoracle/Documents/webdev/nestjs/nest-fundamentals/node_modules/.pnpm/fork-ts-checker-webpack-plugin@9.0.2_typescript@5.3.3_webpack@5.94.0/node_modules/fork-ts-checker-webpack-plugin/lib/typescript/worker/lib/typescript.js:4:25)
```
```
RpcExitError: Process 10138 exited with code 1
Issues checking service aborted - probably out of memory. Check the `memoryLimit` option in the ForkTsCheckerWebpackPlugin configuration.
If increasing the memory doesn't solve the issue, it's most probably a bug in the TypeScript.
RpcExitError: Process 10139 exited with code 1
Issues checking service aborted - probably out of memory. Check the `memoryLimit` option in the ForkTsCheckerWebpackPlugin configuration.
If increasing the memory doesn't solve the issue, it's most probably a bug in the TypeScript.
```
This does not happen when I run `pnpm run start:dev`, only with Deno.
| bug,node compat | low | Critical |
2,580,034,727 | opencv | How to contribute to OpenCV University? | I have developed https://opencv.onweb.dev/ an easy to use, zero installation platform for experimenting with OpenCV and Python in the browser. It is based on WebAssembly and hence needs almost zero cloud infrastructure/resources to run. Another advantage is that it makes it possible to learn about and to experiment with OpenCV on platforms such as Chromebooks and on Android mobiles and tablets.
I would like to contribute this to OpenCV University; however, I cannot find the repository for the site... | feature | low | Minor |
2,580,042,494 | godot | Track of Current Animation Crash | ### Tested versions
4.3.stable
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 32.0.15.6109) - AMD Ryzen 7 6800H with Radeon Graphics (16 Threads)
### Issue description
If you create a property track in an AnimationPlayer which tracks its current animation, it will crash the engine, and if you save it, it will crash the engine on every start, or at least when you try to open this AnimationPlayer.
https://github.com/user-attachments/assets/bee6354d-31d5-43bd-a6bc-acce66a60728
### Steps to reproduce
Create Animation Player (AP)
Create Animation in AP
Add a new Property Track on Current Animation of the AP.
Add any key to Property Track
Save file
Crash
If you try to open the AP, it will crash.
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/17335329/test.zip) | bug,topic:editor,crash,topic:animation | low | Critical |
2,580,109,971 | pytorch | Graph break on wait() of Torchrec's Awaitable instance | ### ๐ Describe the bug
When we call wait() on a TorchRec Awaitable instance, it causes a graph break. The wait operation should already be supported, so we need to support this case as well.
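For reference, the shape of the code involved is TorchRec's Awaitable: `wait()` delegates to `_wait_impl()`, which is where the collective's `req.wait()` happens and where Dynamo graph-breaks. A minimal torch-free sketch of that shape (the `Awaitable`/`wait`/`_wait_impl` names mirror `torchrec.distributed.types.Awaitable`; everything else is illustrative):

```python
# Minimal sketch of TorchRec's Awaitable pattern (no torch required).
# wait() delegates to _wait_impl(), the subclass hook where the blocking
# collective wait (e.g. myreq.req.wait()) actually happens.
from abc import ABC, abstractmethod

class Awaitable(ABC):
    @abstractmethod
    def _wait_impl(self):
        ...

    def wait(self):
        # Subclass hook performs the blocking work.
        return self._wait_impl()

class AllToAllAwaitable(Awaitable):
    """Illustrative stand-in for the comm-op awaitable in comm_ops.py."""
    def __init__(self, result):
        self._result = result

    def _wait_impl(self):
        # Real code would block on the ProcessGroup work handle here.
        return self._result

a = AllToAllAwaitable([1.0, 2.0])
print(a.wait())  # -> [1.0, 2.0]
```

A fix would presumably need Dynamo to trace through this `_wait_impl()` indirection down to the underlying work handle's wait, which is already handled for plain collective calls.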
### Error logs
```
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] Graph break in user code at /data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torchrec/distributed/comm_ops.py:1983
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] Reason: Unsupported: call_method UserDefinedObjectVariable(instancemethod) __call__ [] {}
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] User code traceback:
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/scripts/jchae/torch_playground/playground.py", line 111, in wait_fn
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] res = awaitable.wait()
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torchrec/distributed/types.py", line 335, in wait
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] ret: W = self._wait_impl()
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torchrec/distributed/comm_ops.py", line 121, in _wait_impl
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] ret = self.wait_function.apply(self.pg, self, self.dummy_tensor)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torchrec/distributed/comm_ops.py", line 1983, in forward
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] myreq.req.wait()
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] Traceback (most recent call last):
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 616, in wrapper
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return inner_fn(self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 1610, in CALL_FUNCTION
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.call_function(fn, args, {})
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 838, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 400, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return super().call_function(tx, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 339, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return super().call_function(tx, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 844, in inline_user_function_return
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 3000, in inline_call
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return cls.inline_call_(parent, func, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 3128, in inline_call_
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] tracer.run()
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 991, in run
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] while self.step():
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 903, in step
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.dispatch_table[inst.opcode](self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 616, in wrapper
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return inner_fn(self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 1610, in CALL_FUNCTION
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.call_function(fn, args, {})
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 838, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 400, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return super().call_function(tx, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 339, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return super().call_function(tx, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 844, in inline_user_function_return
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 3000, in inline_call
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return cls.inline_call_(parent, func, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 3128, in inline_call_
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] tracer.run()
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 991, in run
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] while self.step():
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 903, in step
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.dispatch_table[inst.opcode](self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 616, in wrapper
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return inner_fn(self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 1610, in CALL_FUNCTION
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.call_function(fn, args, {})
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 838, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/misc.py", line 1038, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return self.obj.call_method(tx, self.name, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/misc.py", line 781, in call_method
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return self.call_apply(tx, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/misc.py", line 730, in call_apply
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return variables.UserFunctionVariable(fn, source=source).call_function(
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 339, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return super().call_function(tx, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/functions.py", line 111, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 844, in inline_user_function_return
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 3000, in inline_call
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return cls.inline_call_(parent, func, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 3128, in inline_call_
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] tracer.run()
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 991, in run
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] while self.step():
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 903, in step
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.dispatch_table[inst.opcode](self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 616, in wrapper
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return inner_fn(self, inst)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 1610, in CALL_FUNCTION
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.call_function(fn, args, {})
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/symbolic_convert.py", line 838, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/lazy.py", line 161, in realize_and_forward
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return getattr(self.realize(), name)(*args, **kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/user_defined.py", line 926, in call_function
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return self.call_method(tx, "__call__", args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/user_defined.py", line 794, in call_method
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] return super().call_method(tx, name, args, kwargs)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/variables/base.py", line 343, in call_method
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] unimplemented(f"call_method {self} {name} {args} {kwargs}")
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] File "/data/users/shuaiyang/fbsource/buck-out/v2/gen/fbcode/c58c62e0a86c2716/scripts/jchae/torch_playground/__playground__/playground#link-tree/torch/_dynamo/exc.py", line 304, in unimplemented
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] raise Unsupported(msg, case_name=case_name)
[rank1]:V1009 23:16:08.229000 3761618 /data/users/shuaiyang/fbsource/fbcode/caffe2/torch/_dynamo/symbolic_convert.py:392] [0/0] [__graph_breaks] torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(instancemethod) __call__ [] {}
```
### Minified repro
```python
import functools
import logging
import multiprocessing
import os
import unittest
import unittest.mock
from dataclasses import dataclass
from typing import Callable, List, Optional, Union
import torch
import torch.distributed as dist
import torchrec.distributed.comm_ops as comm_ops
from torch import nn
from torch.distributed.distributed_c10d import GroupMember
from torchrec.test_utils import get_free_port
torch._logging.set_logs(
aot_graphs=True,
graph_code=True,
graph_breaks=True,
trace_source=True,
recompiles=True,
inductor=logging.DEBUG,
dynamo=logging.DEBUG,
)
@dataclass
class _CompileConfig:
# backend is None means no compilation
backend: Optional[str] = "inductor"
fullgraph: bool = True
skip_sync_backward: bool = False
skip_compile_backward: bool = False
test_compiled_with_noncompiled_ranks: bool = False
def compile_config_to_fn_transform(
compile_config: Optional[_CompileConfig],
# pyre-ignore
) -> Callable:
if compile_config is None:
return lambda x: x
return functools.partial(
torch.compile,
backend=compile_config.backend,
fullgraph=compile_config.fullgraph,
dynamic=True,
)
def _test_alltoallv(
rank: int,
world_size: int,
backend: str,
compile_config: Optional[_CompileConfig] = None,
specify_pg: bool = False,
) -> None:
dist.init_process_group(rank=rank, world_size=world_size, backend=backend)
pg = GroupMember.WORLD
assert pg is not None
device = torch.device(f"cuda:{rank}")
torch.cuda.set_device(device)
B_global = 2
D0 = 8
D1 = 9
input_embedding0 = torch.rand(
(B_global, D0),
device=device,
requires_grad=True,
)
input_embedding0 = torch.tensor(
[[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16]],
device=device,
requires_grad=True,
dtype=torch.float32,
)
input_embedding1 = torch.rand(
(B_global, D1),
device=device,
requires_grad=True,
)
input_embedding1 = torch.tensor(
[[11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28]],
device=device,
requires_grad=True,
dtype=torch.float32,
)
input_embeddings = [input_embedding0, input_embedding1]
out_split = [17, 17]
# pyre-ignore
def fn(*args, **kwargs):
awaitable = comm_ops.alltoallv(
*args, **kwargs
) # the type of awaitable is torchrec.distributed.comm_ops.Request
return awaitable
@torch.compile(backend="inductor")
def wait_fn(awaitable):
res = awaitable.wait()
res1 = res[0] + res[1]
return res1
# fn_transform = compile_config_to_fn_transform(compile_config)
with unittest.mock.patch(
"torch._dynamo.config.skip_torchrec",
False,
):
v_embs_out_awaitable = fn(
input_embeddings, out_split=out_split, group=pg if specify_pg else None
)
v_embs_out = wait_fn(v_embs_out_awaitable)
print(f"bbbbbbbbbbbbbbbbbbbb {(v_embs_out)}")
torch.sum(v_embs_out)
# res = torch.cat(v_embs_out, dim=1).cpu()
# assert tuple(res.size()) == (1, 34)
dist.destroy_process_group()
def test_alltoallv(
# specify_pg: bool,
# test_compiled_with_noncompiled_ranks: bool,
) -> None:
_run_multi_process_test(
world_size=2,
backend="nccl",
# pyre-ignore [6]
callable=_test_alltoallv,
# compile_config=_CompileConfig(
# test_compiled_with_noncompiled_ranks=test_compiled_with_noncompiled_ranks
# ),
compile_config=None,
specify_pg=None,
)
def _run_multi_process_test(
world_size: int,
backend: str,
callable: Callable[[], None],
# pyre-ignore
*args,
# pyre-ignore
**kwargs,
) -> None:
processes = []
ctx = multiprocessing.get_context("spawn")
for rank in range(world_size):
p = ctx.Process(
target=callable,
args=(
rank,
world_size,
backend,
*args,
),
kwargs=kwargs,
)
p.start()
processes.append(p)
for p in processes:
p.join()
assert p.exitcode == 0
if __name__ == "__main__":
os.environ["MASTER_ADDR"] = str("localhost")
os.environ["MASTER_PORT"] = str(get_free_port())
os.environ["GLOO_DEVICE_TRANSPORT"] = "TCP"
os.environ["NCCL_SOCKET_IFNAME"] = "lo"
test_alltoallv()
```
### Versions
nightly
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Critical |
2,580,111,537 | deno | Deno doesn't load file when importing jsr:x module | deno 2.0.0 (stable, release, aarch64-unknown-linux-gnu)
v8 12.9.202.13-rusty
typescript 5.6.2
https://github.com/MARCROCK22/deno-bug
Please see the seyfert.config.mjs and src/main.ts files.
For some reason, if a module imports a file that imports a jsr module, it doesn't work.
I don't get any error; the process doesn't even stop (it just stays alive?) | needs investigation,node compat,node resolution | low | Critical |
2,580,111,729 | opencv | `_InputArray` and its derivatives need a clean-up | ### System Information
All systems.
### Detailed description
`_InputArray` and its derivatives have different behaviour based on what kind of "container" they hold. The code makes many mistakes about how C++'s memory model works, which ultimately leads to tcmalloc crashes when testing a `std::vector` optimisation that I've been working on for libc++.
For example, the following snippet simultaneously treats `v` as `std::vector<uchar>*` and `std::vector<Vec2b>*` when `esz == 2`, which is impossible: `v` can only point to one of these types at a time.
```cpp
CV_Assert(!fixedSize() || len == ((std::vector<uchar>*)v)->size() / esz);
switch( esz )
{
case 1:
((std::vector<uchar>*)v)->resize(len);
break;
case 2:
((std::vector<Vec2b>*)v)->resize(len);
break;
```
Further, `_InputArray` can hold vectors of vectors of many types, but this section only seems to handle `vector<vector<uchar>>`.
-----
The crashes that I see are because OpenCV is erasing `vector<T>` to `vector<uchar>`. Up until now, `std::vector` has been implemented using three pointers (one that points to the beginning of the buffer, one that points to where the next element should be inserted, and one that acts as a sentinel for reallocation). This implementation stores where the allocated memory starts and ends, allowing OpenCV's type punning to accidentally work.
I'm working on a patch to `std::vector` that redesigns it to use one pointer and two integers (one for the size and one for the capacity). This means that the end boundary needs to be computed by `std::vector` during deallocation, and since we're working with `uchar`---which is smaller than the original type---the boundary calculation is substantially smaller than it should be. tcmalloc notices this, and forces a crash.
@zygoloid suggested having `_InputArray` store a pointer to a global object that dispatches to the member functions that implement the behaviour with the correct types. E.g.
```cpp
struct _InputArrayOps {
virtual Mat getMat_(void* data, int idx) = 0;
// ... other operations ...
};
// _InputArray operations for const std::vector<T>&
template<typename T> struct _InputArrayOps_vector : _InputArrayOps {
Mat getMat_(void* data, int idx) override {
CV_Assert( idx < 0 );
std::vector<T>& v = *static_cast<std::vector<T>*>(data);
return !v.empty() ? Mat(v.size(), traits::Type<T>::value + ACCESS_READ, (void*)v.data()) : Mat();
}
// ... other operations ...
};
template<typename T> inline _InputArrayOps_vector<T> _InputArrayOps_vector_instance;
// ... other supported types ...
class _InputArray {
public:
// ...
  template<typename T> _InputArray(const std::vector<T>& vec)
    : data(const_cast<std::vector<T>*>(&vec)), ops(&_InputArrayOps_vector_instance<T>) {}
// ...
Mat getMat_(int idx = -1) const { return ops->getMat_(data, idx); }
// ...
private:
void *data;
_InputArrayOps *ops;
};
```
I'm working on a prototype to see how this works with my new vector representation.
### Steps to reproduce
I can't provide steps to repro just yet, but will work with you until we can get something.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: core,priority: high,RFC | high | Critical |
2,580,155,189 | electron | [Bug]: Desktop capture for Retina (HiDPI) screens uses 1x resolution | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.2.0
### What operating system(s) are you using?
macOS
### Operating System Version
Sonoma 14.5
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
When using `getUserMedia` or `getDisplayMedia` to capture a Retina screen on macOS, the image or video should have the screen's full physical resolution.
### Actual Behavior
In practice, I'm getting image/video data with the screen's "logical" resolution, not its full resolution. For example, my current screen has a 3840 x 2160 resolution, and I'm using 2x scale mode, so the logical resolution is 1920 x 1080.
When I use `getUserMedia` or `getDisplayMedia` to capture the screen using a `<video>` tag or the `ImageCapture` class, I get a video or image that's 1920 x 1080 and visibly downscaled. Using `ImageCapture` in Chrome, I get a video/image that's the correct resolution.
When I was experimenting with this, I noticed that the video stream metadata (accessed from JS) does have the correct width and height properties, but attaching it to a `<video>` element gives it incorrect `videoWidth` and `videoHeight` values.
### Testcase Gist URL
https://gist.github.com/rf-figma/7bf241daa46134583b82175c6e93b8e9
### Additional Information
Note that I had trouble running this test case inside Fiddle because of OS permissions, so I ended up just running it from the command line.
Also, here's a CodePen you can run in Chrome to see the browser behavior: https://codepen.io/rf-figma/pen/vYoyMRg | platform/macOS,has-repro-gist,component/desktopcapturer,32-x-y | low | Critical |
2,580,189,472 | godot | Android Installation causes Not Responding when running in single window mode | ### Tested versions
Tested in 4.3 Stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1650 (NVIDIA; 32.0.15.6109) - AMD Ryzen 3 3100 4-Core Processor (8 Threads)
### Issue description

This should explain the issue I've mentioned
### Steps to reproduce
Step 1. Set window mode to single window mode
Step 2. Connect Android Device
Step 3. Click the Remote Debug Run Button
Step 4. Select Android Device and wait for it to install (Wireless Debugging)
### Minimal reproduction project (MRP)
Simple project with android export template installed and android build option selected | bug,platform:android,topic:porting,needs testing | low | Critical |
2,580,199,086 | godot | VehicleWheel3D Sinks Through Gaps Smaller Than Itself & VehicleBody3D Slides on Slopes | ### Tested versions
4.3-stable
I'm pretty sure this is the case across all Godot 4 versions. Not sure about Godot 3.
### System information
System info not needed.
### Issue description
When you drive a VehicleBody3D through a gap smaller than its wheels, the wheels sink down and get stuck, sometimes permanently, forcing you to restart the game.
When you drive a VehicleBody3D onto a slope and then brake, it's supposed to stay in place. Instead, it slides down until it reaches a perfectly flat surface, then loses momentum and stops.
I've thought of reporting these separately but since these nodes are related, I think it's fine.
Changing physics engine to Jolt or Rapier does not fix these issues.
### Steps to reproduce
1- Download the project and open it up.
2- Press play. You can drive the vehicle with WASD or Arrow keys and look around with the mouse.
3- Drive towards the higher slope right in front of you. Then stop driving. Vehicle slides down, which is not realistic. You can hold Space key while doing it to apply the brakes and watch it slowly slide down.
4- Drive towards the lower slopes and drive slowly over them. I don't know how to describe what will happen.
### Minimal reproduction project (MRP)
[vehiclebody_sliding_down.zip](https://github.com/user-attachments/files/17336343/vehiclebody_sliding_down.zip)
| discussion,topic:physics,topic:3d | low | Major |
2,580,206,613 | langchain | SqliteCache fails with ChatOpenAI.with_structured_output(method="json_schema") | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
from dotenv import load_dotenv
_ = load_dotenv()
class Limerick(BaseModel):
limerick: str
def main():
set_llm_cache(SQLiteCache())
llm = ChatOpenAI(model_name="gpt-4o-2024-08-06")
structured_llm = llm.with_structured_output(Limerick, method="json_schema")
for _ in range(2):
result = structured_llm.invoke([SystemMessage(content="Write a limerick")])
print(result)
if __name__ == "__main__":
main()
```
### Error Message and Stack Trace (if applicable)
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3022, in invoke
input = context.run(step.invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 818, in _generate_with_cache
return ChatResult(generations=cache_val)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatResult
generations.0
Input should be a valid dictionary or instance of ChatGeneration [type=model_type, input_value=Generation(text='{"lc": 1...id_tool_calls": []}}}}'), input_type=Generation]
For further information visit https://errors.pydantic.dev/2.9/v/model_type
### Description
The error occurs when using `ChatOpenAI(model_name="gpt-4o-2024-08-06")` with `method="json_schema"`. The issue is not seen with `method="json_mode"`.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Mon Aug 12 08:48:58 UTC 2024
> Python Version: 3.12.7 (main, Oct 1 2024, 22:28:49) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.133
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.35
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.9
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
| ๐ค:bug,investigate | low | Critical |
2,580,207,863 | vscode | VS Code is ignoring the environment setting for node js --max-old-space-size | VS Code is ignoring the environment setting for node js --max-old-space-size
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.94.2
- Node.js: v20.18.0
- OS Version:
Edition Windows 11 Pro
Version 23H2
Installed on 9/10/2023
OS build 22631.4317
Experience Windows Feature Experience Pack 1000.22700.1041.0
Steps to Reproduce:
1. Setup node options for user/system
2. Open cmd to validate
3. Open ps to validate
4. Open VS Code to validate - value is ignored/overwritten with something else
Thanks,
Regards,
Evgeny
| bug,help wanted,good first issue,windows,terminal-process | low | Critical |
2,580,213,697 | opencv | Convert Pixel Function (including a prototype) | ### Describe the feature and motivation
Every now and then I want to convert single pixel values, including channels, depth, color space, scaling, and container type (e.g. `VecN` and `Scalar`). At the moment that is not possible in OpenCV with a single function; I would like to have such a function.
Following is a probably not so perfect and still incomplete prototype of such a function. What do you think is missing besides grayscale support?
### Examples:
```C++
//HLS (0-255) to RGB
cv::Vec3b rgb = convert_pix<cv::COLOR_HLS2RGB_FULL>(cv::Vec3b(hue, 128, 255));
//channel manipulation and scaling
cv::Vec4b rgba = convert_pix<cv::COLOR_BGR2RGBA>(cv::Vec3b(1.0, 0, 0.5), 255.0);
//binarize and convert from Scalar to Vec4b
cv::Vec4b binarized = convert_pix<-1, Scalar, cv::Vec4b, true>(scalar, 1.0/255.0);
```
### Code:
```c++
template<typename T>
constexpr int matrix_depth() {
if constexpr(std::is_same_v<T, uchar>) {
return CV_8U;
} else if constexpr(std::is_same_v<T, short>){
return CV_16S;
} else if constexpr(std::is_same_v<T, ushort>){
return CV_16U;
} else if constexpr(std::is_same_v<T, int>){
return CV_32S;
} else if constexpr(std::is_same_v<T, float>){
return CV_32F;
} else if constexpr(std::is_same_v<T, double>){
return CV_64F;
} else {
// static_assert(false) here is ill-formed before C++23 even in a
// discarded branch, so use a dependent-false expression instead.
static_assert(sizeof(T) == 0, "Type not supported for operation.");
return 0;
}
}
template<bool Tround> double doRound(double t) {
if constexpr(Tround) {
return std::round(t);
} else {
return t;
}
}
/*!
* Convenience function to color convert from all Vec_ and Scalar_ variants
*/
template<int Tcode = -1, typename Tsrc, typename Tdst = Vec<typename Tsrc::value_type, Tsrc::channels>, bool Tround = std::is_floating_point_v<typename Tsrc::value_type> && std::is_integral_v<typename Tdst::value_type>>
Tdst convert_pix(const Tsrc &src, double alpha = 1.0, double beta = 0.0) {
constexpr int srcCn = Tsrc::channels;
constexpr int dstCn = Tdst::channels;
using srcv_t = typename Tsrc::value_type;
using dstv_t = typename Tdst::value_type;
using src_internal_t = Vec<srcv_t, srcCn>;
using intermediate_t = Vec<srcv_t, dstCn>;
using dst_internal_t = Vec<dstv_t, dstCn>;
static_assert((srcCn == 3 || srcCn == 4) && (dstCn == 3 || dstCn == 4), "Only 3 or 4 (src/dst) channels supported");
constexpr int srcType = CV_MAKETYPE(
matrix_depth<typename src_internal_t::value_type>(),
src_internal_t::channels);
constexpr int intermediateType = CV_MAKETYPE(
matrix_depth<typename src_internal_t::value_type>(), dstCn);
constexpr int dstType = CV_MAKETYPE(
matrix_depth<typename dst_internal_t::value_type>(), dstCn);
std::array<src_internal_t, 1> srcArr;
if constexpr (srcCn == 3) {
srcArr[0] = src_internal_t(src[0], src[1], src[2]);
} else {
srcArr[0] = src_internal_t(src[0], src[1], src[2], src[3]);
}
cv::Mat intermediateMat(cv::Size(1, 1), intermediateType);
if constexpr (dstCn == srcCn) {
intermediateMat = srcArr[0];
} else if constexpr (srcCn == 3) {
intermediateMat = intermediate_t(srcArr[0][0], srcArr[0][1],
srcArr[0][2]);
} else if constexpr (srcCn == 4) {
intermediateMat = intermediate_t(srcArr[0][0], srcArr[0][1],
srcArr[0][2], srcArr[0][3]);
}
if constexpr (Tcode >= 0) {
cvtColor(srcArr, intermediateMat, Tcode);
}
std::array<dst_internal_t, 1> dstArr;
if constexpr (!std::is_same<srcv_t, dstv_t>::value) {
//will just copy if types match
if constexpr (dstCn == srcCn) {
intermediateMat.convertTo(dstArr, dstType);
} else if constexpr (dstCn == 3) {
cvtColor(intermediateMat, intermediateMat, cv::COLOR_BGRA2BGR);
intermediateMat.convertTo(dstArr, dstType);
} else if constexpr (dstCn == 4) {
cvtColor(intermediateMat, intermediateMat, cv::COLOR_BGR2BGRA);
intermediateMat.convertTo(dstArr, dstType);
}
} else {
if constexpr (dstCn == srcCn) {
dstArr[0] = intermediateMat.at<src_internal_t>(0);
} else if constexpr (dstCn == 3) {
auto im = intermediateMat.at<src_internal_t>(0);
dstArr[0] = dst_internal_t(im[0], im[1], im[2]);
} else if constexpr (dstCn == 4) {
auto im = intermediateMat.at<src_internal_t>(0);
if (intermediateMat.depth() == CV_32F
|| intermediateMat.depth() == CV_64F) {
dstArr[0] = dst_internal_t(im[0], im[1], im[2], 255.0);
} else {
dstv_t a = std::numeric_limits<dstv_t>::max();
dstArr[0] = dst_internal_t(im[0], im[1], im[2], a);
}
}
}
Tdst dst;
if constexpr (dstCn == 3) {
dst = Tdst(dstArr[0][0], dstArr[0][1], dstArr[0][2]);
} else if constexpr (dstCn == 4) {
dst = Tdst(dstArr[0][0], dstArr[0][1], dstArr[0][2], dstArr[0][3]);
}
if (alpha != 1.0) {
if constexpr (dstCn == 3) {
dst[0] = doRound<Tround>(dst[0] * alpha);
dst[1] = doRound<Tround>(dst[1] * alpha);
dst[2] = doRound<Tround>(dst[2] * alpha);
} else if constexpr (dstCn == 4) {
dst[0] = doRound<Tround>(dst[0] * alpha);
dst[1] = doRound<Tround>(dst[1] * alpha);
dst[2] = doRound<Tround>(dst[2] * alpha);
dst[3] = doRound<Tround>(dst[3] * alpha);
}
}
if (beta != 0.0) {
if constexpr (dstCn == 3) {
dst[0] = doRound<Tround>(dst[0] + beta);
dst[1] = doRound<Tround>(dst[1] + beta);
dst[2] = doRound<Tround>(dst[2] + beta);
} else if constexpr (dstCn == 4) {
dst[0] = doRound<Tround>(dst[0] + beta);
dst[1] = doRound<Tround>(dst[1] + beta);
dst[2] = doRound<Tround>(dst[2] + beta);
dst[3] = doRound<Tround>(dst[3] + beta);
}
}
return dst;
}
```
### Additional context
_No response_ | feature | low | Major |
2,580,275,725 | PowerToys | Capslock rebind | ### Description of the new feature / enhancement
Caps Lock is kind of useless; being able to replace it with a shortcut could prove useful.
### Scenario when this would be used?
E.g. using the Caps Lock key instead of Alt+Tab, to improve productivity.
### Supporting information
Many other software contain the ability to rebind keys, and have proved rather helpful and popular. | Needs-Triage | low | Minor |
2,580,348,371 | godot | Android editor crashes when running multiple instances of a project | ### Tested versions
Tested Versions:
Current Version: Godot 4.4.0 dev3 (commit hash: [insert commit hash here if available])
Reproducibility:
The issue is consistently reproducible in the current version (4.4.0 dev3).
Additional Versions Tested:
1. Godot 4.3.stable:
Issue: Reproducible
2. Godot 4.2.stable:
Issue: Reproducible
3. Godot 4.1.stable:
Issue: Reproducible
4. Godot 4.2.dev1:
Issue: Reproducible
5. Godot 4.2.dev2:
Issue: Reproducible
Summary:
The bug is reproducible across all tested versions. This indicates that it is not a regression, as the issue has been present in these versions. Identifying the root cause will be essential for resolving the problem
### System information
System Information: Operating System: Android 11; Device Model: Infinix Hot 11 Play; CPU Model: MediaTek Helio G35; CPU Architecture: ARM64; GPU Model: PowerVR GE8320; RAM: 3 GB / 4 GB; Storage: 32 GB / 64 GB (expandable via microSD); Battery: 6000 mAh; Rendering Backend: GLES3; Godot Version: Godot 4.4.0 dev3
### Issue description
https://github.com/user-attachments/assets/56806949-6966-4992-9c26-abb6c3149a7d
I am facing a critical issue in my Godot project on Android. Whenever I open multiple instances of the application, the app closes automatically right after running the project. There is no error message or crash log; the app just exits abruptly. It seems like the app is unable to handle more than one instance properly, leading to an immediate closure.
The issue does not appear to be related to any specific bug in the code, as the app works perfectly with a single instance. However, as soon as I try to run multiple instances, the app cuts off and closes without any warning.
This is a major problem because I am developing a multiplayer game, and running multiple instances is crucial for testing and gameplay. Without resolving this issue, it is impossible to proceed with the multiplayer functionality. I would appreciate it if this could be addressed and fixed as soon as possible.
### Steps to reproduce
1. Open the Godot project on an Android device.
2. Run the project and open a single instance of the app. The app works fine.
3. Now, attempt to open multiple instances of the app (either by launching it again or through a multitasking action on the Android device).
4. Notice that the app exits automatically without any error or crash log right after running the second instance.
If needed, I can provide a minimal reproduction project that demonstrates this issue. It includes basic scenes and the necessary setup to replicate the problem on Android. Simply run the project on an Android device, open multiple instances, and observe the app's behavior as described above.
### Minimal reproduction project (MRP)
I have created a minimal reproduction project that demonstrates the issue. The project is written in GDScript, as C# is not supported for Android development in this context.
The project includes only the necessary files to reproduce the issue, and I have ensured that the .godot folder is not included in the archive (the project.godot file is retained).
| bug,platform:android,topic:editor,crash | low | Critical |
2,580,463,299 | pytorch | test/distributed/_tensor/experimental/test_tp_transform.py TensorParallelTest.test_tp_transform_e2e fail with functionalization V2 | python test/distributed/_tensor/experimental/test_tp_transform.py TensorParallelTest.test_tp_transform_e2e (With V2 enabled)
**What is going on?**
We hit this error:
`RuntimeError: Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()`
The stack trace in that case is:
```
...
exported_program = torch.export.export(
File "/home/lsakka/pytorch/torch/export/__init__.py", line 366, in export
return _export(
File "/home/lsakka/pytorch/torch/export/_trace.py", line 1021, in wrapper
raise e
File "/home/lsakka/pytorch/torch/export/_trace.py", line 994, in wrapper
ep = fn(*args, **kwargs)
File "/home/lsakka/pytorch/torch/export/exported_program.py", line 116, in wrapper
return fn(*args, **kwargs)
File "/home/lsakka/pytorch/torch/export/_trace.py", line 1940, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/lsakka/pytorch/torch/export/_trace.py", line 1234, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/lsakka/pytorch/torch/export/_trace.py", line 1343, in _strict_export_lower_to_aten_ir
aten_export_artifact = lower_to_aten_callback(
File "/home/lsakka/pytorch/torch/export/_trace.py", line 641, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/home/lsakka/pytorch/torch/_functorch/aot_autograd.py", line 1262, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/home/lsakka/pytorch/torch/_functorch/aot_autograd.py", line 1497, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/home/lsakka/pytorch/torch/_functorch/aot_autograd.py", line 524, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/lsakka/pytorch/torch/_functorch/aot_autograd.py", line 762, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 111, in aot_dispatch_export
graph, _, _ = aot_dispatch_base_graph(
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 136, in aot_dispatch_base_graph
fw_module = _create_graph(
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 54, in _create_graph
fx_g = make_fx(
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 2147, in wrapped
return make_fx_tracer.trace(f, *args)
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 2085, in trace
return self._trace_inner(f, *args)
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 2056, in _trace_inner
t = dispatch_trace(
File "/home/lsakka/pytorch/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/home/lsakka/pytorch/torch/_dynamo/eval_frame.py", line 654, in _fn
return fn(*args, **kwargs)
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 1133, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 1652, in trace
res = super().trace(root, concrete_args)
File "/home/lsakka/pytorch/torch/_dynamo/eval_frame.py", line 654, in _fn
return fn(*args, **kwargs)
File "/home/lsakka/pytorch/torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 1188, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 413, in _functionalized_f_helper
f_outs = fn(*f_args)
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 78, in inner_fn
outs = fn(*args)
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/home/lsakka/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 863, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/home/lsakka/pytorch/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
File "/home/lsakka/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6494, in run_node
result = super().run_node(n)
File "/home/lsakka/pytorch/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/lsakka/pytorch/torch/fx/interpreter.py", line 275, in call_function
return target(*args, **kwargs)
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 1236, in __torch_function__
return func(*args, **kwargs)
File "/home/lsakka/pytorch/torch/fx/experimental/proxy_tensor.py", line 1274, in __torch_function__
return func(*args, **kwargs)
File "/home/lsakka/pytorch/torch/_subclasses/functional_tensor.py", line 225, in __torch_dispatch__
raise RuntimeError(
RuntimeError: Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()
While executing %ones_like : [num_users=1] = call_function[target=torch.ones_like](args = (%x_2,), kwargs = {})
Original traceback:
File "/home/lsakka/pytorch/test/distributed/_tensor/experimental/test_tp_transform.py", line 42, in forward
return x + torch.ones_like(x)
```
**The code that throws:**
```
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
unrecognized_types = [
t
for t in types
if t not in [torch.Tensor, torch._subclasses.FakeTensor, FunctionalTensor]
]
if unrecognized_types:
not_implemented_log.debug(
"FunctionalTensor unrecognized subclass(es): %s", unrecognized_types
)
return NotImplemented
if kwargs is None:
kwargs = {}
# assert False
print(func)
# FunctionalTensor needs to plumb all metadata requests to the inner tensor.
# In theory we don't have to do this - but if we want to service metadata requests here,
# we need to carefully make sure all metadata is accurate (including metadata mutations)
if func in FunctionalTensor.metadata_fns:
# All metadata accesses should be plumbed to the inner tensor, that way we don't have to worry
# about the problem of keeping metadata in sync between the wrapper and inner tensor.
# This also alleviates us from having to manually handle metadata mutations on the wrapper.
assert len(kwargs) == 0
if func in [
torch.ops.aten.is_strides_like_format.default,
torch.ops.aten.is_contiguous.memory_format,
]:
assert len(args) == 2 and isinstance(args[0], FunctionalTensor)
return func(torch._from_functional_tensor(args[0].elem), args[1])
assert len(args) == 1 and isinstance(args[0], FunctionalTensor)
return func(torch._from_functional_tensor(args[0].elem))
# Originally I tried to implement my subclass without giving it a torch_dispatch, but I gave up:
# - _make_wrapper_subclass requires a __torch_dispatch__
# - If we want to use _make_subclass(), we have a problem: the subclass will share a TensorImpl with the inner tensor,
# which is of type FunctionalTensorWrapper! We explicitly do not want our wrapper to be a FunctionalTensorWrapper.
# - If we use the default tensor.__new__(), we have another problem: it returns inner_tensor.alias(),
# which causes every subclass created above autograd to have autograd view metadata
# (in addition to also being a FunctionalTensorWrapper).
raise RuntimeError(
"Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()"
)
```
The input to that function is `aten.detach`, which is not in `FunctionalTensor.metadata_fns`.
**Why does it happen only with V2?**
`aten.detach` is not part of the original graph, but the computation of the base tensor in inference mode ends up making us trace through this detach call. If I remove the code below from the FunctionalTensor constructor, the problem is solved. It seems like calling `untyped_storage` ends up generating this detach:
```
if out.is_base_tensor():
out._inference_mode_base = None
# This assumes that the FunctionalTensor.elem does not change its storage after this point.
# Otherwise this would be invalid.
mode._storage_to_base[out.elem.untyped_storage()] = out
else:
out._inference_mode_base = mode._storage_to_base[
out.elem.untyped_storage()
]
assert out._inference_mode_base is not None
return out
```
I tried two things that solve the issue:
1. Do not compute the base tensor in inference mode if we are in export mode (the case for this test). **But** I don't know if there could be a more general issue that can be hit even in non-export mode, and whether this just avoids it.
https://github.com/pytorch/pytorch/pull/137760
2. Add `detach` to `FunctionalTensor.metadata_fns`. (**I have no idea if that is safe.**)
3. Both, if 2 is safe? Otherwise, is it possible to run into this issue in non-export mode?
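For illustration, the dispatch shape that makes option 2 relevant can be sketched in plain Python (no torch; `METADATA_FNS` and the op names below are hypothetical stand-ins for `FunctionalTensor.metadata_fns`): an op is only serviced on its own if it is in the allowlist, otherwise it raises unless a mode is active.

```python
# Plain-Python sketch of the allowlist check in FunctionalTensor.__torch_dispatch__.
# METADATA_FNS and the op names are hypothetical stand-ins, not torch APIs.
METADATA_FNS = {"aten.sym_size", "aten.stride"}  # hypothetical allowlist


def dispatch(op_name: str, mode_active: bool) -> str:
    if mode_active:
        # Under an active mode every op is serviced.
        return f"{op_name}: handled under FunctionalTensorMode"
    if op_name in METADATA_FNS:
        # Metadata ops are plumbed to the inner tensor even without a mode.
        return f"{op_name}: plumbed to the inner tensor"
    raise RuntimeError(
        "Attempting to use FunctionalTensor on its own. Instead, please use it "
        "with a corresponding FunctionalTensorMode()"
    )


# Metadata ops are fine outside a mode; aten.detach is not in the allowlist,
# so the inference-mode base computation that emits it trips this error.
print(dispatch("aten.sym_size", mode_active=False))
try:
    dispatch("aten.detach", mode_active=False)
except RuntimeError as e:
    print("raises:", e)
```

In these terms, option 2 amounts to moving `aten.detach` into the allowlist, while option 1 avoids emitting it in the first place during export.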
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @bdhirsh @ezyang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @zou3519 | oncall: distributed,triaged,module: functionalization,oncall: pt2,module: inductor,module: pt2-dispatcher,module: reinplacing | low | Critical |
2,580,464,523 | opencv | Test_ONNX_layers.ResizeUnfusedTwoInputs test fails with OpenVINO after the new DNN engine integration | ### System Information
Platform: Any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
The failed model:
```
testONNXModels("upsample_unfused_two_inputs_opset11_torch1.4", npy, 0, 0, false, true, 2);
```
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,580,492,553 | next.js | i18n configuration causes 500 error when certain malformed URLs are visited | ### Link to the code that reproduces this issue
https://github.com/Parker-Echo/nextjs-reproduction-app-bug
### To Reproduce
1. Start the server `next dev`.
2. Run `curl -k http://localhost:3000/\\\\\\%20../%20../%20../%20../%20../%20../foobar`
3. Observe it returns a 500 error and logs the following
```
Failed to handle request for /\\\%20../%20../%20../%20../%20../%20../foobar
TypeError: Invalid URL
at new URL (node:internal/url:775:36)
at parseRelativeUrl (my-app/node_modules/next/dist/shared/lib/router/utils/parse-relative-url.js:16:68)
at parseUrl (my-app/node_modules/next/dist/shared/lib/router/utils/parse-url.js:15:55)
at requestHandlerImpl (my-app/node_modules/next/dist/server/lib/router-server.js:115:54)
at Server.requestListener (my-app/node_modules/next/dist/server/lib/start-server.js:141:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
code: 'ERR_INVALID_URL',
input: '/\\\\\\%20../%20../%20../%20../%20../%20../etc/passwd/',
base: 'http://n/'
}
```
### Current vs. Expected behavior
Following the steps from the previous section, I expect a redirect or a 404 error.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.9.0
npm: 10.1.0
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.183 // Latest available version is detected (15.0.0-canary.183).
eslint-config-next: N/A
react: 19.0.0-rc-2d16326d-20240930
react-dom: 19.0.0-rc-2d16326d-20240930
typescript: 5.3.3
Next.js Config:
```
### Which area(s) are affected? (Select all that apply)
Not sure, Internationalization (i18n), Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Other (Deployed)
### Additional context
I can't reproduce if I access the URL via Postman or the browser, possibly because of different encodings. I reproduced against 14.2 as well. It only happens with i18n configured.
Forum post: https://nextjs-forum.com/post/1293815757494288404 | bug,Internationalization (i18n),Runtime | low | Critical |
2,580,524,384 | ollama | [Feature request] Support external image URL for Multi Modal Models / Vision LLMs | 1. download the image
2. load the image
3. run inference on the image
4. profit
This is especially useful if you're running ollama on a server and you can't just drag and drop an image
_Ideally_
```
$ ollama run minicpm-v --verbose
>>> https://farmhouseguide.com/wp-content/uploads/2021/08/group-of-llama-ee220513.jpg
Added image './group-of-llama-ee220513.jpg'
The image shows a group of lamas gathered around a water source in an outdoor, mountainous
landscape. There are six animals visible: four white llamas with thick woolly coats and two
reddish-brown guanacos or vicuรฑas. The setting appears to be high-altitude terrain with sparse
vegetation and rocky ground.
```
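Until this is built in, the three steps can be scripted against the existing `/api/generate` endpoint, which already accepts base64-encoded images in the `images` field. The image URL and model name below are just the examples from this request, and the sketch assumes a local ollama server on the default port; call `main()` yourself to actually run it.

```python
# Sketch: download an external image, base64-encode it, and send it to a
# local ollama server via /api/generate (which accepts base64 images).
import base64
import json
import urllib.request


def fetch_image_b64(url: str) -> str:
    """Download an image and return it base64-encoded, as /api/generate expects."""
    with urllib.request.urlopen(url) as resp:
        return base64.b64encode(resp.read()).decode("ascii")


def build_payload(model: str, prompt: str, image_b64: str) -> bytes:
    """Build the JSON request body for /api/generate."""
    return json.dumps(
        {"model": model, "prompt": prompt, "images": [image_b64], "stream": False}
    ).encode("utf-8")


def main() -> None:
    # Requires a running ollama server and network access; not invoked here.
    b64 = fetch_image_b64(
        "https://farmhouseguide.com/wp-content/uploads/2021/08/group-of-llama-ee220513.jpg"
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=build_payload("minicpm-v", "Describe this image.", b64),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```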
| feature request | low | Minor |
2,580,526,343 | kubernetes | delete PVC created by statefulset on pod eviction | ### What would you like to be added?
Similarly to the [persistentVolumeClaimRetentionPolicy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention), it would be nice to have the possibility to delete the PVC on pod eviction.
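As a sketch of what the API could look like (the `whenEvicted` field below is hypothetical and does not exist today; `whenDeleted` and `whenScaled` are the real fields):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-storage-app
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete     # exists today
    whenScaled: Delete      # exists today
    whenEvicted: Delete     # hypothetical: the behavior this request asks for
  serviceName: local-storage-app
  replicas: 3
  # ... selector, template, and volumeClaimTemplates as usual
```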
### Why is this needed?
In the case of local storage provisioners (like LVM or the static provisioner), where the PV is tied to a node, this would avoid having to manually delete the PVC on node roll-over in order to let the newly scheduled pod start. | sig/storage,kind/feature,sig/apps,needs-triage | low | Major |
2,580,529,077 | pytorch | DISABLED test_ddp_profiling_execution_trace (__main__.TestDistBackendWithSpawn) | This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2Ftest_distributed_spawn.py%3A%3ATestDistBackendWithSpawn%3A%3Atest_ddp_profiling_execution_trace%22%5D)).
This test fails when distributed tries to update CI infra:
https://github.com/pytorch/pytorch/pull/137161
Disabling it.
Plus, the test seems to emphasize the profiler result. Consider moving it over in the longer term.
Also, the test seems to require an exact match of the log result, which is something we don't guarantee.
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,skipped | low | Critical |
2,580,533,926 | deno | `Worker` constructor should synchronously fetch URL content if it's a blob/object URL (i.e. created with `URL.createObjectURL`) | Version: Deno 2.0.0
This works in Chrome and Firefox, but intermittently fails in Deno due to the synchronous `revokeObjectURL` after the `Worker` construction:
```js
let workers = [];
for (let i = 0; i < 10; i++) {
  let workerBlob = new Blob([`
    self.onmessage = async function(e) {
      console.log('Worker got message');
    }
  `], { type: "application/javascript" });
  let workerURL = URL.createObjectURL(workerBlob);
  let worker = new Worker(workerURL, { type: "module" });
  URL.revokeObjectURL(workerURL);
  worker.postMessage('');
  workers.push(worker);
}
```
In Deno, it results in logs like:
```
Worker got message
error: Uncaught (in worker "") Module not found "blob:null/8ec4f15b-3aff-4015-baf7-6c69004e0126".
error: Uncaught (in worker "") Module not found "blob:null/34db8840-451e-4ab3-b052-53b241e4d3fc".
Worker got message
Worker got message
Worker got message
Worker got message
Worker got message
Worker got message
error: Uncaught (in worker "") Module not found "blob:null/ca1de874-40cd-47a7-9e4e-f4d8de62d46e".
``` | bug,web | low | Critical |
2,580,542,902 | pytorch | [CPU]Tracking complex datatype issues | This issue tracks issues related to complex data types on CPU.
### Issues:
* torch.linalg.norm & torch.norm: https://github.com/pytorch/pytorch/issues/132634
* torch.acos: https://github.com/pytorch/pytorch/issues/134487
* torch.nn.functional.normalize: https://github.com/pytorch/pytorch/issues/135428
* torch.sigmoid: https://github.com/pytorch/pytorch/issues/135777
* torch.lobpcg: https://github.com/pytorch/pytorch/issues/135860
* torch.exp: https://github.com/pytorch/pytorch/issues/136063
* torch.asin: https://github.com/pytorch/pytorch/issues/138327
* https://github.com/pytorch/pytorch/issues/141487
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames | module: cpu,triaged,module: complex | low | Minor |
2,580,626,854 | kubernetes | [Flaking Test] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive] [NodeFeature:Eviction] should cause PIDPressure should eventually evict all of the correct pods | ### Which jobs are flaking?
- ci-crio-cgroupv2-node-e2e-eviction
- ci-crio-cgroupv1-node-e2e-eviction
also
- pull-crio-cgroupv1-node-e2e-eviction
- pull-crio-cgroupv2-node-e2e-eviction
https://storage.googleapis.com/k8s-triage/index.html?test=PodAndContainerStatsFromCRI
### Which tests are flaking?
when we run containers with PodAndContainerStatsFromCRI=true or false.
- It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive] [NodeFeature:Eviction] when we run containers with PodAndContainerStatsFromCRI=true that should cause PIDPressure should eventually evict all of the correct pods
- E2eNode Suite.[It] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive] [NodeFeature:Eviction] when we run containers with PodAndContainerStatsFromCRI=false that should cause PIDPressure should eventually evict all of the correct pods
### Since when has it been flaking?
years ago: https://github.com/kubernetes/kubernetes/issues/107804
### Testgrid link
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction and https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-eviction
### Reason for failure (if possible)
```
STEP: checking eviction ordering and ensuring important pods don't fail - k8s.io/kubernetes/test/e2e_node/eviction_test.go:770 @ 10/10/24 16:06:49.933
[FAILED] pod fork-bomb-container-with-high-priority-pod failed; expected Status.Reason to be Evicted, but got
Expected
<string>:
to equal
<string>: Evicted
In [It] at: k8s.io/kubernetes/test/e2e_node/eviction_test.go:803 @ 10/10/24 16:06:49.934
< Exit [It] should eventually evict all of the correct pods - k8s.io/kubernetes/test/e2e_node/eviction_test.go:628 @ 10/10/24 16:06:49.934 (56.174s)
> Enter [AfterEach] TOP-LEVEL - k8s.io/kubernetes/test/e2e_node/eviction_test.go:690 @ 10/10/24 16:06:49.934
STEP: deleting pods - k8s.io/kubernetes/test/e2e_node/eviction_test.go:701 @ 10/10/24 16:06:49.934
```
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig node | sig/node,kind/flake,priority/important-longterm,triage/accepted | low | Critical |
2,580,641,678 | yt-dlp | [site request] https://www.boomplay.com | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
*
### Example URLs
https://www.boomplay.com/songs/165481965
https://www.boomplay.com/video/1154892
https://www.boomplay.com/playlists/33792494?from=home
https://www.boomplay.com/artists/9554405?from=home
### Provide a description that is worded well enough to be understood
the generic extractor does not work
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://www.boomplay.com/songs/165481965', '-vU']
[debug] User config "/home/solomoncyj/.yt-dlp/config.txt": ['-o', './%(playlist_title|)s/%(title)s.%(ext)s', '-N', '50', '-S', 'vcodec:vp9', '--embed-metadata', '--sub-langs', 'all,-live_chat', '--embed-subs', '--live-from-start']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.07 from yt-dlp/yt-dlp [1a176d874]
[debug] Lazy loading extractors is disabled
[debug] Python 3.13.0 (CPython x86_64 64bit) - Linux-6.11.2-300.fc41.x86_64-x86_64-with-glibc2.40 (OpenSSL 3.2.2 4 Jun 2024, glibc 2.40)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.07 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.07 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.boomplay.com/songs/165481965
[generic] 165481965: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 165481965: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.boomplay.com/songs/165481965
Traceback (most recent call last):
File "/home/solomoncyj/.local/lib/python3.13/site-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
File "/home/solomoncyj/.local/lib/python3.13/site-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
File "/home/solomoncyj/.local/lib/python3.13/site-packages/yt_dlp/extractor/common.py", line 741, in extract
ie_result = self._real_extract(url)
File "/home/solomoncyj/.local/lib/python3.13/site-packages/yt_dlp/extractor/generic.py", line 2526, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.boomplay.com/songs/165481965
```
| site-enhancement,triage | low | Critical |
2,580,680,770 | godot | Opening a native FileDialog while a Popup window is active makes FileDialog unresponsive on Windows locking the entire application | ### Tested versions
4.3.stable
### System information
Windows 11 / macOS 15.0.1
### Issue description
If you have a separate native Popup window open, it by default consumes all input events until you close it, which is intended behavior for dialogs and such. However, if you open a native FileDialog while such a Popup window is open, the Popup seems to prevent all mouse events from reaching the file dialog as well. This makes the file dialog unresponsive to all clicks (sometimes after the first click) on Windows, so you can't Open/Save, Cancel or close it. And since the file picker dialog is modal, this effectively locks up the entire application and you need to force quit it.
On macOS the FileDialog does not become unresponsive in the same way, but when you dismiss it you get an error:
```
E 0:00:03:0885 window_move_to_foreground: Condition "!windows.has(p_window)" is true.
<C++ Source> platform/macos/display_server_macos.mm:2563 @ window_move_to_foreground()
```
I think this might be related to Godot trying to return the focus to the popup window that might have been closed by the native file dialog opening?
I've worked around it by temporarily hiding the Popup (setting `visible = false`) before the FileDialog is opened and showing it again when the FileDialog is closed, which works fine. But it would be nice if the Popup could remain open and just not prevent events from reaching the FileDialog on Windows.
### Steps to reproduce
- Have a separate native Popup window open (not embedded window)
- Open a native FileDialog
- Try to interact with the FileDialog on Windows
### Minimal reproduction project (MRP)
[unresponsive-filedialog.zip](https://github.com/user-attachments/files/17339303/unresponsive-filedialog.zip)
| bug,topic:porting,topic:gui | low | Critical |
2,580,685,993 | ant-design | Popover should support configuring the popup animation | ### What problem does this feature solve?
Popover used to allow changing the popup animation via `transitionName="ant-slide-up"`, but this no longer seems to work, and the API documentation does not mention this prop at all. A stable API is needed to configure it.
### What does the proposed API look like?
A stable API for configuring the popup animation.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,Inactive | low | Minor |
2,580,687,809 | electron | [Bug]: Handle count increases after switching app focus in multi-monitor setup | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10 19045
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
Handle count does not increase after switching focus between running apps
### Actual Behavior
Handle count does increase after switching focus back to electron app
### Testcase Gist URL
(https://gist.github.com/3b4d6eb51efc49cc6d0ac8554b5070bc)
### Additional Information
This only happens in a setup where more than one monitor is connected to the PC:
Each time the electron app loses focus (e.g. by switching to another app in the Windows taskbar) and is brought back to focus (e.g. by clicking on the electron app in the taskbar again), the handle count increases. It is a Windows registry handle that, as far as I understand, manages the displays on Windows. If you only have one monitor connected, the registry entries and thus the handles are closed, but as soon as you connect a second monitor these handles stay open and new ones are added.

I can't add a fiddle right now, but will do so later. However, it is reproducible with just the basic tutorial code.

I've also tried an older Electron version (29.x), but it showed the same behavior.
| platform/windows,bug :beetle:,has-repro-gist,32-x-y,33-x-y | low | Critical |
2,580,731,116 | terminal | Changing to a custom font doesn't work reliably | ### Windows Terminal version
1.21.2701.0
### Windows build number
10.0.22631.4317
### Other Software
_No response_
### Steps to reproduce
* Install custom font like "Fira Mono" or "IBM Plex Mono"
* Try to configure it in Windows Terminal
### Expected Behavior
I expect the font to be changed
### Actual Behavior
Windows Terminal claims it cannot find the font when pressing the save button. This also happens if I change the font directly in the JSON: as soon as the config file is saved, Terminal says it cannot find the font.
However, after some trigger that I have not yet identified (it may just be random), it suddenly works until I restart Terminal again.
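For reference, this is the kind of JSON edit in question (a minimal `settings.json` fragment, setting the font on the profile defaults, with "Fira Mono" as the example font):

```json
{
    "profiles": {
        "defaults": {
            "font": {
                "face": "Fira Mono"
            }
        }
    }
}
```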
| Issue-Bug,Needs-Tag-Fix | low | Major |
2,580,753,196 | deno | deno install deep/peer dependency types version collision error during build | Version: Deno 2.0.0
I am currently testing Deno as a package-manager replacement in our repo, since v2 seems pretty promising (I also benchmarked a build). Sadly, the build breaks with errors reporting type version collisions where a peer/deep dependency has a different version. This is somewhat related to #17286, but here I want to report a typing issue.
<details>
<summary>Our package.json stripped</summary>
```json
{
"name": "",
"version": "0.0.0",
"description": "",
"license": "UNLICENSED",
"private": true,
"type": "module",
"dependencies": {
"@angular/animations": "19.0.0-next.9",
"@angular/common": "19.0.0-next.9",
"@angular/core": "19.0.0-next.9",
"@angular/forms": "19.0.0-next.9",
"@angular/platform-browser": "19.0.0-next.9",
"@angular/platform-browser-dynamic": "19.0.0-next.9",
"@angular/router": "19.0.0-next.9",
"@angular/service-worker": "19.0.0-next.9",
"@capacitor/app": "^6.0.1",
"@capacitor/app-launcher": "^6.0.2",
"@capacitor/browser": "^6.0.2",
"@capacitor/camera": "^6.0.2",
"@capacitor/clipboard": "^6.0.1",
"@capacitor/core": "^6.1.2",
"@capacitor/device": "^6.0.1",
"@capacitor/dialog": "^6.0.1",
"@capacitor/filesystem": "^6.0.1",
"@capacitor/geolocation": "^6.0.1",
"@capacitor/keyboard": "^6.0.2",
"@capacitor/network": "^6.0.2",
"@capacitor/preferences": "^6.0.2",
"@capacitor/push-notifications": "^6.0.2",
"@capacitor/share": "^6.0.2",
"@capacitor/splash-screen": "^6.0.2",
"@capacitor/status-bar": "^6.0.1",
"@capacitor/toast": "^6.0.2",
"@date-fns/utc": "^2.1.0",
"@ionic/angular": "8.3.1",
"@ionic/core": "^8.3.2",
"@ionic/pwa-elements": "^3.3.0",
"@jsverse/transloco": "^7.5.0",
"@lottiefiles/dotlottie-web": "^0.35.0",
"@ngneat/error-tailor": "^5.0.1",
"@ngneat/reactive-forms": "^5.0.2",
"@sentry/angular": "8.33.0",
"@sentry/capacitor": "1.0.1",
"@supabase/auth-js": "^2.65.0",
"@tanstack/angular-query-experimental": "^5.59.6",
"@total-typescript/ts-reset": "^0.6.1",
"date-fns": "^4.1.0",
"dot-prop": "latest",
"hash-wasm": "^4.11.0",
"humanize-string": "^3.0.0",
"ionicons": "^7.4.0",
"jose": "^5.9.3",
"marked": "^14.1.2",
"nanoid": "^5.0.7",
"ngxtension": "^4.0.0",
"rxjs": "^7.8.1",
"shiki": "^1.22.0",
"slugify": "^1.6.6",
"sort-on": "^6.1.0",
"swiper": "^11.1.14",
"tslib": "^2.7.0",
"type-fest": "^4.26.1"
},
"devDependencies": {
"@analogjs/vite-plugin-angular": "^1.8.2",
"@analogjs/vitest-angular": "^1.8.2",
"@angular-builders/custom-esbuild": "^18.0.0",
"@angular-devkit/architect": "0.1900.0-next.10",
"@angular-devkit/build-angular": "19.0.0-next.10",
"@angular-devkit/core": "19.0.0-next.10",
"@angular-devkit/schematics": "19.0.0-next.10",
"@angular/build": "19.0.0-next.10",
"@angular/cli": "19.0.0-next.10",
"@angular/compiler": "19.0.0-next.9",
"@angular/compiler-cli": "19.0.0-next.9",
"@biomejs/biome": "^1.9.3",
"@capacitor/android": "^6.1.2",
"@capacitor/cli": "^6.1.2",
"@capacitor/ios": "^6.1.2",
"@commitlint/cli": "^19.5.0",
"@commitlint/config-angular": "^19.5.0",
"@j-ulrich/release-it-regex-bumper": "^5.1.0",
"@jsverse/transloco-keys-manager": "^5.1.0",
"@jsverse/transloco-validator": "^7.0.1",
"@release-it/conventional-changelog": "^8.0.2",
"@sentry/cli": "^2.37.0",
"@types/apple-mapkit-js-browser": "^5.78.1",
"@types/node": "^22.7.5",
"angular-eslint": "^18.3.1",
"conventional-changelog-cli": "^5.0.0",
"dotenv": "^16.4.5",
"eslint": "^9.12.0",
"eslint-plugin-compat": "^6.0.1",
"eslint-plugin-unicorn": "^56.0.0",
"globals": "^15.11.0",
"html-minifier-terser": "^7.2.0",
"lefthook": "^1.7.18",
"prettier": "^3.3.3",
"prettier-plugin-organize-attributes": "^1.0.0",
"release-it": "^17.8.2",
"standard-changelog": "^6.0.0",
"stylelint": "^16.9.0",
"stylelint-config-clean-order": "^6.1.0",
"stylelint-config-standard-scss": "^13.1.0",
"ts-node": "^10.9.2",
"typescript": "^5.6.3",
"typescript-eslint": "^8.8.1",
"vite": "5.4.8",
"vite-tsconfig-paths": "^5.0.1",
"vitest": "^2.1.2"
},
"scripts": {
"postinstall": "[[ -z \"${CI}\" ]] && just || echo"
},
"pnpm": {
"overrides": {
"uuid": "latest",
"semver": "latest",
"@stencil/core": "latest"
},
"peerDependencyRules": {
"ignoreMissing": ["zone.js"],
"allowAny": ["zone.js"]
}
}
}
```
> This also includes some `pnpm`-specific settings that I am still trying to get working with Deno, if possible
</details>
#### Steps.
> https://codesandbox.io/p/devbox/6jv2wk (could not find a good cloud repl for deno that is free)
1. `deno install --allow-scripts=npm:lefthook,@sentry/cli,@biomejs/biome`
```shell
Warning The following packages are deprecated:
โ โ npm:glob@7.2.3 (Glob versions prior to v9 are no longer supported)
โ โ npm:glob@8.1.0 (Glob versions prior to v9 are no longer supported)
โโ npm:inflight@1.0.6 (This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.)
Warning The following packages contained npm lifecycle scripts (preinstall/install/postinstall) that were not executed:
โ โ npm:@sentry/capacitor@1.0.1
โ โ npm:nx@19.8.4
โ โ npm:lmdb@3.1.3
โ โ npm:lmdb@3.0.13
โ โ npm:nice-napi@1.0.2
โ โ npm:core-js-pure@3.38.1
โ โ npm:msgpackr-extract@3.0.3
โ
โ โ This may cause the packages to not work correctly.
โโ To run lifecycle scripts, use the `--allow-scripts` flag with `deno install`:
deno install --allow-scripts=npm:@sentry/capacitor@1.0.1,npm:nx@19.8.4,npm:lmdb@3.1.3,npm:lmdb@3.0.13,npm:nice-napi@1.0.2,npm:core-js-pure@3.38.1,npm:msgpackr-extract@3.0.3
sync hooks: โ๏ธ (pre-commit, commit-msg)
```
2. `deno run --allow-all node_modules/@angular/cli/bin/ng.js build`
```shell
โ [ERROR] NG2: Type 'FormGroup<{ firstname: FormControl<string>; lastname: FormControl<string>; username: FormControl<string>; gender: FormControl<Gender>; birthday: FormControl<...>; }>' is not assignable to type 'FormGroup<any>'.
Types of property 'controls' are incompatible.
Type '{ firstname: FormControl<string>; lastname: FormControl<string>; username: FormControl<string>; gender: FormControl<Gender>; birthday: FormControl<...>; }' is not assignable to type '{ [key: string]: AbstractControl<any, any>; }'.
Property 'firstname' is incompatible with index signature.
Type 'FormControl<string>' is not assignable to type 'AbstractControl<any, any>'. [plugin angular-compiler]
```
<details>
<summary>Add-on if one is interested in seeing build execution numbers (but with pnpm as the pm)</summary>
```
Direct execution of : node_modules/@angular/cli/bin/ng.js
## Node/pnpm
> node node_modules/@angular/cli/bin/ng.js build
Cold: 8.0s
HotC: 6.3s
## Bun(node)/pnpm
> bun run node_modules/@angular/cli/bin/ng.js build
Cold: 7.3s
HotC: 5.7s
## Bun(--bun)/pnpm
> bun run --bun node_modules/@angular/cli/bin/ng.js build
Cold: NaN
HotC: NaN
## Deno/pnpm
> deno run --allow-all node_modules/@angular/cli/bin/ng.js build
Cold: 8.3s
HotC: 5.8s
```
</details> | bug,install | low | Critical |
2,580,794,428 | rust | ICE: `ConstArgHasType has escaping bound vars, so it cannot be wrapped in a dummy binder.` | <!--
[31mICE[0m: Rustc ./a.rs '-Zcrate-attr=feature(generic_const_exprs) -ooutputfile -Zdump-mir-dir=dir' 'thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/wf.rs:687:21: '`ConstArgHasType(UnevaluatedConst { def: DefId(0:12 ~ a[d35a]::Value::{constant#0}), args: ['^0.Named(DefId(0:13 ~ a[d35a]::Trait::'_), "'_")] }, usize)` has escaping bound vars, so it cannot be wrapped in a dummy binder.'', 'thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/wf.rs:687:21: '`ConstArgHasType(UnevaluatedConst { def: DefId(0:12 ~ a[d35a]::Value::{constant#0}), args: ['^0.Named(DefId(0:13 ~ a[d35a]::Trait::'_), "'_")] }, usize)` has escaping bound vars, so it cannot be wrapped in a dummy binder.''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
#![feature(generic_const_exprs)]
type Value<'v> = &[[u8; SIZE]];
trait Trait: Fn(Value) -> Value {}
````
original:
````rust
type Value<'v> = &[[u8; SIZE]];
trait Trait: Fn(Value) -> Value {}
impl<F: Indexer<Foo> + Indexer<Bar>> Trait for F {
type A: Iterator<Item: Copy>;
//~^ ERROR associated type bounds are unstable
type B: Iterator<Item: 'static>;
//~^ ERROR associated type bounds are unstable
}
fn main() {
let _: Box<dyn Trait> = Box::new(|v: Value| v);
}
````
Version information
````
rustc 1.83.0-nightly (0321e73d1 2024-10-11)
binary: rustc
commit-hash: 0321e73d1cb3f739caa806927344eca6f96257b5
commit-date: 2024-10-11
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zcrate-attr=feature(generic_const_exprs)`
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0106]: missing lifetime specifier
--> /tmp/icemaker_global_tempdir.P3GyuBNzInxV/rustc_testrunner_tmpdir_reporting.lgaymy8wn468/mvce.rs:1:18
|
1 | type Value<'v> = &[[u8; SIZE]];
| ^ expected named lifetime parameter
|
help: consider using the `'v` lifetime
|
1 | type Value<'v> = &'v [[u8; SIZE]];
| ++
error[E0425]: cannot find value `SIZE` in this scope
--> /tmp/icemaker_global_tempdir.P3GyuBNzInxV/rustc_testrunner_tmpdir_reporting.lgaymy8wn468/mvce.rs:1:25
|
1 | type Value<'v> = &[[u8; SIZE]];
| ^^^^ not found in this scope
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> <crate attribute>:1:9
|
1 | feature(generic_const_exprs)
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
= note: `#[warn(incomplete_features)]` on by default
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.P3GyuBNzInxV/rustc_testrunner_tmpdir_reporting.lgaymy8wn468/mvce.rs:3:35
|
3 | trait Trait: Fn(Value) -> Value {}
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.P3GyuBNzInxV/rustc_testrunner_tmpdir_reporting.lgaymy8wn468/mvce.rs`
thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/wf.rs:687:21:
`ConstArgHasType(UnevaluatedConst { def: DefId(0:6 ~ mvce[ff9f]::Value::{constant#0}), args: ['^0.Named(DefId(0:7 ~ mvce[ff9f]::Trait::'_), "'_")] }, usize)` has escaping bound vars, so it cannot be wrapped in a dummy binder.
stack backtrace:
0: 0x7d62025de57a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h0c0d2e67483a7ce1
1: 0x7d6202e034a6 - core::fmt::write::h19fbfdd1fd25c23b
2: 0x7d62040c4091 - std::io::Write::write_fmt::hf8ab2da8ebc932a0
3: 0x7d62025de3d2 - std::sys::backtrace::BacktraceLock::print::h3d864635116d695a
4: 0x7d62025e08a6 - std::panicking::default_hook::{{closure}}::h2aa94d113a45cc15
5: 0x7d62025e06f0 - std::panicking::default_hook::h97fbe8692dbb54b4
6: 0x7d62016331df - std[b2c81c0a9485f2e4]::panicking::update_hook::<alloc[c9d20cda7901ded3]::boxed::Box<rustc_driver_impl[8c3cd927c140555]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7d62025e0fb8 - std::panicking::rust_panic_with_hook::hc4ff1f67175d51bb
8: 0x7d62025e0d8a - std::panicking::begin_panic_handler::{{closure}}::h064adafafa3d3d91
9: 0x7d62025dea29 - std::sys::backtrace::__rust_end_short_backtrace::hfd346037bc0f3979
10: 0x7d62025e0a4c - rust_begin_unwind
11: 0x7d620004cd00 - core::panicking::panic_fmt::hcd11b748a515a987
12: 0x7d61ff2b7c5e - <rustc_trait_selection[9580d8cfdaacc399]::traits::wf::WfPredicates as rustc_type_ir[89eb6eb643500842]::visit::TypeVisitor<rustc_middle[3ffb7b0bf421448c]::ty::context::TyCtxt>>::visit_ty
13: 0x7d620358cc09 - rustc_trait_selection[9580d8cfdaacc399]::traits::wf::clause_obligations
14: 0x7d62035908bd - rustc_hir_analysis[94092c3b015c4ea3]::check::wfcheck::check_where_clauses
15: 0x7d6203563b9c - rustc_hir_analysis[94092c3b015c4ea3]::check::wfcheck::check_trait
16: 0x7d62007c1787 - rustc_hir_analysis[94092c3b015c4ea3]::check::wfcheck::check_well_formed
17: 0x7d62037cee6b - rustc_query_impl[fa1174e4896087c1]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[fa1174e4896087c1]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3ffb7b0bf421448c]::query::erase::Erased<[u8; 1usize]>>
18: 0x7d62037ce5d1 - rustc_query_system[3cba1a04b5b15b8a]::query::plumbing::try_execute_query::<rustc_query_impl[fa1174e4896087c1]::DynamicConfig<rustc_query_system[3cba1a04b5b15b8a]::query::caches::VecCache<rustc_span[6af75d8d312d6eec]::def_id::LocalDefId, rustc_middle[3ffb7b0bf421448c]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[fa1174e4896087c1]::plumbing::QueryCtxt, false>
19: 0x7d62037ce250 - rustc_query_impl[fa1174e4896087c1]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
20: 0x7d62037cf0ff - rustc_hir_analysis[94092c3b015c4ea3]::check::wfcheck::check_mod_type_wf
21: 0x7d62037cef25 - rustc_query_impl[fa1174e4896087c1]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[fa1174e4896087c1]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3ffb7b0bf421448c]::query::erase::Erased<[u8; 1usize]>>
22: 0x7d6203bb7b7b - rustc_query_system[3cba1a04b5b15b8a]::query::plumbing::try_execute_query::<rustc_query_impl[fa1174e4896087c1]::DynamicConfig<rustc_query_system[3cba1a04b5b15b8a]::query::caches::DefaultCache<rustc_span[6af75d8d312d6eec]::def_id::LocalModDefId, rustc_middle[3ffb7b0bf421448c]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[fa1174e4896087c1]::plumbing::QueryCtxt, false>
23: 0x7d6203bb792d - rustc_query_impl[fa1174e4896087c1]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
24: 0x7d62032340fb - rustc_hir_analysis[94092c3b015c4ea3]::check_crate
25: 0x7d6203230e57 - rustc_interface[b91e754c6773c007]::passes::run_required_analyses
26: 0x7d620391fb1e - rustc_interface[b91e754c6773c007]::passes::analysis
27: 0x7d620391faf1 - rustc_query_impl[fa1174e4896087c1]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[fa1174e4896087c1]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3ffb7b0bf421448c]::query::erase::Erased<[u8; 1usize]>>
28: 0x7d6203d26dae - rustc_query_system[3cba1a04b5b15b8a]::query::plumbing::try_execute_query::<rustc_query_impl[fa1174e4896087c1]::DynamicConfig<rustc_query_system[3cba1a04b5b15b8a]::query::caches::SingleCache<rustc_middle[3ffb7b0bf421448c]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[fa1174e4896087c1]::plumbing::QueryCtxt, false>
29: 0x7d6203d26a8f - rustc_query_impl[fa1174e4896087c1]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
30: 0x7d6203b4edde - rustc_interface[b91e754c6773c007]::interface::run_compiler::<core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>, rustc_driver_impl[8c3cd927c140555]::run_compiler::{closure#0}>::{closure#1}
31: 0x7d6203c32d10 - std[b2c81c0a9485f2e4]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[b91e754c6773c007]::util::run_in_thread_with_globals<rustc_interface[b91e754c6773c007]::util::run_in_thread_pool_with_globals<rustc_interface[b91e754c6773c007]::interface::run_compiler<core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>, rustc_driver_impl[8c3cd927c140555]::run_compiler::{closure#0}>::{closure#1}, core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>>::{closure#0}, core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>>
32: 0x7d6203c333d7 - <<std[b2c81c0a9485f2e4]::thread::Builder>::spawn_unchecked_<rustc_interface[b91e754c6773c007]::util::run_in_thread_with_globals<rustc_interface[b91e754c6773c007]::util::run_in_thread_pool_with_globals<rustc_interface[b91e754c6773c007]::interface::run_compiler<core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>, rustc_driver_impl[8c3cd927c140555]::run_compiler::{closure#0}>::{closure#1}, core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>>::{closure#0}, core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[15204f05ba262a1b]::result::Result<(), rustc_span[6af75d8d312d6eec]::ErrorGuaranteed>>::{closure#1} as core[15204f05ba262a1b]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
33: 0x7d6203c342c1 - std::sys::pal::unix::thread::Thread::new::thread_start::h4694c2beab690665
34: 0x7d62053ab39d - <unknown>
35: 0x7d620543049c - <unknown>
36: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.83.0-nightly (0321e73d1 2024-10-11) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z crate-attr=feature(generic_const_exprs) -Z dump-mir-dir=dir
query stack during panic:
#0 [check_well_formed] checking that `Trait` is well-formed
#1 [check_mod_type_wf] checking that types are well-formed in top-level module
end of query stack
error: aborting due to 3 previous errors; 1 warning emitted
Some errors have detailed explanations: E0106, E0425, E0601.
For more information about an error, try `rustc --explain E0106`.
```
</p>
</details>
<!--
query stack:
`ConstArgHasType(UnevaluatedConst { def: DefId(0:6 ~ mvce[ff9f]::Value::{constant#0}), args: ['^0.Named(DefId(0:7 ~ mvce[ff9f]::Trait::'_), "'_")] }, usize)` has escaping bound vars, so it cannot be wrapped in a dummy binder.
#0 [check_well_formed] checking that `Trait` is well-formed
#1 [check_mod_type_wf] checking that types are well-formed in top-level module
-->
@rustbot label +F-generic_const_exprs
| I-ICE,T-compiler,C-bug,F-generic_const_exprs,S-bug-has-test | low | Critical |
2,580,802,145 | flutter | [Web] canvaskit.js seems to fail to load randomly | ### Steps to reproduce
Unfortunately I don't have a reproduction scenario, since this is feedback I got from some users of my Flutter Web project in production.
It is not something I have seen in Debug either.
### Expected results
The web app should load consistently
### Actual results
In some random cases, the users with the issue can see this error message in the chrome console:
`Cannot use 'import.meta' outside a module` coming from the `canvaskit.js` file
### Code sample
<details open><summary>Code sample</summary>
This is what my startup code in JS looks like, that is referenced in the screenshot as the `flutter_bootstrap.js` file:
```js
function launchFlutter(){
var version = {{flutter_service_worker_version}};
_flutter.loader.load({
config: {
// https://github.com/flutter/flutter/issues/148713#issuecomment-2129976660
canvasKitBaseUrl: "/canvaskit/",
},
serviceWorkerSettings: {
serviceWorkerVersion: version,
},
onEntrypointLoaded: async function(engineInitializer) {
const appRunner = await engineInitializer.initializeEngine({
useColorEmoji: true
});
loading.remove();
await appRunner.runApp();
}
});
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="732" alt="Screenshot 2024-10-11 at 10 36 54" src="https://github.com/user-attachments/assets/42b0e5a1-1562-45e3-adb2-40d1a7f6972e">
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.3, on macOS 14.5 23F79 darwin-arm64, locale en-FR)
โข Flutter version 3.24.3 on channel stable at /Users/adrien.padol/fvm/versions/3.24.3
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
โข Engine revision 36335019a8
โข Dart version 3.5.3
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc3)
โข Android SDK at /Users/adrien.padol/Library/Android/sdk
โข Platform android-34, build-tools 35.0.0-rc3
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2023.3)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[โ] VS Code (version 1.93.1)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.98.0
[โ] Connected device (4 available)
โข iPhone de Adrien (mobile) โข 00008101-0015250C3A82001E โข ios โข iOS 18.0.1 22A3370
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.5 23F79 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.5 23F79 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 129.0.6668.100
! Error: Browsing on the local area network for iPhone de Remi. Ensure the device is unlocked and attached with a cable or associated
with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for iPad de Adrien. Ensure the device is unlocked and attached with a cable or associated
with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| platform-web,a: production,team-web | low | Critical |
2,580,812,561 | ui | Wheel Time Picker | ### Feature description
something similar to this:

### Affected component/components
DatePicker
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,580,817,264 | tauri | [feat] tauri-driver: Append platform-specific file extension to `tauri:options.application` | ### Describe the problem
[tauri-driver](https://tauri.app/develop/tests/webdriver/) lets you specify an application binary path in a platform-neutral manner. It does so by automatically [rewriting](https://github.com/tauri-apps/tauri/blob/1d3f51e100b0efc0e4ce164796460e9acdc458da/crates/tauri-driver/src/server.rs#L33-L55) a neutral WebDriver capability (`tauri:options.application`) to an engine-specific one (e.g., `webkitgtk:browserOptions.binary`). However, this does not completely abstract away platform differences, because you still need to include a file extension (`.exe`) on Windows and not on other platforms.
### Describe the solution you'd like
tauri-driver could automatically append a platform-specific executable file extension.
### Alternatives considered
Manually editing the path is possible but has the following downsides:
- If you forget to do it, diagnosing the problem can be hard (some test tools don't print the error, so you need `tcpdump` to check the error message)
- You need to be careful not to accidentally commit the change to a version control system
Some test tools (e.g., Selenium, WebdriverIO) allow you to programmatically generate WebDriver capabilities; this is not possible with others (e.g., wasm-bindgen-test-runner).
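The rewrite could be as small as normalizing the neutral path before it is handed to the engine-specific capability. A minimal sketch of the idea (in Python for illustration — tauri-driver itself is Rust, and the helper name is hypothetical):

```python
def with_platform_extension(path: str, platform: str) -> str:
    # Hypothetical helper: append ".exe" on Windows when the neutral
    # tauri:options.application path does not already carry it.
    if platform.startswith("win") and not path.lower().endswith(".exe"):
        return path + ".exe"
    return path
```

With this in place, a capability like `"target/release/app"` would resolve to `target/release/app.exe` on Windows and stay unchanged elsewhere.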
### Additional context
_No response_ | type: feature request,scope: webdriver | low | Critical |
2,580,910,440 | flutter | [shared_preferences] On Web, exception on initialization when a stored value is not valid JSON | ### Steps to reproduce
Create a web project.
In my case, secure_storage is used to store some value, which is saved as plain text (not JSON).
### Expected results
The value that cannot be decoded should be ignored.
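As an illustrative sketch of what "ignored" could look like (Python rather than the plugin's Dart, and `read_all` is a hypothetical name), the read could decode each entry defensively and skip the ones that are not valid JSON:

```python
import json

def read_all(raw_store: dict) -> dict:
    # Decode each stored value; silently skip entries that are not
    # valid JSON (e.g. plain text written by another storage library).
    prefs = {}
    for key, value in raw_store.items():
        try:
            prefs[key] = json.loads(value)
        except ValueError:  # json.JSONDecodeError is a ValueError subclass
            continue
    return prefs
```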
### Actual results
When trying to initialize from this local storage:

I get this exception:
```
[2024-10-11 11:01:16.394 | Catcher 2 | INFO] ---------- ERROR ----------
[2024-10-11 11:01:16.396 | Catcher 2 | INFO] FormatException: SyntaxError: Unexpected token 'k', "k6IlhTmQIL"... is not valid JSON
[2024-10-11 11:01:16.396 | Catcher 2 | INFO]
[2024-10-11 11:01:16.397 | Catcher 2 | INFO] ------- STACK TRACE -------
[2024-10-11 11:01:16.737 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 296:3 throw_
[2024-10-11 11:01:16.737 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/convert_patch.dart 36:5 _parseJson
[2024-10-11 11:01:16.738 | Catcher 2 | INFO] dart-sdk/lib/convert/json.dart 610:36 convert
[2024-10-11 11:01:16.738 | Catcher 2 | INFO] dart-sdk/lib/convert/json.dart 216:41 decode
[2024-10-11 11:01:16.738 | Catcher 2 | INFO] packages/shared_preferences_web/shared_preferences_web.dart 262:37 _decodeValue
[2024-10-11 11:01:16.739 | Catcher 2 | INFO] packages/shared_preferences_web/shared_preferences_web.dart 135:22 _readAllFromLocalStorage
[2024-10-11 11:01:16.739 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 84:54 runBody
[2024-10-11 11:01:16.739 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 127:5 _async
[2024-10-11 11:01:16.739 | Catcher 2 | INFO] packages/shared_preferences_web/shared_preferences_web.dart 129:55 [_readAllFromLocalStorage]
[2024-10-11 11:01:16.740 | Catcher 2 | INFO] packages/shared_preferences_web/shared_preferences_web.dart 126:12 getPreferences
[2024-10-11 11:01:16.740 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 84:54 runBody
[2024-10-11 11:01:16.740 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 127:5 _async
[2024-10-11 11:01:16.740 | Catcher 2 | INFO] packages/shared_preferences_web/shared_preferences_web.dart 122:45 getPreferences
[2024-10-11 11:01:16.741 | Catcher 2 | INFO] packages/shared_preferences/src/shared_preferences_async.dart 55:22 getAll
[2024-10-11 11:01:16.741 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 84:54 runBody
[2024-10-11 11:01:16.741 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 127:5 _async
[2024-10-11 11:01:16.742 | Catcher 2 | INFO] packages/shared_preferences/src/shared_preferences_async.dart 52:38 getAll
[2024-10-11 11:01:16.742 | Catcher 2 | INFO] packages/shared_preferences/src/shared_preferences_async.dart 225:32 reloadCache
[2024-10-11 11:01:16.742 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 84:54 runBody
[2024-10-11 11:01:16.743 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 127:5 _async
[2024-10-11 11:01:16.743 | Catcher 2 | INFO] packages/shared_preferences/src/shared_preferences_async.dart 222:27 reloadCache
[2024-10-11 11:01:16.743 | Catcher 2 | INFO] packages/shared_preferences/src/shared_preferences_async.dart 200:22 create
[2024-10-11 11:01:16.744 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 84:54 runBody
[2024-10-11 11:01:16.744 | Catcher 2 | INFO] dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 127:5 _async
[2024-10-11 11:01:16.744 | Catcher 2 | INFO] packages/shared_preferences/src/shared_preferences_async.dart 187:51 create
```
### Code sample
<details open><summary>Code sample</summary>
```dart
final sharedPreferences = await SharedPreferencesWithCache.create(
cacheOptions: const SharedPreferencesWithCacheOptions());
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on Microsoft Windows [version 10.0.22631.4169], locale fr-FR)
• Flutter version 3.24.3 on channel stable at c:\Vrac\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\gdurand\AppData\Local\Android\Sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.5)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35327.3
• Windows 10 SDK version 10.0.22621.0
[✓] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
[✓] IntelliJ IDEA Community Edition (version 2020.3)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2020.1.1
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.94.1)
• VS Code at C:\Users\gdurand\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.98.0
[✓] Connected device (4 available)
• SM G930U (mobile) • b86bd847 • android-arm64 • Android 8.0.0 (API 26)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [version 10.0.22631.4169]
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.90
• Edge (web) • edge • web-javascript • Microsoft Edge 129.0.2792.65
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-web,p: shared_preferences,package,P2,team-web,triaged-web | low | Critical |
2,580,951,728 | tauri | [bug] Running "api" example on macos fails | ### Describe the bug
git clone of version 2.0.2 of Tauri.
```
cd ./examples/api
pnpm tauri dev
```
Error: Cannot find module '@tauri-apps/cli-darwin-arm64'
Require stack:
- /Users/xxxx/Downloads/tauri-tauri-v2.0.2/packages/cli/index.js
- /Users/xxxx/Downloads/tauri-tauri-v2.0.2/packages/cli/main.js
- /Users/xxxx/Downloads/tauri-tauri-v2.0.2/packages/cli/tauri.js
at Module._resolveFilename (node:internal/modules/cjs/loader:1248:15)
at Module._load (node:internal/modules/cjs/loader:1074:27)
at TracingChannel.traceSync (node:diagnostics_channel:315:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:217:24)
at Module.require (node:internal/modules/cjs/loader:1339:12)
at require (node:internal/modules/helpers:126:16)
at Object.<anonymous> (/Users/xxxx/Downloads/tauri-tauri-v2.0.2/packages/cli/index.js:145:29)
at Module._compile (node:internal/modules/cjs/loader:1546:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1691:10)
at Module.load (node:internal/modules/cjs/loader:1317:32) {
code: 'MODULE_NOT_FOUND',
### Reproduction
As above.
### Expected behavior
The readme.md file mentions running `bash .scripts/setup.sh`, but the setup.sh file no longer exists.
### Full `tauri info` output
```text
[✓] Environment
- OS: Mac OS 14.6.1 arm64 (X64)
✓ Xcode Command Line Tools: installed
✓ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✓ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✓ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✓ Rust toolchain: stable-aarch64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 22.7.0
- pnpm: 9.12.1
- yarn: 1.22.22
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.2
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.44.1
- tao 🦀: 0.30.2
- tauri-cli 🦀: 2.0.2
- @tauri-apps/api: 2.0.2
- @tauri-apps/cli: 2.0.2
[-] Plugins
- tauri-plugin-log 🦀: 2.0.0-rc.2
- @tauri-apps/plugin-log: not installed!
[-] App
- build-type: bundle
- CSP: img-src 'self' asset: http://asset.localhost blob: data:; style-src 'unsafe-inline' 'self' https://fonts.googleapis.com; connect-src ipc: http://ipc.localhost; default-src 'self' customprotocol: asset:; font-src https://fonts.gstatic.com
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,type: documentation,status: needs triage | low | Critical |
2,580,988,887 | deno | better handling of package.json in a workspace member | Currently we only enable BYONM if the package.json is at the root. If the package.json is in a workspace member, we silently stop using BYONM and start using the global cache. This leads to confusing situations like #26125. | feat,cli | low | Minor |
2,581,037,816 | neovim | API: nvim_input returns a "ticket" which can be used to confirm when the input is processed | ### Problem
No reliable way for clients to know when a specific `nvim_input` input has been processed.
### Expected behavior
Change `nvim_input` to return a unique id. Pass that id to `vim.on_key` callback, which can be used to confirm that the key was "processed".
Open questions:
- is `on_key` the right place to signal "the input was processed", or is there a later stage in the lifecycle (e.g. the execution of the mapping/normal command/etc)?
Related/duplicate: https://github.com/neovim/neovim/issues/10826 | enhancement,api,input,event-loop,mappings,normal-mode | low | Minor |
2,581,094,568 | langchain | [Azure CosmosDB NoSQL] similarity_search_with_score function returning CosmosHttpResponseError "bad request" | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Sample for reproducing the issue
```python
import os
from dotenv import load_dotenv
from azure.cosmos import CosmosClient, PartitionKey
from langchain_community.vectorstores.azure_cosmos_db_no_sql import (
AzureCosmosDBNoSqlVectorSearch,
)
from langchain_openai import OpenAIEmbeddings
# from azure_cosmos_db_no_sql import AzureCosmosDBNoSqlVectorSearch # workarround with modifications to code
_ = load_dotenv()
HOST = os.environ['AZURE_COSMOS_URL']
KEY = os.environ['AZURE_COSMOS_KEY']
cosmos_client = CosmosClient(HOST, KEY)
database_name = "db"
container_name = "container1"
partition_key = PartitionKey(path="/id")
cosmos_container_properties = {"partition_key": partition_key}
openai_embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
vectorstore = AzureCosmosDBNoSqlVectorSearch(
cosmos_client=cosmos_client,
embedding=openai_embeddings,
vector_embedding_policy={
"vectorEmbeddings": [
{
"path": "/embedding",
"dataType": "float32",
"distanceFunction": "cosine",
"dimensions": 3072,
}
]
},
indexing_policy={
"indexingMode": "consistent",
"includedPaths": [{"path": "/*"}],
"excludedPaths": [{"path": '/"_etag"/?'}],
"vectorIndexes": [{"path": "/embedding", "type": "quantizedFlat"}],
},
cosmos_container_properties=cosmos_container_properties,
cosmos_database_properties={'id': database_name},
database_name=database_name,
container_name=container_name,
create_container=False,
)
query = "What were the compute requirements for training GPT 4"
results = vectorstore.similarity_search_with_score(
query=query,
)
````
### Error Message and Stack Trace (if applicable)
{
"name": "CosmosHttpResponseError",
"message": "(BadRequest) One of the input values is invalid.\r
ActivityId: 73e70e21-b8f3-4eb7-bb26-a85e03ae05fc, Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0
Code: BadRequest
Message: One of the input values is invalid.\r
ActivityId: 73e70e21-b8f3-4eb7-bb26-a85e03ae05fc, Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0",
"stack": "---------------------------------------------------------------------------
CosmosHttpResponseError Traceback (most recent call last)
Cell In[7], line 3
1 query = \"What were the compute requirements for training GPT 4\"
----> 3 results = vectorstore.similarity_search_with_score(
4 query=query,
5 k=5,
6 )
8 # Display results
9 for result in results:
File ~/git/demos/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:322, in AzureCosmosDBNoSqlVectorSearch.similarity_search_with_score(self, query, k, pre_filter, with_embedding)
314 def similarity_search_with_score(
315 self,
316 query: str,
(...)
319 with_embedding: bool = False,
320 ) -> List[Tuple[Document, float]]:
321 embeddings = self._embedding.embed_query(query)
--> 322 docs_and_scores = self._similarity_search_with_score(
323 embeddings=embeddings,
324 k=k,
325 pre_filter=pre_filter,
326 with_embedding=with_embedding,
327 )
328 return docs_and_scores
File ~/git/demos/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:298, in AzureCosmosDBNoSqlVectorSearch._similarity_search_with_score(self, embeddings, k, pre_filter, with_embedding)
290 parameters = [
291 {\"name\": \"@limit\", \"value\": k},
292 {\"name\": \"@embeddingKey\", \"value\": self._embedding_key},
293 {\"name\": \"@embeddings\", \"value\": embeddings},
294 ]
296 docs_and_scores = []
--> 298 items = list(
299 self._container.query_items(
300 query=query, parameters=parameters, enable_cross_partition_query=True
301 )
302 )
303 for item in items:
304 text = item[\"text\"]
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/core/paging.py:123, in ItemPaged.__next__(self)
121 if self._page_iterator is None:
122 self._page_iterator = itertools.chain.from_iterable(self.by_page())
--> 123 return next(self._page_iterator)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/core/paging.py:75, in PageIterator.__next__(self)
73 raise StopIteration(\"End of paging\")
74 try:
---> 75 self._response = self._get_next(self.continuation_token)
76 except AzureError as error:
77 if not error.continuation_token:
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_query_iterable.py:99, in QueryIterable._fetch_next(self, *args)
89 def _fetch_next(self, *args): # pylint: disable=unused-argument
90 \"\"\"Return a block of results with respecting retry policy.
91
92 This method only exists for backward compatibility reasons. (Because
(...)
97 :rtype: list
98 \"\"\"
---> 99 block = self._ex_context.fetch_next_block()
100 if not block:
101 raise StopIteration
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/execution_dispatcher.py:110, in _ProxyQueryExecutionContext.fetch_next_block(self)
108 self._execution_context = self._create_pipelined_execution_context(query_execution_info)
109 else:
--> 110 raise e
112 return self._execution_context.fetch_next_block()
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/execution_dispatcher.py:102, in _ProxyQueryExecutionContext.fetch_next_block(self)
93 \"\"\"Returns a block of results.
94
95 This method only exists for backward compatibility reasons. (Because
(...)
99 :rtype: list
100 \"\"\"
101 try:
--> 102 return self._execution_context.fetch_next_block()
103 except CosmosHttpResponseError as e:
104 if _is_partitioned_execution_info(e):
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/base_execution_context.py:79, in _QueryExecutionContextBase.fetch_next_block(self)
70 def fetch_next_block(self):
71 \"\"\"Returns a block of results with respecting retry policy.
72
73 This method only exists for backward compatibility reasons. (Because
(...)
77 :rtype: list
78 \"\"\"
---> 79 self._ensure()
80 res = list(self._buffer)
81 self._buffer.clear()
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/base_execution_context.py:64, in _QueryExecutionContextBase._ensure(self)
61 return
63 if not self._buffer:
---> 64 results = self._fetch_next_block()
65 self._buffer.extend(results)
67 if not self._buffer:
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/base_execution_context.py:175, in _DefaultQueryExecutionContext._fetch_next_block(self)
173 def _fetch_next_block(self): # pylint: disable=inconsistent-return-statements
174 while super(_DefaultQueryExecutionContext, self)._has_more_pages() and not self._buffer:
--> 175 return self._fetch_items_helper_with_retries(self._fetch_function)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/base_execution_context.py:147, in _QueryExecutionContextBase._fetch_items_helper_with_retries(self, fetch_function)
144 def callback():
145 return self._fetch_items_helper_no_retries(fetch_function)
--> 147 return _retry_utility.Execute(self._client, self._client._global_endpoint_manager, callback)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_retry_utility.py:87, in Execute(client, global_endpoint_manager, function, *args, **kwargs)
85 result = ExecuteFunction(function, global_endpoint_manager, *args, **kwargs)
86 else:
---> 87 result = ExecuteFunction(function, *args, **kwargs)
88 if not client.last_response_headers:
89 client.last_response_headers = {}
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_retry_utility.py:149, in ExecuteFunction(function, *args, **kwargs)
142 def ExecuteFunction(function, *args, **kwargs):
143 \"\"\"Stub method so that it can be used for mocking purposes as well.
144 :param Callable function: the function to execute.
145 :param list args: the explicit arguments for the function.
146 :returns: the result of executing the function with the passed in arguments
147 :rtype: tuple(dict, dict)
148 \"\"\"
--> 149 return function(*args, **kwargs)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/base_execution_context.py:145, in _QueryExecutionContextBase._fetch_items_helper_with_retries.<locals>.callback()
144 def callback():
--> 145 return self._fetch_items_helper_no_retries(fetch_function)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_execution_context/base_execution_context.py:126, in _QueryExecutionContextBase._fetch_items_helper_no_retries(self, fetch_function)
123 new_options[\"continuation\"] = self._continuation
125 response_headers = {}
--> 126 (fetched_items, response_headers) = fetch_function(new_options)
128 continuation_key = http_constants.HttpHeaders.Continuation
129 # Use Etag as continuation token for change feed queries.
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_cosmos_client_connection.py:1065, in CosmosClientConnection.QueryItems.<locals>.fetch_fn(options)
1064 def fetch_fn(options: Mapping[str, Any]) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
-> 1065 return self.__QueryFeed(
1066 path,
1067 \"docs\",
1068 collection_id,
1069 lambda r: r[\"Documents\"],
1070 lambda _, b: b,
1071 query,
1072 options,
1073 response_hook=response_hook,
1074 **kwargs)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_cosmos_client_connection.py:3092, in CosmosClientConnection.__QueryFeed(self, path, resource_type, resource_id, result_fn, create_fn, query, options, partition_key_range_id, response_hook, is_query_plan, **kwargs)
3089 if results:
3090 return __GetBodiesFromQueryResult(results), last_response_headers
-> 3092 result, last_response_headers = self.__Post(path, request_params, query, req_headers, **kwargs)
3093 if last_response_headers.get(http_constants.HttpHeaders.IndexUtilization) is not None:
3094 INDEX_METRICS_HEADER = http_constants.HttpHeaders.IndexUtilization
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_cosmos_client_connection.py:2811, in CosmosClientConnection.__Post(self, path, request_params, body, req_headers, **kwargs)
2801 \"\"\"Azure Cosmos 'POST' http request.
2802
2803 :param str path: the url to be used for the request.
(...)
2808 :rtype: tuple of (dict, dict)
2809 \"\"\"
2810 request = self.pipeline_client.post(url=path, headers=req_headers)
-> 2811 return synchronized_request.SynchronizedRequest(
2812 client=self,
2813 request_params=request_params,
2814 global_endpoint_manager=self._global_endpoint_manager,
2815 connection_policy=self.connection_policy,
2816 pipeline_client=self.pipeline_client,
2817 request=request,
2818 request_data=body,
2819 **kwargs
2820 )
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_synchronized_request.py:204, in SynchronizedRequest(client, request_params, global_endpoint_manager, connection_policy, pipeline_client, request, request_data, **kwargs)
201 request.headers[http_constants.HttpHeaders.ContentLength] = 0
203 # Pass _Request function with its parameters to retry_utility's Execute method that wraps the call with retries
--> 204 return _retry_utility.Execute(
205 client,
206 global_endpoint_manager,
207 _Request,
208 request_params,
209 connection_policy,
210 pipeline_client,
211 request,
212 **kwargs
213 )
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_retry_utility.py:85, in Execute(client, global_endpoint_manager, function, *args, **kwargs)
83 try:
84 if args:
---> 85 result = ExecuteFunction(function, global_endpoint_manager, *args, **kwargs)
86 else:
87 result = ExecuteFunction(function, *args, **kwargs)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_retry_utility.py:149, in ExecuteFunction(function, *args, **kwargs)
142 def ExecuteFunction(function, *args, **kwargs):
143 \"\"\"Stub method so that it can be used for mocking purposes as well.
144 :param Callable function: the function to execute.
145 :param list args: the explicit arguments for the function.
146 :returns: the result of executing the function with the passed in arguments
147 :rtype: tuple(dict, dict)
148 \"\"\"
--> 149 return function(*args, **kwargs)
File ~/git/demos/.venv/lib/python3.10/site-packages/azure/cosmos/_synchronized_request.py:155, in _Request(global_endpoint_manager, request_params, connection_policy, pipeline_client, request, **kwargs)
153 raise exceptions.CosmosAccessConditionFailedError(message=data, response=response)
154 if response.status_code >= 400:
--> 155 raise exceptions.CosmosHttpResponseError(message=data, response=response)
157 result = None
158 if data:
CosmosHttpResponseError: (BadRequest) One of the input values is invalid.\r
ActivityId: 73e70e21-b8f3-4eb7-bb26-a85e03ae05fc, Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0
Code: BadRequest
Message: One of the input values is invalid.\r
ActivityId: 73e70e21-b8f3-4eb7-bb26-a85e03ae05fc, Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0"
}
### Description
I'm using langchain_community.vectorstores.azure_cosmos_db_no_sql to set up a document retriever and do similarity search. However, in my current case, every query fails when the store is set up as shown in the code.
The container has been created using the .from_document() function. When using the returned object from this function, the similarity search works as expected. I didn't yet understand why this was the case. Anyway, I must be able to load and then access the vector store individually in my use case.
As of my current understanding, the issue lies in the query-building section of the code:
Original:
```python
# ...
# Line 276
query += (
"c.id, c.{}, c.text, c.metadata, "
"VectorDistance(c.@embeddingKey, @embeddings) AS SimilarityScore FROM c"
)
# ...
````
Modified
```python
query += (
"c.id, c.{}, c.text, c.metadata, "
"VectorDistance(c[@embeddingKey], @embeddings) AS SimilarityScore FROM c"
)
#...
# Check and remove any remaining empty placeholder (if there are any)
if "c.{}" in query:
query = query.replace("c.{},", "") # Clean up empty placeholders
# ...
````
Note the change in syntax for the embeddingKey parameter. As of my research, Cosmos DB's SQL query syntax doesn't support parameterizing field names, like 'c.@embeddingKey'. It should work with 'c[@embeddingKey]'. (https://stackoverflow.com/questions/56121759/parameterized-input-with-document-fields-with-cosmos-db, sadly the links to MS Docs are broken).
Thereafter, the 'c.{}' placeholder in the query (whose purpose I honestly don't fully understand) causes another syntax issue. After reviewing the query while debugging, I found it was never replaced in my case, so I added a silly line to strip it if it is left empty. Probably not the best solution.
I tried with a modified version of azure_cosmos_db_no_sql.py in my app, and then the similarity search as well as the retriever started working as expected.
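To show the two changes in isolation, here is a standalone sketch of the query-building logic (not the actual `azure_cosmos_db_no_sql.py` code; the function name and projected fields are illustrative):

```python
def build_similarity_query(projection_fields=("id", "text", "metadata")):
    # Bracket syntax c[@embeddingKey] lets Cosmos DB parameterize the
    # field name; dot syntax c.@embeddingKey is rejected as BadRequest.
    projection = ", ".join(f"c.{f}" for f in projection_fields)
    query = (
        f"SELECT TOP @limit {projection}, "
        "VectorDistance(c[@embeddingKey], @embeddings) AS SimilarityScore "
        "FROM c ORDER BY VectorDistance(c[@embeddingKey], @embeddings)"
    )
    # Strip any leftover empty "c.{}" placeholder instead of sending
    # invalid SQL to the server.
    return query.replace("c.{},", "")
```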
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.132
> langchain_experimental: 0.3.2
> langchain_openai: 0.2.2
> langchain_pinecone: 0.2.0
> langchain_text_splitters: 0.3.0
> langchainhub: 0.1.21
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.51.1
> orjson: 3.10.7
> packaging: 23.2
> pinecone-client: 5.0.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> types-requests: 2.32.0.20240914
> typing-extensions: 4.12.2 | โฑญ: vector store,investigate | low | Critical |
2,581,118,661 | Python | Adding Grover's Search Algorithm Using Qiskit | ### Feature description
Grover's algorithm is a quantum algorithm that solves the **unstructured search problem**. In an unstructured search problem, we are given a set of N elements and we want to find a single marked element. A **classical computer** would need to search through all N elements in order to find the marked element, which would take time **O(N)**. **Grover's algorithm**, on the other hand, can find the marked element in time **O(√N)**.
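Before wiring this up as a Qiskit circuit, the amplitude-amplification core can be sanity-checked with a tiny classical state-vector simulation (plain Python, no Qiskit dependency; the function name is my own):

```python
import math

def grover_search(n, marked):
    # State vector of n real amplitudes, starting in uniform superposition.
    amp = [1.0 / math.sqrt(n)] * n
    # Optimal number of Grover iterations is about (pi/4) * sqrt(n).
    for _ in range(int(math.pi / 4 * math.sqrt(n))):
        amp[marked] = -amp[marked]       # oracle: phase-flip the marked item
        mean = sum(amp) / n              # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp

probs = [a * a for a in grover_search(8, marked=5)]
# probs[5] ≈ 0.945 after int(pi/4 * sqrt(8)) = 2 iterations
```

In a Qiskit version, the oracle would be a phase-flip circuit on the marked bitstring and the diffusion step the standard H–X–multi-controlled-Z–X–H sandwich.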
Please assign this task to me under **Hacktoberfest 2024**. | enhancement | medium | Minor |
2,581,126,314 | pytorch | torch.arange bf16 results are not accurate | ### ๐ Describe the bug
The output of `torch.arange` in bf16 is not correct. Use the code below to reproduce:
```
torch.arange(241,273, dtype=torch.bfloat16)
```
Output:
```
tensor([241., 242., 243., 244., 245., 246., 247., 248., 249., 250., 251., 252., 253., 254., 255., 256., 256., 256., 258., 260., 260., 260., 262., 264., 264., 264., 266., 268., 268., 268., 270., 272.], dtype=torch.bfloat16)
```
Here, after 255 the results are shown as [256, 256, 256], which is wrong: in f32 the output is [256, 257, 258], and since 257 is not representable in bf16 the rounded result should be [256, 256, 258].
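The expected rounding can be reproduced without PyTorch: bfloat16 is the upper 16 bits of the float32 pattern, rounded to nearest-even. A pure-Python sketch (the helper name is mine):

```python
import struct

def to_bfloat16(x: float) -> float:
    # Reinterpret as float32 bits, then round-to-nearest-even into
    # the upper 16 bits (the bfloat16 representation).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding = 0x7FFF + ((bits >> 16) & 1)
    bf_bits = ((bits + rounding) >> 16) << 16
    return struct.unpack("<f", struct.pack("<I", bf_bits))[0]

# 257 has no bf16 representation and rounds down to even -> 256.0,
# while 258 = (1 + 2**-7) * 2**8 fits in the 8-bit significand -> 258.0.
```

This gives the tail [..., 256, 256, 258, 260, 260, ...], matching the claim that [256, 256, 258] is the correct bf16 result.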
### Versions
torch==2.4.1
cc @svekars @brycebortree @sekyondaMeta @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @albanD | module: docs,low priority,module: cpu,triaged,module: bfloat16,module: python frontend | low | Critical |
2,581,156,731 | deno | Official Cloud Native Buildpack for Deno | Please consider creating official Deno [Cloud Native Buildpack](https://buildpacks.io/), this would make deployment to Heroku, Digital Ocean Apps Platform and Google Cloud a breeze. | suggestion | low | Minor |
2,581,194,522 | flutter | Web: Broken Scrolling behavior on Firefox | ### Steps to reproduce
1. Open the app in Firefox.
2. Start scrolling by holding down the left mouse button.
3. Scroll quickly outside of the browser window and release the left mouse button.
### Expected results
The scroll should snap back, and items inside it should function normally.
### Actual results
The scrolling breaks and the ScrollView stops functioning as expected: it does not snap back, the behavior changes, and you can no longer trigger actions such as a button's `onPressed` inside the scrollable widget.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:ui';
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
scrollBehavior: const MaterialScrollBehavior().copyWith(
dragDevices: {
PointerDeviceKind.mouse,
PointerDeviceKind.touch,
PointerDeviceKind.stylus,
PointerDeviceKind.unknown,
},
),
home: const ImageScreen(),
);
}
}
class ImageScreen extends StatefulWidget {
const ImageScreen({super.key});
@override
State<ImageScreen> createState() => _ImageScreenState();
}
class _ImageScreenState extends State<ImageScreen> {
final ScrollController _scrollController = ScrollController();
@override
void initState() {
super.initState();
}
@override
void dispose() {
_scrollController.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('SingleChildScrollView Maus-Scrolling'),
),
body: SingleChildScrollView(
controller: _scrollController,
physics: const AlwaysScrollableScrollPhysics(),
child: Column(
children: List.generate(
30,
(index) {
return Container(
height: 100,
margin: const EdgeInsets.all(8.0),
color: Colors.blueAccent,
child: Center(
child: Text(
'Item ${index + 1}',
style: const TextStyle(color: Colors.white, fontSize: 20),
),
),
);
},
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/c6700a3a-8f86-4892-b2e9-964dc07fba7b
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.4 23E214 darwin-x64, locale en-GB)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.2)
[✓] VS Code (version 1.86.1)
[✓] Connected device (1 available)
[✓] Network resources
• No issues found!
```
</details>
| framework,f: scrolling,f: gestures,has reproducible steps,browser: firefox,P3,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Critical |
2,581,234,796 | TypeScript | TS 5.6 `files` missing from `--showConfig` (with absolute / ``${configDir}`` `include` paths) | ### ๐ Search Terms
"--showConfig", "include", "configDir", "files", "extends", "absolute"
### ๐ Version & Regression Information
- Tested versions:
- `3.2.1` (`--showConfig` was introduced here)
- `3.2.4`
- `5.5.2` (`${configDir}` was introduced here)
- `latest` => `5.6.3`
- `next` => `5.7.0-dev.20241011`
- This is the behaviour in every version I tried, and I reviewed the FAQ for entries about "configDir", "showConfig", "absolute".
- I was unable to test this on prior versions because `--showConfig` cli flag was introduced in TS 3.2 and `configDir` template variable was introduced in TS 5.5.
### ⏯ Playground Link
_No response_
### 💻 Code
```jsonc
// tsconfig-rel.json
{
//...
"include": ["src/**/*"]
}
```
```jsonc
// tsconfig-abs.json
{
//...
"include": ["/path/to/src/**/*"]
}
```
```jsonc
// tsconfig-cnf.json
{
//...
"include": ["${configDir}/src/**/*"]
}
```
```ts
// src/index.ts
console.log("dummy");
```
### 🙁 Actual behavior
❌ **Printing config** yields different results.
```sh
npx tsc --project tsconfig-rel.json --showConfig
npx tsc --project tsconfig-abs.json --showConfig
npx tsc --project tsconfig-cnf.json --showConfig
```
- For absolute include paths the **`files` property is missing**.
- `${configDir}` is resolved to an absolute path and behaves / fails the same way as absolute paths.
```diff
{
"compilerOptions": {...},
- "files": [
- "./src/index.ts"
- ],
"include": [
- "src/**/*"
+ "/path/to/src/**/*"
],
"exclude": [...]
}
```
✅ **Listing files** yields the same output.
```sh
npx tsc --project tsconfig-rel.json --listFiles --noEmit
npx tsc --project tsconfig-abs.json --listFiles --noEmit
npx tsc --project tsconfig-cnf.json --listFiles --noEmit
```
List of files printed to stdout, containing
- `/path/to/`**`src/index.ts`**
- `/path/to/`**`node_modules/**/*.d.ts`** entries from npm dependency type declarations.
- `/path/to/`**`node_modules/typescript/lib/lib.decorators.d.ts`** (only in TS 5 of course)
- `/path/to/`**`node_modules/typescript/lib/lib.decorators.legacy.d.ts`** (only in TS 5 of course)
✅ **Building** yields the same `index.js` file content for all configs.
```sh
npx tsc --project tsconfig-rel.json
npx tsc --project tsconfig-abs.json
npx tsc --project tsconfig-cnf.json
```
### 🙂 Expected behavior
`--showConfig` should show the same `files` for configs that build the same files and show the same listing for `--listFiles`.
### Additional information about the issue
`includeRe` in [`matchesSpecs`](https://github.com/microsoft/TypeScript/blob/a53c37d59aa0c20f566dec7e5498f05afe45dc6b/src/compiler/commandLineParser.ts#L2656) seems to be calculated differently for absolute / relative paths, resulting in all files being filtered out in [`convertToTSConfig`](https://github.com/microsoft/TypeScript/blob/a53c37d59aa0c20f566dec7e5498f05afe45dc6b/src/compiler/commandLineParser.ts#L2588C17-L2588C34).
I can't tell, though, which behaviour is actually expected (keep the files, or filter them out?). | Possible Improvement | low | Minor |
2,581,249,517 | PowerToys | Explorer preview integration not working properly after upgrading Win10 to Win11 | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
File Explorer: Preview Pane
### Steps to reproduce
I had issues after upgrading to Win11. PDF, TXT, and SVG previews were not working anymore.
Here is what I had to do to fix it:
https://www.elevenforum.com/t/windows-11-pro-no-longer-shows-preview-of-pdf-files.19042/post-511236
In short: I had to DISable PDF and Source code file previews in Power Toys.
The PDF and TXT issues might have to do with conflicting preview handlers (Acrobat handler for PDF, Windows built-in handler for TXT).
Weird though that I had to DISable SVG previews and ENable SVG thumbnail previews for them to work again in the preview pane.
### ✔️ Expected Behavior
I was expecting that nothing would change due to the upgrade to Win11, as I had not installed any programs or apps afterwards.
### ❌ Actual Behavior
Previews for PDF, TXT and SVG had stopped working in the preview pane.
### Other Software
Likely Adobe Acrobat PDF preview handler, Win11 built-in TXT preview handler (if it exists) | Issue-Bug,Needs-Triage | low | Minor |
2,581,250,670 | rust | Test LLVM update in CI | Changes to bootstrap break LLVM updates all the time. One of the CI jobs should perform a dummy update to the LLVM submodule and check that at least the bootstrap tests still pass afterwards. | T-bootstrap,T-infra,C-bug | low | Minor |
2,581,271,735 | tensorflow | Gradients of tf.linalg.expm not supported with JIT compilation | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
tested 2.17 and 2.10, both have the issue
### Custom code
Yes
### OS platform and distribution
Ubuntu
### Mobile device
_No response_
### Python version
tested 3.9 and 3.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Gradients of `tf.linalg.expm` can not be computed with JIT compilation.
This is an issue because TF 2.17 seems to have activated JIT compilation for compiled models by default, whereas earlier versions did not, breaking existing code.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
A = tf.Variable([[.4, 1.5], [.6, .1]], dtype=tf.float32)
@tf.function(jit_compile=True)  # set jit_compile=False to make it work
def f(A):
with tf.GradientTape() as tape:
B = tf.linalg.expm(A)
return tape.gradient(B, A)
f(A)
```
### Relevant log output
```shell
2024-10-11 11:17:27.281304: W tensorflow/core/framework/op_kernel.cc:1840] OP_REQUIRES failed at xla_ops.cc:577 : INVALID_ARGUMENT: XLA compilation requires a fixed tensor list size. Set the max number of elements. This could also happen if you're using a TensorArray in a while loop that does not have its maximum_iteration set, you can fix this by setting maximum_iteration to a suitable value.
Stack trace for op definition:
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel_launcher.py", line 18, in <module>
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/traitlets/config/application.py", line 1075, in launch_instance
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel/kernelapp.py", line 739, in start
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/tornado/platform/asyncio.py", line 205, in start
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/asyncio/base_events.py", line 641, in run_forever
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/asyncio/base_events.py", line 1986, in _run_once
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/asyncio/events.py", line 88, in _run
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel/kernelbase.py", line 545, in dispatch_queue
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel/kernelbase.py", line 534, in process_one
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel/kernelbase.py", line 437, in dispatch_shell
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel/ipkernel.py", line 362, in execute_request
File "/home/beckerf/mambaforge/envs/learnMSAdev2/lib/python3.12/site-packages/ipykernel/kernelbase.py",
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,581,309,237 | svelte | `#each` may not update when passed `$derived` containing `$store` | ### Describe the bug
When a `$derived` that subscribes to a `$store` is passed to an `#each`, the contents of the each block may not properly update. That's because our static analysis (which was added in #12808) doesn't recognize that the array is coarse-grained.
### Reproduction
[playground link](https://svelte-5-preview.vercel.app/#H4sIAAAAAAAAE21STY-bMBD9KyM30oIWkW2OCCL10ENO_QEhB2IPjbtgI3vIboX477UNBLa7QoBnPO_Nx5uB1bJBy7LzwFTVIsvYj65jCaO_nTfsHRtCZ1vdG-49ueVGdnQsVUmy7bQhGKqmORG2NgGpuD-NUBvdwlO6t6QN2icf7V-ulSW4SyuvDQYMFLATaOQdRbRbiFJXFaGJpDOgOIL_pzMqjh2Re_L9Won61ZOVAkHX8A0rfsvyq3EXwzbT-eWSEr7TmMEnP9e9cgnHQByg07dUJ_WBeOYN5499VDaU6SgAhlDwkiwYa4Zh78FfpMqvPZFWoBVvJH8thigOzU8zjQ7xeDwpbrBFReBdcMj3EybgnUytFrKWKFhGpscxeag66bAK-8duRV2EhDcjqXItJTBrAouU0yZMeno5Fb4HzKToIpxTc6GIzl7wQYoMvifLqDKoq8Y6ej-bDEr2VtkbCGlvaEuWwDylDF7G5AE_bOC-rS26eQW6IQj9-z94qS7xtHVzoXWvOEk33mWeUsQw-JBH9WnfiYowWkLXhZwjXexmR7X56YSMdsuWziEuSNYwuVMpoCjAp9pt1-D5eaYb4_lgkHqjVvpQmL-FsCklfVb3Mv4DgIyN4r8DAAA=)
(it works when you make each aware of the store:
[playground link](https://svelte-5-preview.vercel.app/#H4sIAAAAAAAACo1TwY6bMBD9lZEbaUGLyDZHBJF66CGnfkCcA7GHjbvGRvaQ3Qrx75UxBNrtoQfAHt57PM88BtYojZ4V54GZukVWsG9dxzJGv7qw8XfUhCxj3vZOhErphVMdHbnhpNrOOoKh1vpE2PoMlBFhNULjbAtP-d6TdeifAjpcwhpPcFdeXTVOHKhgJ9GpO8pktwjljdKELlGELVRHCM98ZqUpN0Fqh02DgpIkDYggbDXm2r4mW_mALverZ_OjJ68kgm3gC9biVpRXd-Rm2JLOL5ec8IPGAj7Vhe0NoRuDVqTGOzcn84fwrDut4f9OBrWfCjC9TkduAIYJsbiZNquFYR_U_-GlvPZE1oA1QivxVg2xS_N4kkM6Hk9GOGzREIQSHMp95Ex8lrHWStUolKwg1-OYPQISR7pm5Kff5mPJBLw7RfVVYwbzeGFJRQxVjEZIhsGPiRPDsXQKqodEcg4DH5Qs4Gu2pKeAptYeMwi9KYCz99rfQCp_Q89ZBnOXCngZswf9sKGHY23Z-g3ohiDt6190bi5pDPBstOmNIGXNo59KpjAEyMN93neyJkwW6JrtGclpGwrrvtfiluyWWMwQTqAaiOVcSagqCJ_abWPw_DzLjem8cEi9M6v8ZCy8hTH-Op-nexl_A6_r6U0KBAAA))
### Logs
_No response_
### System Info
```shell
Svelte version: 5.0.0-next.264
```
### Severity
annoyance | bug | low | Critical |
2,581,319,957 | langchain | Map Reduce Chain bug: "TypeError: unsupported operand type(s) for +=: 'NoneType' and 'NoneType'" | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import (
StuffDocumentsChain, LLMChain, ReduceDocumentsChain, MapReduceChain
)
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.text_splitter import CharacterTextSplitter
def baseChain(llm, chat_prompt, output_name):
base_chain = LLMChain(llm=llm['instance'], prompt=chat_prompt, output_key=output_name, verbose=True)
return base_chain
def stuffDocChain(reduce_llm_chain):
combine_documents_chain = StuffDocumentsChain(
llm_chain=reduce_llm_chain,
document_variable_name="input_text",
)
return combine_documents_chain
# Combines and iteratively reduces the mapped documents
def combine_and_reduce(combine_documents_chain, map_max_tokens):
reduce_documents_chain = ReduceDocumentsChain(
# This is final chain that is called.
combine_documents_chain=combine_documents_chain,
# If documents exceed context for `combine_documents_chain`
collapse_documents_chain=combine_documents_chain,
# The maximum number of tokens to group documents into
token_max=map_max_tokens)
return reduce_documents_chain
# Combining documents by mapping a chain over them, then combining results with reduce chain
def map_reduce_step(map_llm_chain, reduce_documents_chain):
combine_documents = MapReduceDocumentsChain(
# Map chain
llm_chain=map_llm_chain,
# Reduce chain
reduce_documents_chain=reduce_documents_chain,
# The variable name in the llm_chain to put the documents in
document_variable_name="input_text")
return combine_documents
def mapReduceChain(combine_documents, chunk_size, chunk_overlap):
map_reduce = MapReduceChain(
combine_documents_chain=combine_documents,
text_splitter=CharacterTextSplitter(separator="", chunk_size=chunk_size, chunk_overlap=chunk_overlap),
        verbose=True)  # TODO add separator protos
return map_reduce
def run_map_chain(llm, map_prompt, combine_prompt, chunk_size, chunk_overlap, inputs, output_name, map_max_tokens, thread_logs):
# Processing to required prompt templates
chunk_prompt_template, combine_prompt_template = map_process_prompt(map_prompt, combine_prompt)
# LLM to use in map and reduce stages
map_llm_chain = baseChain(llm, chunk_prompt_template, output_name)
reduce_llm_chain = baseChain(llm, combine_prompt_template, output_name)
# Takes a list of documents and combines them into a single string
combine_documents_chain = stuffDocChain(reduce_llm_chain)
    # Combines and iteratively reduces the mapped documents
reduce_documents_chain = combine_and_reduce(combine_documents_chain, map_max_tokens)
# Combining documents by mapping a chain over them, then combining results with reduce chain
combine_documents = map_reduce_step(map_llm_chain, reduce_documents_chain)
map_chain = mapReduceChain(combine_documents, chunk_size, chunk_overlap)
# Calculate input tokens
total_input = str(chunk_prompt_template) + str(combine_prompt_template) + str(inputs)
total_tokens = num_tokens_from_string(total_input, "cl100k_base")
logger.info(f"Total input tokens are: {total_tokens}")
with get_openai_callback() as cb:
start_time = time.time()
output = map_chain.__call__(inputs)
time_taken = time.time() - start_time
logger.info(f"Time take for LLM Request to respond :{time_taken} seconds")
logger.info(f"Total Tokens: {cb.total_tokens} key: {llm['key_index']}")
logger.info(f"Prompt Tokens: {cb.prompt_tokens} key: {llm['key_index']}")
logger.info(f"Completion Tokens: {cb.completion_tokens} key: {llm['key_index']}")
if isinstance(output["output_text"], dict):
output_str = json.dumps(output["output_text"])
else:
output_str = output["output_text"]
formatted_input = total_input
model_cost_per_token = get_model_cost(llm)
return output
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last): File "/var/task/handler/langchain_map_reduce_handler.py", line 151, in run output = run_map_chain(llm, map_prompt, combine_prompt, chunk_size, chunk_overlap, inputs, output_name, File "/var/task/handler/langchain_map_reduce_handler.py", line 83, in run_map_chain output = map_chain.__call__(inputs) File "/var/task/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/var/task/langchain/chains/base.py", line 378, in __call__ return self.invoke( File "/var/task/langchain/chains/base.py", line 163, in invoke raise e File "/var/task/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/var/task/langchain/chains/mapreduce.py", line 105, in _call outputs = self.combine_documents_chain.run( File "/var/task/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/var/task/langchain/chains/base.py", line 595, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/var/task/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/var/task/langchain/chains/base.py", line 378, in __call__ return self.invoke( File "/var/task/langchain/chains/base.py", line 163, in invoke raise e File "/var/task/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/var/task/langchain/chains/combine_documents/base.py", line 137, in _call output, extra_return_dict = self.combine_docs( File "/var/task/langchain/chains/combine_documents/map_reduce.py", line 226, in combine_docs map_results = self.llm_chain.apply( File "/var/task/langchain/chains/llm.py", line 250, in apply raise e File "/var/task/langchain/chains/llm.py", line 247, in apply response = self.generate(input_list, run_manager=run_manager) File "/var/task/langchain/chains/llm.py", line 138, in generate return 
self.llm.generate_prompt( File "/var/task/langchain_core/language_models/chat_models.py", line 560, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "/var/task/langchain_core/language_models/chat_models.py", line 426, in generate llm_output = self._combine_llm_outputs([res.llm_output for res in results]) File "/var/task/langchain_openai/chat_models/base.py", line 504, in _combine_llm_outputs overall_token_usage[k] += vTypeError: unsupported operand type(s) for +=: 'NoneType' and 'NoneType' | [ERROR] 2024-10-10T15:56:01.430Z 6a9966a6-e97c-4860-bac6-8be0464b1bb2 unsupported operand type(s) for +=: 'NoneType' and 'NoneType' Traceback (most recent call last): File "/var/task/handler/langchain_map_reduce_handler.py", line 151, in run output = run_map_chain(llm, map_prompt, combine_prompt, chunk_size, chunk_overlap, inputs, output_name, File "/var/task/handler/langchain_map_reduce_handler.py", line 83, in run_map_chain output = map_chain.__call__(inputs) File "/var/task/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/var/task/langchain/chains/base.py", line 378, in __call__ return self.invoke( File "/var/task/langchain/chains/base.py", line 163, in invoke raise e File "/var/task/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/var/task/langchain/chains/mapreduce.py", line 105, in _call outputs = self.combine_documents_chain.run( File "/var/task/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/var/task/langchain/chains/base.py", line 595, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/var/task/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/var/task/langchain/chains/base.py", line 378, in __call__ return self.invoke( File 
"/var/task/langchain/chains/base.py", line 163, in invoke raise e File "/var/task/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/var/task/langchain/chains/combine_documents/base.py", line 137, in _call output, extra_return_dict = self.combine_docs( File "/var/task/langchain/chains/combine_documents/map_reduce.py", line 226, in combine_docs map_results = self.llm_chain.apply( File "/var/task/langchain/chains/llm.py", line 250, in apply raise e File "/var/task/langchain/chains/llm.py", line 247, in apply response = self.generate(input_list, run_manager=run_manager) File "/var/task/langchain/chains/llm.py", line 138, in generate return self.llm.generate_prompt( File "/var/task/langchain_core/language_models/chat_models.py", line 560, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "/var/task/langchain_core/language_models/chat_models.py", line 426, in generate llm_output = self._combine_llm_outputs([res.llm_output for res in results]) File "/var/task/langchain_openai/chat_models/base.py", line 504, in _combine_llm_outputs overall_token_usage[k] += v TypeError: unsupported operand type(s) for +=: 'NoneType' and 'NoneType'
```
### Description
The Map Reduce chain breaks internally during token calculation, intermittently.
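The failing line in the traceback is a dict merge over per-response token-usage entries, where some values are `None`. A minimal stdlib sketch of a defensive merge that avoids the `TypeError` (function name hypothetical, not LangChain's actual code):

```python
def combine_token_usage(outputs):
    """Merge per-response token-usage dicts, skipping None entries.

    The unguarded version (overall[k] += v) raises
    TypeError: unsupported operand type(s) for +=: 'NoneType' and 'NoneType'
    when a provider response carries no usage data.
    """
    overall = {}
    for usage in outputs:
        if not usage:  # response whose llm_output / usage dict is None
            continue
        for key, value in usage.items():
            if value is None:
                continue
            overall[key] = overall.get(key, 0) + value
    return overall

print(combine_token_usage(
    [{"total_tokens": 10}, {"total_tokens": None}, None, {"total_tokens": 5}]
))  # {'total_tokens': 15}
```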
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version: 3.9.19 (main, Jul 21 2024, 14:15:50)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.17
> langchain_aws: 0.1.4
> langchain_openai: 0.1.4
> langchain_text_splitters: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ๐ค:bug,investigate | low | Critical |
2,581,344,787 | pytorch | Using torch from multiple processes is slow | ### 🐛 Describe the bug
What is the correct way to use torch from multiple processes? I would expect the code in the spawned process below to run at the same speed throughout, but it is much slower while the main process is also doing torch work. This is a toy example, but my real code has a similar problem.
```python
# type: ignore
from contextlib import contextmanager
from time import perf_counter
from typing import Iterator
import torch
import torch.multiprocessing as mp
mp.set_start_method("spawn", force=True)
@contextmanager
def timer(name: str) -> Iterator[None]:
start = perf_counter()
yield
end = perf_counter()
print(f"{name}: {end - start:.5f}s")
def do_stuff():
a = torch.empty((1_00_000, 2_000))
b = torch.empty((1_00_000, 2_000))
a.copy_(b[torch.randperm(b.size(0))])
def proc():
for _ in range(20):
with timer("p"):
# NOTE this is slow while we are do_stuff'ing in the main process
do_stuff()
if __name__ == "__main__":
torch.set_num_threads(1)
p = mp.Process(target=proc)
p.start()
torch.set_num_threads(5)
for _ in range(30):
with timer("m"):
do_stuff()
p.join()
```
My machine has 30 cores, so I'd expect everything to work independently (as I'm only using 6 threads in total), but what I see is that when `do_stuff` is called from `proc` while the main process is still in the `m` loop, it takes about 1.6s; once the `m` loop finishes, it speeds up to about 0.8s.
```
m: 0.48198s
p: 1.62041s
m: 0.52251s
m: 0.54056s
m: 0.61196s
p: 1.83045s
m: 0.55137s
m: 0.53094s
m: 0.57954s
m: 0.50424s
p: 1.73075s
m: 0.56716s
m: 0.51699s
m: 0.50080s
p: 1.68981s
m: 0.53625s
m: 0.49879s
m: 0.49690s
p: 1.68669s
m: 0.55190s
m: 0.54055s
m: 0.52681s
p: 1.52235s
################ m loop finished
p: 0.83526s
p: 0.81527s
p: 0.81331s
p: 0.82571s
p: 0.81018s
p: 0.77474s
p: 0.75070s
p: 0.83396s
p: 0.89753s
p: 0.78945s
p: 0.79359s
p: 0.75883s
```
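One commonly suggested mitigation for this kind of cross-process CPU contention (a sketch, assuming the slowdown comes from the two processes' thread pools competing for the same cores, which is not confirmed here) is to pin each process to a disjoint CPU set. The helper below only computes the disjoint sets; on Linux, each process would then apply its set with `os.sched_setaffinity(0, set(cpu_ids))` before touching torch:

```python
import os

def split_cpus(n_workers, threads_per_worker, total=None):
    # Partition CPU ids into disjoint, contiguous sets, one per process,
    # so the thread pools of different processes never share cores.
    total = os.cpu_count() if total is None else total
    sets = []
    for i in range(n_workers):
        start = i * threads_per_worker
        if start + threads_per_worker > total:
            break  # not enough cores left for another full worker
        sets.append(list(range(start, start + threads_per_worker)))
    return sets

print(split_cpus(2, 5, total=30))  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
```

Whether pinning actually restores the ~0.8s timing here is untested; it only rules out core migration as the cause.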
### Versions
```
>>> print(torch.__config__.show())
PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 12.1
- NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
- CuDNN 8.9.2
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
```
cc @VitalyFedyunin | module: multiprocessing,triaged | low | Critical |
2,581,438,308 | deno | Deno 2 compile doesn't work on sveltekit (error: Module not found runtime/control.js) | Version: deno 2.0.0 (stable, release, x86_64-unknown-linux-gnu) (archlinux)
v8 12.9.202.13-rusty
typescript 5.6.2
Running `deno compile` on a SvelteKit app returns a `Module not found runtime/control.js` error.
## Steps to reproduce
1. Run `deno -A npm:create-svelte@latest my-app`
1.1. Click on "SvelteKit demo app"
1.2. Choose "Yes, using TypeScript syntax"
1.3. Select all 5 of the options presented in the next step using the space bar (add ESLint, Prettier, Vitest, etc.)
2. Run `cd my-app`
3. Run `deno install`
4. Run `deno run dev` (shows the expected sveltekit demo app at localhost:5173)
5. Run `deno install npm:svelte-adapter-deno`
6. Update `svelte.config.js` file to use Deno adapter:
```js
// svelte.config.js
import adapter from 'svelte-adapter-deno';
export default {
kit: {
adapter: adapter()
}
};
```
7. Run `deno run build` to produce the build folder, then cd into it. (Interestingly, the last time I ran this it produced a `mod.ts` file, but this time it didn't; I am not exactly sure what has changed since then, which is extremely weird.)
8. Still, running `deno run index.js` and granting the requested permissions runs the SvelteKit demo app.
9. Run `deno compile build/server/index.js`
Output:
```sh
error: Module not found "file:///home/test/Projects/deno/my-app/build/runtime/control.js".
at file:///home/test/Projects/deno/my-app/build/server/index.js:1345:23
```
| bug,compile | medium | Critical |
2,581,446,532 | deno | Deno ignores `ResourceLimits` for `node:worker_threads` | Version: Deno v2.0.0 (tested as far back as v1.41.3)
Below is a simplified version of this test:
* https://github.com/denoland/node_test/blob/main/test/parallel/test-worker-resource-limits.js
```js
import worker_threads from 'node:worker_threads';
const worker = new worker_threads.Worker(new URL("./worker.mjs", import.meta.url), {
resourceLimits: {
maxYoungGenerationSizeMb: 4,
maxOldGenerationSizeMb: 16,
codeRangeSizeMb: 16,
stackSizeMb: 1,
},
});
```
And here's `./worker.mjs`:
```js
import { resourceLimits } from 'node:worker_threads';
console.log(resourceLimits);
```
In **Node.js** it logs this:
```
resourceLimits: {
maxYoungGenerationSizeMb: 4,
maxOldGenerationSizeMb: 16,
codeRangeSizeMb: 16,
stackSizeMb: 1
}
```
And a simple large string allocation test shows that the limit is respected - i.e. it crashes with `FATAL ERROR: Reached heap limit Allocation failed` if the limit is hit.
In **Deno** it logs the default values:
```
resourceLimits: {
maxYoungGenerationSizeMb: 48,
maxOldGenerationSizeMb: 2048,
codeRangeSizeMb: 0,
stackSizeMb: 4
}
```
And note that the `16` MB value is not respected. That is, the `resourceLimits` values in the log above seem "accurate" to the *actual* limits the thread is held to, but obviously these are not the limits we asked for when creating the worker.
## Related Context:
* https://docs.deno.com/api/node/worker_threads/
* https://docs.deno.com/api/node/worker_threads/~/ResourceLimits | bug,node compat | low | Critical |
2,581,459,246 | pytorch | `nanmean`: `out=` function does not work with dynamo. | ### 🐛 Describe the bug
The `out=` variant of the `nanmean` operation does not work with dynamo (eager backend).
```python
>>> inp = torch.rand(10)
>>> out = torch.tensor(.0)
>>> torch.nanmean(inp, out=out)
tensor(...)
>>> torch.compile(torch.nanmean, backend="eager")(inp, out=out)
Traceback (most recent call last):
File "examples/nanmean.py", line 7, in <module>
torch.compile(torch.nanmean, backend="eager")(inp, out=out)
File "torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1350, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 1141, in __call__
result = self._inner_convert(
File "torch/_dynamo/convert_frame.py", line 543, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 963, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 694, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 727, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 656, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2794, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 996, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 908, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 614, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1693, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 843, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 169, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "torch/_dynamo/variables/builder.py", line 2057, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "torch/_dynamo/variables/builder.py", line 2144, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "torch/_dynamo/utils.py", line 2134, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "torch/_dynamo/utils.py", line 2069, in get_fake_value
ret_val = wrap_fake_exception(
File "torch/_dynamo/utils.py", line 1626, in wrap_fake_exception
return fn()
File "torch/_dynamo/utils.py", line 2070, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "torch/_dynamo/utils.py", line 2202, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "torch/_dynamo/utils.py", line 2184, in run_node
return node.target(*args, **kwargs)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method nanmean of type object at 0x7f4e342e90a0>(*(FakeTensor(..., size=(10,)),), **{'out': FakeTensor(..., size=())}):
Cannot access data pointer of Tensor (e.g. FakeTensor, FunctionalTensor). If you're using torch.compile/export/fx, it is likely that we are erroneously tracing into a custom kernel. To fix this, please wrap the custom kernel into an opaque custom op. Please see the following for details: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
from user code:
File "torch/_dynamo/external_utils.py", line 31, in inner
return fn(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.5.0a0+git7128504
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,enhancement,oncall: pt2,module: inductor | low | Critical |
2,581,460,189 | vscode | [FEATURE] Softwrap on literal `\n` | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
As a developer, I want to be able to see multi-line literals on multiple lines.
So this declaration

could be shown as (if soft-wrapping is enabled)

Ideally, VS Code should provide an extension API that lets language-specific extensions supply their own language-specific soft-wrap candidates.
Since that is not the case today, the change would have to be made in Monaco's line-wrapping `createLineBreaks` method of `MonospaceLineBreaksComputerFactory`.
2,581,462,624 | angular | Components used with attribute selector are not recognized by the language service | ### Which @angular/* package(s) are the source of the bug?
language-service
### Is this a regression?
No
### Description
I've noticed for some time now that, within VS Code, the language service is hit or miss at recognizing the bindings of components used with an attribute selector.
- However, some use-case scenarios with the same setup are recognized.
- **In all cases the application builds without error.**
Here is my latest example
```html
<li
  lib-tag
  [ngClass]="{ 'border-watermelon-6': match?.skills?.includes(tag?.id) }"
  [iconLeft]="match?.skills?.includes(tag?.id) ? 'check-circle' : ''" <!-- Can't bind to 'iconLeft' since it isn't a known property of 'li' -->
  [name]="tag?.name" <!-- Can't bind to 'name' since it isn't a known property of 'li' -->
></li>
```
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
Can't bind to 'iconLeft' since it isn't a known property of 'li'
### Please provide the environment you discovered this bug in (run `ng version`)
_No response_
### Anything else?
Node v22.9.0
Angular * 18.2.8 | area: language-service | low | Critical |
2,581,467,125 | godot | Changing bone euler rotation sets non-normalized quaternion | ### Tested versions
Reproducible in:
* v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Debian GNU/Linux trixie/sid trixie - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7600 (RADV NAVI33) - AMD Ryzen 5 7600 6-Core Processor (12 Threads)
### Issue description
If one has a model rigged with a `Skeleton3D`, and changes a bone's rotation by typing values in the temporary Euler inputs, the resultant quaternion will often be non-normalized.
While this appears to work just fine on the skeleton's pose, it will cause an error (and reset of bone rotation) if animation keyframes are played using such a non-normalized quaternion:
> AnimationPlayer: 'AnimationPlayer', Animation: '', 3D Rotation Track: 'characterMedium/Root/Skeleton3D:RightUpLeg' contains unnormalized Quaternion key.
> The end quaternion (-0.968029, 0.072789, 0.005684, 0.237784) must be normalized.
This also happens if you manually alter the values of the quaternion's inputs. The quaternion ends up non-normalized and fails during animations, but works fine for statically posing the skeleton.
As a possible workaround, mouse-dragging the Euler values (click and hold the x|y|z label, then drag the mouse left or right to change the value gradually) instead of typing them manually seems to produce a normalized quaternion.
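As a quick sanity check, the quaternion from the error message above really is off unit length; the math (sketched here in plain Python rather than GDScript, though `Quaternion.normalized()` in Godot does the same thing) shows why the animation track validation rejects it:

```python
import math

# Quaternion reported in the AnimationPlayer error message, as (x, y, z, w).
q = (-0.968029, 0.072789, 0.005684, 0.237784)

length = math.sqrt(sum(c * c for c in q))
print(length)  # noticeably off from 1.0, hence the "unnormalized" rejection

# Re-normalizing restores unit length, which the rotation track would accept.
qn = tuple(c / length for c in q)
print(math.sqrt(sum(c * c for c in qn)))
```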
Though TBH I'm not sure if the issue here should be "editing bone rotation can produce non-normalized quaternions" or "bone animation tracks reject non-normalized quaternions, even though they work for static poses". :confused:
### Steps to reproduce
The attached MRP includes a rigged model and an `AnimationPlayer`. The included animation sets the rotation for both legs, but only the left leg works.
You can experiment with ways of changing the rotation value of the `RightUpLeg` bone, setting its keyframe in the animation, and then seeing if the animation plays.
### Minimal reproduction project (MRP)
[animate-quaternion.zip](https://github.com/user-attachments/files/17343575/animate-quaternion.zip)
| bug,topic:editor,topic:animation | low | Critical |
2,581,491,008 | deno | Feat: Add CLI flag to skip having to add import attributes | Trying out Deno in Node projects that import `.json` files currently requires the developer to add import attributes.
```ts
// Node
import openings from "../resources/openings.json";
// Deno
import openings from "../resources/openings.json" with { type: "json" };
```
We should have a flag that allows you to import `.json` files directly without import attributes. Or maybe only require import attributes for remote specifiers. | suggestion,needs discussion | low | Minor |
2,581,498,671 | flutter | [camera] White frames at the start of the recording on iPhone 12 | ### Steps to reproduce
Start a recording. You will get a few white frames at the start of the video - see the videos bellow.
To be clear, this is specific to iPhone 12. It works fine on every other iPhone and on Android.
We are using `camera: 0.11.0+2`
### Expected results
I expect the white frames to not be there
### Actual results
The first frames are white
### Code sample
<details open><summary>Code sample</summary>
```dart
late CameraController _controller;
late Future<void> _initializeControllerFuture;
  @override
  void initState() {
    super.initState();
    _controller = CameraController(
      widget.args.cameras.firstWhere((camera) => camera.lensDirection == CameraLensDirection.back),
      ResolutionPreset.veryHigh,
    );
    // Set the flash mode only once the controller has finished initializing.
    _initializeControllerFuture = _controller.initialize().then((_) {
      _controller.setFlashMode(FlashMode.off);
    });
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }
@override
  Widget build(BuildContext context) {
return FutureBuilder(
future: _initializeControllerFuture,
builder: (context, snapshot) {
final initialized = snapshot.connectionState == ConnectionState.done && _controller.value.isInitialized;
return initialized
? CameraPreview(_controller)
: const Center(child: CircularProgressIndicator());
},
);
}
```
We get the available cameras in the parent using
```dart
Future<List<CameraDescription>> availableCameras() async {
return CameraPlatform.instance.availableCameras();
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/fea4f644-ab67-4025-8693-d5218f46609e
https://github.com/user-attachments/assets/f2232290-3ddb-4f89-aea8-63ce960ac085
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale en-BR)
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[โ] Xcode - develop for iOS and macOS (Xcode 16.0)
[โ] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google
Chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[โ] Android Studio (version 2024.2)
[โ] IntelliJ IDEA Community Edition (version 2024.2.3)
[โ] Connected device (2 available)
[โ] Network resources
```
</details>
| e: device-specific,platform-ios,p: camera,package,P2,team-ios,triaged-ios | low | Major |
2,581,529,788 | PowerToys | File Locksmith not active | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
File Locksmith
### Steps to reproduce
With PowerToys running as an administrator and File Locksmith enabled, the entry never appears in File Explorer's context menu, whether or not I hold the Shift key.
I have tried with several files and folders, and have also restarted the PC.
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
Windows 10 22H2 | Issue-Bug,Needs-Triage | low | Minor |
2,581,548,651 | neovim | Lua: writefile() "fast", Blob argument should accept Lua string | ### Problem
1. `writefile()` is not marked as `fast`
2. writefile() does not appear to accept Lua strings for the first arg (which accepts Blob type). AFAIK, we marshal Lua strings to Blobs in the API already, so I would expect this here too.
```
nvim --clean
:lua vim.fn.writefile('foo\nbar', 'foo.txt', 'b')
5108: Error executing lua Vim:E475: Invalid argument: writefile() first argument must be a List or a Blob
stack traceback:
[C]: in function 'writefile'
[string ":lua"]:1: in main chunk
```
### Expected behavior
1. do the above
2. create a dummy help tag at `:help vim.fs.write()` which points users to writefile().
- update writefile() docs to show a quickstart example near the top. | enhancement,api,vimscript,complexity:low,lua | low | Critical |
2,581,554,356 | tauri | [bug] Disable close, move, resize, maximize and minimize functions for parent of file dialog on Linux. | ### Describe the bug
The close, move, resize, maximize and minimize functions of the dialog's parent window are not disabled on Linux.
dialog code:
```rust
#[tauri::command]
async fn download_files(
urls: Vec<String>,
default_directory: bool,
app_handle: tauri::AppHandle,
window: tauri::WebviewWindow,
) -> Result<String, String> {
...
let dialog_result = match app_handle
.dialog()
.file()
.set_parent(&window)
.blocking_pick_folder()
{
Some(val) => val,
None => return Err("Dialog was closed".to_string()),
};
dialog_result.into_path().unwrap()
...
}
```
Cargo.toml
```toml
...
[build-dependencies]
tauri-build = { version = "2", features = [] }
[dependencies]
once_cell = "1.19.0"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tauri = { version = "2", features = [ "devtools", "webview-data-url"] }
reqwest = { version = "0.11", features = ["blocking", "json"] }
tauri-plugin-cli = "2"
tauri-plugin-dialog = "2"
tauri-plugin-shell = "2.0.1"
tauri-plugin-window-state = { version = "2.0.0" }
tauri-plugin-fs = { version = "2.0.0", features = ["watch"] }
tauri-plugin-single-instance = { version = "2.0.0" }
tauri-plugin-process = "2.0.0"
dunce = "1.0.4"
base64 = "0.21.7"
anyhow = "1.0.86"
open = "5.3.0"
content_disposition = "0.4.0"
urlencoding = "2.1.3"
[target.'cfg(not(windows))'.dependencies]
openssl-sys = { version = "=0.9.72", features = ["vendored"] }
openssl = { version = "=0.10.38" }
...
```
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
Dev machine tauri info
[โ] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
โ WebView2: 129.0.2792.79
โ MSVC: Visual Studio Community 2022
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 18.19.0
- pnpm: 9.5.0
- npm: 10.2.3
[-] Packages
- tauri ๐ฆ: 2.0.2
- tauri-build ๐ฆ: 2.0.1
- wry ๐ฆ: 0.44.1
- tao ๐ฆ: 0.30.3
- tauri-cli ๐ฆ: 2.0.1
- @tauri-apps/api ๎: not installed!
- @tauri-apps/cli ๎: 2.0.1 (outdated, latest: 2.0.2)
[-] Plugins
- tauri-plugin-fs ๐ฆ: 2.0.1
- @tauri-apps/plugin-fs ๎: not installed!
- tauri-plugin-cli ๐ฆ: 2.0.1
- @tauri-apps/plugin-cli ๎: not installed!
- tauri-plugin-single-instance ๐ฆ: 2.0.1
- @tauri-apps/plugin-single-instance ๎: not installed!
- tauri-plugin-process ๐ฆ: 2.0.1
- @tauri-apps/plugin-process ๎: not installed!
- tauri-plugin-window-state ๐ฆ: 2.0.1
- @tauri-apps/plugin-window-state ๎: not installed!
- tauri-plugin-dialog ๐ฆ: 2.0.1
- @tauri-apps/plugin-dialog ๎: not installed!
- tauri-plugin-shell ๐ฆ: 2.0.1
- @tauri-apps/plugin-shell ๎: not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../web/dist
- devUrl: http://localhost:3000/
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,581,665,143 | vscode | font ligatures that span several textmate scopes assume the font settings of last character (last component of the composite symbol) | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.94.2
- OS Version: Windows 10
Steps to Reproduce:
1. Pick a font family that supports ligatures (like "Fira Code") and turn on font ligatures in settings.
2. See that characters that belong to different syntax scopes but happen to form a ligature are not colored as expected โ examples below.
### Proposed solution: disallow ligatures to span across textmate scopes.
Python syntax: `]#` is a ligature in Fira Code, but `#` starts a comment. Confusingly, `]` is also colored as comment.

HTML syntax: `/>` is a ligature in Fira Code, but `/` is invalid to self-close a tag. Confusingly, `/` is not colored red to help see the error.

| font-rendering,under-discussion | low | Critical |
2,581,711,545 | ui | [feat]: Json field. A dynamic (sub) form from a json schema | ### Feature description
dynamic form from json schema
### Affected component/components
form
### Additional Context
Given a JSON schema, create a form and save the output as JSON.
Perhaps a simple `JsonField` component inside a form, where the field is a JSON object (or a stringified version of one) that needs to conform to a JSON schema. Example: a configuration field.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,581,715,641 | PowerToys | Advanced paste option - โmenu follows mouse cursorโ | ### Description of the new feature / enhancement
I love the tool and have the keystroke macro assigned to a mouse gesture (e.g. via [Strokes Plus](https://www.strokesplus.net/)). It makes opening the menu and selecting the desired option much quicker. It would be fantastic if there were an option, or perhaps a second keystroke assignment, that made the **Advanced Paste** menu open at the mouse cursor.
### Scenario when this would be used?
Currently, when pulling the menu up via a mouse gesture, I might be in a completely different area of the screen, and moving my mouse back to the middle of the screen to find the menu and select the option I want breaks focus. Frankly, even if you don't use mouse gestures or assign it to a mouse button, it's nice to have the menu pop up at the cursor in the application you're actively using.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,581,716,545 | langchain | Chroma similarity_search and similarity_search_with_score do not return any results | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi,
I am new to langchain and chroma. I am trying to insert data into chromadb and search it. There is no issue with data. I tried the same search in creating a knowledge base in bedrock. I don't get any error. The database created (data_level0.bin is about 6.3 MB) but while doing a search, it returns empty results. Following is the code to insert the data.
```python
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
import os
CHROMA_PATH = "data/chroma_wp"
os.environ["OPENAI_API_KEY"] = "sk-"
loader = TextLoader("books/war_and_peace.txt", encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200, separator="\n")
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorStore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=CHROMA_PATH)
```
Following is the code I am using to search.
```python
import chromadb
import os
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
os.environ["OPENAI_API_KEY"] = "sk-"
CHROMA_PATH = "data/chroma_wp"
embeddings = OpenAIEmbeddings()
vectorStore = Chroma(persist_directory=CHROMA_PATH, embedding_function=embeddings)
#vectorStore.delete()
print(vectorStore)
results = vectorStore.similarity_search("Who is Andrew?", k=3)
#vectorStore.similarity_search_with_score("Who is Andrew?", k=3)
print(results)
```
I get empty results.
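One stdlib-only sanity check (this doesn't touch chromadb itself; the path is the `CHROMA_PATH` from the scripts above) is to confirm the persist directory is actually populated after the insert run — a multi-megabyte `data_level0.bin` suggests embeddings were written:

```python
import os

def index_files(persist_dir: str) -> dict:
    """Map each file under persist_dir to its size in bytes."""
    sizes = {}
    for root, _dirs, files in os.walk(persist_dir):
        for name in files:
            path = os.path.join(root, name)
            sizes[os.path.relpath(path, persist_dir)] = os.path.getsize(path)
    return sizes

print(index_files("data/chroma_wp"))
```

If this prints an empty dict from the search script's working directory, the search run is opening a different (empty) database than the one the insert run created, since both use a relative path.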
Following are the packages I am using:
langchain 0.3.1
langchain-chroma 0.1.4
langchain-community 0.3.1
langchain-core 0.3.6
langchain-experimental 0.3.2
langchain-openai 0.2.1
langchain-text-splitters 0.3.0
chroma-hnswlib 0.7.6
chromadb 0.5.12
### Error Message and Stack Trace (if applicable)
No exception. Just empty results.
### Description
Same details as above: inserting the documents succeeds and the Chroma database is created, but `similarity_search` and `similarity_search_with_score` both return empty results, with no exception raised.
### System Info
System Information
------------------
OS: Windows
OS Version: 10.0.22631
Python Version: 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
langchain_core: 0.3.6
langchain: 0.3.1
langchain_community: 0.3.1
langsmith: 0.1.129
langchain_chroma: 0.1.4
langchain_experimental: 0.3.2
langchain_openai: 0.2.1
langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
langgraph
langserve
Other Dependencies
------------------
aiohttp: 3.10.6
async-timeout: 4.0.3
chromadb: 0.5.12
dataclasses-json: 0.6.7
fastapi: 0.115.0
httpx: 0.27.2
jsonpatch: 1.33
numpy: 1.26.4
openai: 1.50.1
orjson: 3.10.7
packaging: 24.1
pydantic: 2.9.2
pydantic-settings: 2.5.2
PyYAML: 6.0.2
requests: 2.32.3
SQLAlchemy: 2.0.35
tenacity: 8.5.0
tiktoken: 0.7.0
typing-extensions: 4.12.2
| โฑญ: vector store,investigate | low | Critical |
2,581,733,077 | storybook | [Bug]: missing props in controls tab when using pnpm | ### Describe the bug
We are trying to create a story for a custom-themed MUI `Button`, but are unable to get any of MUI's `ButtonProps` to show up in the story.
### Reproduction link
https://codesandbox.io/p/devbox/syq2z7
### Reproduction steps
1. create a Next.js project
2. install MUI
3. install Storybook
4. follow the MUI setup guide -> https://storybook.js.org/recipes/@mui/material
5. create a story for a MUI component
6. enable autodocs
7. start Storybook
8. -> the MUI component story does not show the MUI component's properties
### System
Storybook Environment Info:
System:
OS: macOS 15.0.1
CPU: (12) arm64 Apple M2 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.15.0 - ~/Library/pnpm/node
npm: 10.7.0 - ~/Library/pnpm/npm
pnpm: 9.11.0 - ~/Library/pnpm/pnpm <----- active
Browsers:
Chrome: 129.0.6668.91
Edge: 129.0.2792.89
Safari: 18.0.1
npmPackages:
@storybook/addon-essentials: ^8.3.5 => 8.3.5
@storybook/addon-interactions: ^8.3.5 => 8.3.5
@storybook/addon-links: ^8.3.5 => 8.3.5
@storybook/addon-onboarding: ^8.3.5 => 8.3.5
@storybook/addon-themes: ^8.3.5 => 8.3.5
@storybook/blocks: ^8.3.5 => 8.3.5
@storybook/nextjs: ^8.3.5 => 8.3.5
@storybook/react: ^8.3.5 => 8.3.5
@storybook/test: ^8.3.5 => 8.3.5
eslint-plugin-storybook: ^0.9.0 => 0.9.0
storybook: ^8.3.5 => 8.3.5
### Additional context
_No response_ | bug,argtypes,pnpm,docgen | low | Critical |
2,581,758,431 | PowerToys | Streaming a FancyZones zone over Teams or Discord as Application | ### Description of the new feature / enhancement
Being able to stream a FancyZone over Teams or Discord as an application. This feature would allow users to choose and stream only a specific portion of their monitor, improving readability and focus.
### Scenario when this would be used?
In cases where users have ultrawide screens, streaming the entire screen can result in unreadable content for others in the meeting, especially those with smaller or 16:9 aspect ratio monitors. By allowing the user to stream just a specific FancyZone region, they can ensure better visibility. This is particularly helpful during meetings or presentations when users need to share only a part of their screen without losing the important details due to the smaller resolution on the receiverโs end.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,581,761,229 | deno | Suggestion: UnixSocket option for `deno serve` command | It would be more convenient if the `deno serve` command could listen on a Unix socket, like the `Deno.serve()` method can :)
For example, like this:
- `--socket <file>` or `--path <file>` ... Path to socket file.
(Which name is better?)
```sh
deno serve --socket ./server.sock ./server.ts
``` | suggestion,serve | low | Minor |
2,581,793,593 | deno | Add coverage-guided fuzzing to Deno's toolkit | I really appreciate all the effort that has been put into having all tooling natively supported by Deno. The only thing that I feel is missing at this point is a built-in coverage-guided fuzzing solution. Golang, for example, included one in its standard library a couple of years ago as part of the 1.18.0 release, and it is something my company uses heavily. I would like to be able to write server code with Deno, and the last sticking point is "how do I fuzz test?", so hopefully others also find this a worthwhile addition. | cli,suggestion,testing | low | Minor |
2,581,816,026 | stable-diffusion-webui | [Bug]: OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()` | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`
### Steps to reproduce the problem
I encountered this error when starting Stable Diffusion WebUI with the arguments:
`--api --listen --cors-allow-origins '*' --port=7860 --no-gradio-queue --skip-torch-cuda-test --nowebui`
### What should have happened?
If I restart the Stable Diffusion WebUI, it works fine. However, since it's currently running in a production environment, I can't keep restarting it like that. Can you tell me why?
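The error means something is still listening on port 7860 — often a previous WebUI process that did not shut down cleanly. A minimal way to check before restarting (plain Python, stdlib only):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

print(port_in_use(7860))
```

If this prints `True`, find the holder with `lsof -i :7860` and stop it, or launch with a different `--port` / `GRADIO_SERVER_PORT`.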
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
I am running on an EC2 Linux environment.
### Console logs
```Shell
OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`
```
### Additional information
_No response_ | bug-report | low | Critical |