| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,786,021,953
|
react
|
Bug: react-hooks/exhaustive-deps false positive when using `isPending` from `useActionState`
|
<!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
React version: 19.0.0, eslint-plugin-react-hooks: 5.1.0
## Steps To Reproduce
1. Call `useActionState` and destructure the third element (`isPending`) of its return value
2. Call the dispatch function inside a `useEffect` with an empty dependency array
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
Link to code example:
```tsx
import type React from 'react';
import { useActionState, useEffect } from 'react';

export const Example: React.FC = () => {
  const [_, dispatch, _isPending] = useActionState(() => 0, 0);
  useEffect(() => {
    dispatch();
  }, []); // React Hook useEffect has a missing dependency: 'dispatch'. Either include it or remove the dependency array.

  const [_2, dispatch2] = useActionState(() => 0, 0);
  useEffect(() => {
    dispatch2();
  }, []); // ok

  return null;
};
```
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
## The current behavior
```
React Hook useEffect has a missing dependency: 'dispatch'. Either include it or remove the dependency array.
```
## The expected behavior
No lint errors.
------
Only destructuring patterns with exactly 2 elements are handled:
https://github.com/facebook/react/blob/056073de4c50b65807cd77ae6715c9ea8ee64277/packages/eslint-plugin-react-hooks/src/ExhaustiveDeps.js#L273
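The linked check can be illustrated with a small sketch. The helper below is hypothetical (it is not the actual plugin source); it only mirrors the reported limitation that a destructured dispatch/setter is treated as stable only when the array pattern has exactly two elements:

```typescript
// Hypothetical sketch of the limitation in ExhaustiveDeps: the second
// element of a destructured hook result is only treated as a stable
// value when the array pattern has exactly two elements.
function isTreatedAsStable(patternLength: number, elementIndex: number): boolean {
  // `[state, dispatch]`           -> dispatch (index 1) is stable
  // `[state, dispatch, isPending]` -> dispatch is NOT recognized as stable
  return patternLength === 2 && elementIndex === 1;
}

console.log(isTreatedAsStable(2, 1)); // true: two-element destructure passes
console.log(isTreatedAsStable(3, 1)); // false: three-element destructure is not matched
```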
|
Status: Unconfirmed
|
low
|
Critical
|
2,786,033,280
|
rust
|
Async function calls not optimized out
|
Async function calls that are behind a simple `if false` (or an `if cfg!(...)` condition that isn't active) can still affect the binary, even though non-async function calls are optimized out.
I tried this code:
```rust
#[tokio::main]
async fn main() {
    if false {
        async fn never_panic() {
            panic!("disco");
        }
        never_panic().await;
    }

    if false {
        panic!("antidisestablishmentarianism");
    }

    println!("All done!");
}
```
And ran `cargo rustc --release -- -C opt-level=s` (this behavior also appears on opt-level 3), and checked for the resulting strings in the binary, e.g., `strings target/release/minimal | grep -e disco` or `strings target/release/minimal | grep -e antidis`.
As expected, the string "antidisestablishmentarianism" never appears in the binary, but "disco" always does, even though functionally, they should be equivalent.
If I had to guess why this could happen, it's because `if false { ... }` still internally generates some type for the `impl Future`, which isn't getting optimized out for some reason.
In a more complex app, we found this was also affecting runtime behavior (perhaps because a lot of code and crates were not optimized out, which chained into additional effects, though it was hard to isolate).
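A possible workaround sketch (an assumption on my part, not a fix for the missed optimization): gating the dead code with `#[cfg]` removes the items during compilation, before any `impl Future` type is generated, so the panic string can never reach the binary. `#[cfg(any())]` is used here only as an always-false condition; a real feature flag would take its place.

```rust
// `#[cfg(any())]` is always false, so this item is stripped early in
// compilation and no `impl Future` type is ever generated for it --
// unlike `if false { ... }`, which still type-checks and lowers the body.
#[cfg(any())]
async fn never_panic() {
    panic!("disco");
}

fn done_message() -> &'static str {
    "All done!"
}

fn main() {
    // The whole block is removed by cfg-stripping, so no executor
    // (and no tokio dependency) is needed here.
    #[cfg(any())]
    {
        let _future = never_panic();
    }
    println!("{}", done_message());
}
```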
### Meta
This bug exists on nightly builds going back to at least 2024-09-30, as well as on the latest release.
It happens on both x86-64 and riscv32, at least.
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
|
T-compiler,A-async-await,E-needs-mcve,C-optimization
|
low
|
Critical
|
2,786,049,104
|
deno
|
Deno LSP suggests incorrect imports
|
Version: Deno 2.1.5+5a39f2f
Hey,
I'm not 100% sure whether this is due to Deno's LSP or to JetBrains. I'm aware that the JetBrains Deno plugin is pretty bad, so sorry if this is unrelated to Deno.
When I try to import QueryClientProvider from react-query, I get some odd suggestions for where to import it from, as shown in the screenshot:

Here is my deno.json:
```json
{
  "tasks": {
    "client:start": "deno run -A --node-modules-dir=auto npm:vite",
    "server:start": "deno run -A --node-modules-dir --watch server/main.ts",
    "dev": "deno task client:start & deno task server:start"
  },
  "imports": {
    "@deno/vite-plugin": "npm:@deno/vite-plugin@^1.0.2",
    "@std/assert": "jsr:@std/assert@1",
    "@tanstack/react-query": "npm:@tanstack/react-query@^5.64.1",
    "hono": "npm:hono@^4.6.16",
    "@types/react": "npm:@types/react@^19.0.6",
    "@vitejs/plugin-react": "npm:@vitejs/plugin-react@^4.3.4",
    "autoprefixer": "npm:autoprefixer@^10.4.20",
    "postcss": "npm:postcss@^8.4.49",
    "react": "npm:react@^19.0.0",
    "react-dom": "npm:react-dom@^19.0.0",
    "tailwindcss": "npm:tailwindcss@^3.4.17",
    "vite": "npm:vite@^6.0.7"
  },
  "types": ["types.d.ts", "vite/client"],
  "compilerOptions": {
    "types": ["react", "react-dom", "@types/react"],
    "lib": ["dom", "dom.iterable", "deno.ns"],
    "jsx": "react-jsx",
    "jsxImportSource": "react",
    "jsxImportSourceTypes": "@types/react"
  },
  "fmt": {
    "useTabs": true,
    "lineWidth": 120,
    "indentWidth": 4,
    "semiColons": true,
    "singleQuote": true,
    "include": ["client", "server"],
    "exclude": ["client/.vite"]
  }
}
```
Again, sorry if this is not due to the Deno project itself, or if I have a misconfiguration.
|
bug,lsp
|
low
|
Minor
|
2,786,134,646
|
pytorch
|
outerNode->outputs().size()
|
### 🐛 Describe the bug
RuntimeError: outerNode->outputs().size() == node->inputs().size() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1729647348947/work/torch/csrc/jit/passes/dead_code_elimination.cpp":138, please report a bug to PyTorch.
Traceback (most recent call last):
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 90, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 124, in _capture
return torch.export.export(
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/__init__.py", line 270, in export
return _export(
^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1224, in _strict_export
return _strict_export_lower_to_aten_ir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1252, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 560, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1432, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 1024, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 774, in call_method
return self.call_apply(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 723, in call_apply
return variables.UserFunctionVariable(fn, source=source).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 727, in call_function
unimplemented(msg)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 297, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin models.curope.curope.PyCapsule.rope_2d. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/croco.py", line 260, in forward
x = blk(x,pos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 129, in forward
x = x + self.drop_path(self.attn(y, xpos))
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 102, in forward
q = self.rope(q, xpos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 39, in forward
cuRoPE2D_func.apply( tokens.transpose(1,2), positions, self.base, self.F0 )
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 20, in forward
_kernels.rope_2d( tokens, positions, base, F0 )
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/xujinqing/t_rt/InstantSplat_old/./coarse_init_infer.py", line 77, in <module>
output = inference(pairs, model, args.device, batch_size=batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/inference.py", line 69, in inference
res = loss_of_one_batch(collate_with_cat(pairs[i:i+batch_size]), model, None, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/inference.py", line 47, in loss_of_one_batch
pred1, pred2 = model(view1, view2)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 271, in forward
(shape1, shape2), (feat1, feat2), (pos1, pos2) = self._encode_symmetrized(view1, view2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 197, in _encode_symmetrized
feat1, feat2, pos1, pos2 = self._encode_image_pairs(img1, img2, shape1, shape2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 178, in _encode_image_pairs
out, pos, _ = self._encode_image(img1, true_shape1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 164, in _encode_image
torch.onnx.export(self.enconder_dust, (torch.rand(1,640,1024).cuda(),pos), './uu.onnx', verbose=False, opset_version=18, enable_onnx_checker=False, do_constant_folding=True,dynamo=True)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/__init__.py", line 345, in export
return exporter.export_compat(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_compat.py", line 161, in export_compat
onnx_program = _core.export(
^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 1057, in export
raise _errors.TorchExportError(
torch.onnx._internal.exporter._errors.TorchExportError: Failed to export the model with torch.export. This is step 1/2 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and summit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.Unsupported'>: Graph break due to unsupported builtin models.curope.curope.PyCapsule.rope_2d. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/croco.py", line 260, in forward
x = blk(x,pos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 129, in forward
x = x + self.drop_path(self.attn(y, xpos))
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 102, in forward
q = self.rope(q, xpos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 39, in forward
cuRoPE2D_func.apply( tokens.transpose(1,2), positions, self.base, self.F0 )
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 20, in forward
_kernels.rope_2d( tokens, positions, base, F0 )
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
### Versions
torch2trt 0.5.0 pypi_0 pypi
torchtriton 3.1.0 py311 pytorch
torchvision 0.20.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
|
oncall: jit
|
low
|
Critical
|
2,786,175,023
|
yt-dlp
|
`--no-playlist` doesn't work for YouTube embed links
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
The following command, with a YouTube `watch?v=` URL:
```bash
yt-dlp --no-playlist 'https://www.youtube.com/watch?v=zeJD6dqJ5lo&list=PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8'
```
works as expected, downloading only the video, and showing the message `Downloading just the video zeJD6dqJ5lo because of --no-playlist`. But with an embed URL:
```bash
yt-dlp --no-playlist 'https://www.youtube.com/embed/zeJD6dqJ5lo?list=PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8'
```
the whole playlist is downloaded.
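As a stopgap, the embed URL can be rewritten into the equivalent `watch?v=` URL, where `--no-playlist` is honored as shown above. A minimal helper (the function name is illustrative, not part of yt-dlp):

```python
from urllib.parse import urlparse, parse_qs

def embed_to_watch(url: str) -> str:
    """Rewrite a YouTube /embed/ URL into the equivalent watch URL."""
    parsed = urlparse(url)
    video_id = parsed.path.rsplit("/", 1)[-1]  # last path segment is the video id
    query = parse_qs(parsed.query)
    watch = f"https://www.youtube.com/watch?v={video_id}"
    if "list" in query:  # preserve the playlist parameter if present
        watch += f"&list={query['list'][0]}"
    return watch
```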
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-s', '--no-playlist', 'https://www.youtube.com/embed/zeJD6dqJ5lo?list=PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2025.01.12 from yt-dlp/yt-dlp [dade5e35c] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: a3c032182
[debug] Python 3.9.21 (CPython x86_64 64bit) - Linux-6.12.8-arch1-1-x86_64-with-glibc2.40 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: sqlite3-3.47.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.12 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.12 from yt-dlp/yt-dlp)
[youtube:playlist] Extracting URL: https://www.youtube.com/embed/zeJD6dqJ5lo?list=PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8
[youtube:tab] PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8: Downloading webpage
[youtube:tab] PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8: Redownloading playlist API JSON with unavailable videos
[download] Downloading playlist: Central limit theorem
[youtube:tab] PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (1/3)...
[youtube:tab] PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (2/3)...
[youtube:tab] PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Retrying (3/3)...
[youtube:tab] PLZHQObOWTQDOMxJDswBaLu8xBMKxSTvg8 page 1: Downloading API JSON
WARNING: [youtube:tab] Incomplete data received. Giving up after 3 retries
[youtube:tab] Playlist Central limit theorem: Downloading 4 items of 4
[download] Downloading item 1 of 4
[youtube] Extracting URL: https://www.youtube.com/watch?v=zeJD6dqJ5lo
[youtube] zeJD6dqJ5lo: Downloading webpage
[youtube] zeJD6dqJ5lo: Downloading ios player API JSON
[youtube] zeJD6dqJ5lo: Downloading tv player API JSON
[debug] [youtube] zeJD6dqJ5lo: ios client https formats require a PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a PO Token for this client with --extractor-args "youtube:po_token=ios+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/Extractors#po-token-guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig WFTh6zlKa9WyDPsG => CzNKUMel9Ut-qQ
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig xmvh64e_u-zoD1X0 => iO2uQjisVHe5Eg
[youtube] zeJD6dqJ5lo: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] zeJD6dqJ5lo: Downloading 1 format(s): 401+251
[download] Downloading item 2 of 4
[youtube] Extracting URL: https://www.youtube.com/watch?v=cy8r7WSuT1I
[youtube] cy8r7WSuT1I: Downloading webpage
[youtube] cy8r7WSuT1I: Downloading ios player API JSON
[youtube] cy8r7WSuT1I: Downloading tv player API JSON
[debug] [youtube] cy8r7WSuT1I: ios client https formats require a PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a PO Token for this client with --extractor-args "youtube:po_token=ios+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/Extractors#po-token-guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig FZn0CbGXqWhlkCcn => g_Ot3QuVr0mPrw
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig RFhUwgVrgHSrdhCR => jRNYMhgSHoHYtA
[youtube] cy8r7WSuT1I: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] cy8r7WSuT1I: Downloading 1 format(s): 401+251-1
[download] Downloading item 3 of 4
[youtube] Extracting URL: https://www.youtube.com/watch?v=IaSGqQa5O-M
[youtube] IaSGqQa5O-M: Downloading webpage
[youtube] IaSGqQa5O-M: Downloading ios player API JSON
[youtube] IaSGqQa5O-M: Downloading tv player API JSON
[debug] [youtube] IaSGqQa5O-M: ios client https formats require a PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a PO Token for this client with --extractor-args "youtube:po_token=ios+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/Extractors#po-token-guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig EkCCni_0P44fKyKZ => jAHvNr-9_wILHw
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig 3eE3RIKu5Ovxa1DE => YdSmEh2Qg4eJkw
[youtube] IaSGqQa5O-M: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] IaSGqQa5O-M: Downloading 1 format(s): 401+251
[download] Downloading item 4 of 4
[youtube] Extracting URL: https://www.youtube.com/watch?v=d_qvLDhkg00
[youtube] d_qvLDhkg00: Downloading webpage
[youtube] d_qvLDhkg00: Downloading ios player API JSON
[youtube] d_qvLDhkg00: Downloading tv player API JSON
[debug] [youtube] d_qvLDhkg00: ios client https formats require a PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a PO Token for this client with --extractor-args "youtube:po_token=ios+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/Extractors#po-token-guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig Xc6WtuFpYV6mZx6Q => KNNLIcUocRl3vw
[debug] Loading youtube-nsig.0b866fa6 from cache
[debug] [youtube] Decrypted nsig WwXbzskCeIen8oF8 => iys5vHbyKhYejw
[youtube] d_qvLDhkg00: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] d_qvLDhkg00: Downloading 1 format(s): 401+251
[download] Finished downloading playlist: Central limit theorem
```
|
site-bug,triage,site:youtube
|
low
|
Critical
|
2,786,178,498
|
pytorch
|
[inductor] [cuda] [fake tensor] `torch.nextafter` loses the device-mismatch check for tensors on different devices under inductor
|
### 🐛 Describe the bug
Actually, I am not sure whether this is an eager issue or an inductor issue.
From my understanding, eager should arguably accept this the way `torch.add` accepts a 0-dim CPU tensor alongside a CUDA tensor (under inductor, `x = torch.nextafter(x, torch.tensor(1.0))` passes the check).
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.enable_grad(False)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = torch.nextafter(x, torch.tensor(1.0))
return x
model = Model().cuda()
x = torch.randn(1).cuda()
inputs = [x]
try:
output = model(*inputs)
except Exception as e:
print("fails on eager")
print(e)
try:
model = torch.compile(model)
output = model(*inputs)
except Exception as e:
print("fails on inductor")
print(e)
```
log
```
fails on eager
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument other in method wrapper_CUDA_nextafter)
```
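For reference, building the scalar operand on the input's device (and dtype) sidesteps the eager error entirely; the helper name below is illustrative, not part of the report:

```python
import torch

# Device-safe variant: the 0-dim operand is created on x's device, so the
# eager device-mismatch check cannot trip.
def nextafter_towards_one(x: torch.Tensor) -> torch.Tensor:
    other = torch.tensor(1.0, dtype=x.dtype, device=x.device)
    return torch.nextafter(x, other)
```

With both operands on the same device, eager and the compiled model behave identically.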
### Versions
PyTorch version: 2.7.0.dev20250112+cu124
GPU: Tesla V100-SXM2-32GB
<details>
<summary>click here for detailed env</summary>
```
PyTorch version: 2.7.0.dev20250112+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 550.142
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250112+cu124
[pip3] torchaudio==2.6.0.dev20250112+cu124
[pip3] torchvision==0.22.0.dev20250112+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250112+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250112+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250112+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh
|
triaged,oncall: pt2,module: fakeTensor,module: inductor,module: pt2-dispatcher
|
low
|
Critical
|
2,786,183,685
|
transformers
|
Concatenating past_key_values outside model.generate produces disordered output
|
### System Info
- `transformers` version: 4.47.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA 4090-24G
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have a problem when using past_key_values with generate. I want to concatenate the prompt KV cache with the current chat KV cache to limit GPU memory, but the result is wrong: the model still produces output, just not a correctly ordered sentence.
The code looks like this:
```python
# prompt_kv_cache can be used
past_key_values = None
output = model.generate(input_ids, past_key_values=past_key_values)
past_key_values = output.past_key_values
# ... some chat turns
last_past = []
for layer_past in past_key_values:
    past_keys, past_values = layer_past
    trimmed_past_keys = past_keys[:, :, -len_kv:, :]
    trimmed_past_values = past_values[:, :, -len_kv:, :]
    last_past.append((trimmed_past_keys, trimmed_past_values))
_past_all = []
for p1_layer, p2_layer in zip(prompt_kv_cache, tuple(last_past)):  # prompt_kv_cache can be used for init; this works
    _past_all.append(tuple(torch.cat((p1, p2), dim=-2) for p1, p2 in zip(p1_layer, p2_layer)))
output = model.generate(input_ids, past_key_values=tuple(_past_all))
```
Now the generated text is out of order.
For example, if the last input is "i want to eat",
the output may be generated like "you \n\n need need some some some \n\n food food\n" or "you \n\n food \n need need some some some \n\n food food\n".
What is wrong with this way of concatenating past_key_values?
### Expected behavior
I want to know how to concatenate an earlier KV cache with the current KV cache.
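The per-layer concatenation itself can be done with plain torch (a toy sketch, not the transformers API; shapes are `(batch, num_heads, seq_len, head_dim)`). Note, however, that in many decoder models the keys carry position information (e.g. RoPE) baked in, so the merged cache is only valid if token positions still line up with cache slots; trimming the middle of a conversation breaks that alignment, which is one likely cause of the scrambled output described above:

```python
import torch

def cat_legacy_caches(cache_a, cache_b):
    """Concatenate two legacy-format caches layer by layer along the
    sequence axis (dim=-2). Each cache is a tuple of (key, value) pairs."""
    merged = []
    for (k_a, v_a), (k_b, v_b) in zip(cache_a, cache_b):
        merged.append((torch.cat((k_a, k_b), dim=-2),
                       torch.cat((v_a, v_b), dim=-2)))
    return tuple(merged)
```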
|
bug,Generation
|
low
|
Minor
|
2,786,184,943
|
rust
|
Reference to literal is not optimized out
|
https://godbolt.org/z/W1Gbnfjz9
Consider code like this; the generated code includes an unnecessary reference to the literal:
```rust
const T: [u8; 512] = ...;
pub fn f(a: usize) -> u8 {
*T.get(a).unwrap_or(&1)
}
```
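For comparison, the same lookup can be written by value with `copied()`, which removes the need for a reference to the fallback literal at the source level; whether this changes the codegen here is untested, and the array contents below are placeholders since the original constant is elided:

```rust
const T: [u8; 512] = [7; 512]; // placeholder contents; the original is elided

// By-value lookup: `copied()` turns Option<&u8> into Option<u8>,
// so `unwrap_or` takes the fallback by value rather than by reference.
pub fn f(a: usize) -> u8 {
    T.get(a).copied().unwrap_or(1)
}
```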
|
T-compiler,C-optimization
|
low
|
Minor
|
2,786,192,233
|
rust
|
ICE: invalid `CoerceUnsized` impl_source: Err(FulfillmentError)
|
<!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
If you need me to try to remove more of the dependencies, please let me know, but I got tired while producing the MRE. I managed to remove all the macros and shuttle dependencies but reached a local minimum and would have had to start unwrapping tonic next. If that's needed, let me know; I wasn't sure it was worth the effort. The code can also be found in [this repo](https://github.com/c-git/ice_mre_invalid_CoerceUnsized). Because I didn't finish removing all the dependencies, I've also included the `Cargo.toml`.
### Cargo.toml
```toml
[package]
name = "ice_mre_invalid_coerce_unsized"
version = "0.1.0"
edition = "2021"
[dependencies]
async-trait = "0.1.85"
http = "1.2"
opendal = "0.51"
serde = "1.0.217"
serde_json = "1.0.135"
tokio = { version = "1.43", default-features = false, features = ["macros", "rt-multi-thread"] }
tonic = "0.12.3"
tower = "0.5.2"
```
### Code
```Rust
use async_trait::async_trait;
use serde::Deserialize;
use std::future::Future;
use tonic::{server::NamedService, transport::Server};
#[tokio::main]
async fn main() {
let svc = RuntimeServer::new(runner);
Server::builder().add_service(svc);
}
async fn runner() {
    // Doesn't happen if I trivially replace either of these lines, e.g. with a todo!() instead of the assigned value
let x: Wrapper = serde_json::from_slice("".as_bytes()).unwrap();
let operator: opendal::Operator = x.inner.unwrap();
// Didn't test
// operator.delete_stream().await.unwrap();
// operator.delete_try_stream().await.unwrap();
    // Only happens with any one of the following async functions, but not with the other functions (barring the two mentioned as not tested above)
operator.check().await.unwrap();
operator.write("", vec![0; 2]).await.unwrap();
operator.writer("").await.unwrap();
operator.write_with("", "").await.unwrap();
operator
.delete_try_iter([Ok("")].into_iter())
.await
.unwrap();
operator.deleter().await.unwrap();
operator.list("").await.unwrap();
operator.remove_all("").await.unwrap();
operator.list_with("").await.unwrap();
operator.lister("").await.unwrap();
operator.lister_with("").await.unwrap();
}
pub struct RuntimeServer<T: 'static + Clone> {
runner: T,
}
#[derive(Default, Deserialize)]
struct Wrapper {
#[serde(skip)]
inner: Option<opendal::Operator>,
}
pub type BoxFuture<T, E> =
std::pin::Pin<Box<dyn self::Future<Output = Result<T, E>> + Send + 'static>>;
#[async_trait]
pub trait Runner: Send + Sync + Clone {}
#[async_trait]
impl<T> Runner for RuntimeServer<T> where T: Runner + Send + 'static {}
#[async_trait]
impl<F, O> Runner for F
where
F: FnOnce() -> O + Send + Sync + Clone,
O: Future<Output = ()> + Send,
{
}
impl<T: Runner> Clone for RuntimeServer<T> {
fn clone(&self) -> Self {
todo!()
}
}
impl<T: Runner> NamedService for RuntimeServer<T> {
const NAME: &'static str = "";
}
impl<T: Runner> RuntimeServer<T> {
pub fn new(_inner: T) -> Self {
todo!()
}
}
impl<T, B> tower::Service<http::Request<B>> for RuntimeServer<T>
where
T: Runner,
B: tonic::codegen::Body + Send + 'static,
B::Error: std::error::Error + Sync + Send + 'static,
{
type Response = http::Response<tonic::body::BoxBody>;
type Error = std::convert::Infallible;
type Future = BoxFuture<Self::Response, Self::Error>;
fn poll_ready(
&mut self,
_cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), Self::Error>> {
todo!()
}
fn call(&mut self, req: http::Request<B>) -> Self::Future {
match req.uri().path() {
"" => {
#[allow(non_camel_case_types, dead_code)]
struct LoadSvc<T: Runner>(pub T);
impl<T: Runner> tonic::server::UnaryService<()> for LoadSvc<T> {
type Response = ();
type Future = BoxFuture<tonic::Response<Self::Response>, tonic::Status>;
fn call(&mut self, _request: tonic::Request<()>) -> Self::Future {
todo!()
}
}
let inner = self.runner.clone();
let fut = async move {
let method = LoadSvc(inner);
let codec = tonic::codec::ProstCodec::default();
let mut grpc = tonic::server::Grpc::new(codec);
let res = grpc.unary(method, req).await;
Ok(res)
};
Box::pin(fut)
}
_ => todo!(),
}
}
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
### Error output
```
Compiling ice_mre_invalid_coerce_unsized v0.1.0 (/home/one/ice_mre_invalid_CoerceUnsized)
error: internal compiler error: compiler/rustc_monomorphize/src/lib.rs:46:13: invalid `CoerceUnsized` impl_source: Err(FulfillmentError)
thread 'rustc' panicked at compiler/rustc_monomorphize/src/lib.rs:46:13:
Box<dyn Any>
stack backtrace:
0: 0x75582558682a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::ha4a311b32f6b4ad8
1: 0x755825c277e2 - core::fmt::write::h1866771663f62b81
2: 0x755826a81d51 - std::io::Write::write_fmt::hb549e7444823135e
3: 0x755825586682 - std::sys::backtrace::BacktraceLock::print::hddd3a9918ce29aa7
4: 0x755825588b5a - std::panicking::default_hook::{{closure}}::h791f75256b902d7d
5: 0x7558255889c0 - std::panicking::default_hook::h82cc572fcb0d8cd7
6: 0x755824614f55 - std[1b49f43dde054edc]::panicking::update_hook::<alloc[f0e0d4128a1437e6]::boxed::Box<rustc_driver_impl[c421ed190efad9be]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x755825589238 - std::panicking::rust_panic_with_hook::he21644cc2707f2c4
8: 0x75582464fa01 - std[1b49f43dde054edc]::panicking::begin_panic::<rustc_errors[fd0d1ab268a7514d]::ExplicitBug>::{closure#0}
9: 0x7558246429c6 - std[1b49f43dde054edc]::sys::backtrace::__rust_end_short_backtrace::<std[1b49f43dde054edc]::panicking::begin_panic<rustc_errors[fd0d1ab268a7514d]::ExplicitBug>::{closure#0}, !>
10: 0x75582463df99 - std[1b49f43dde054edc]::panicking::begin_panic::<rustc_errors[fd0d1ab268a7514d]::ExplicitBug>
11: 0x755824659971 - <rustc_errors[fd0d1ab268a7514d]::diagnostic::BugAbort as rustc_errors[fd0d1ab268a7514d]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x755824cc8913 - rustc_middle[60437f3b60b3af56]::util::bug::opt_span_bug_fmt::<rustc_span[200b27ea0e9a3b9b]::span_encoding::Span>::{closure#0}
13: 0x755824cafe8a - rustc_middle[60437f3b60b3af56]::ty::context::tls::with_opt::<rustc_middle[60437f3b60b3af56]::util::bug::opt_span_bug_fmt<rustc_span[200b27ea0e9a3b9b]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x755824cafd1b - rustc_middle[60437f3b60b3af56]::ty::context::tls::with_context_opt::<rustc_middle[60437f3b60b3af56]::ty::context::tls::with_opt<rustc_middle[60437f3b60b3af56]::util::bug::opt_span_bug_fmt<rustc_span[200b27ea0e9a3b9b]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7558231e4110 - rustc_middle[60437f3b60b3af56]::util::bug::bug_fmt
16: 0x755827655c2d - rustc_monomorphize[64293748b2428815]::collector::find_vtable_types_for_unsizing.cold
17: 0x755823466669 - rustc_monomorphize[64293748b2428815]::collector::items_of_instance
18: 0x7558262d92b2 - rustc_query_impl[d10191050d412fc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d10191050d412fc]::query_impl::items_of_instance::dynamic_query::{closure#2}::{closure#0}, rustc_middle[60437f3b60b3af56]::query::erase::Erased<[u8; 32usize]>>
19: 0x7558262d73b4 - rustc_query_system[c1574a252f7419c]::query::plumbing::try_execute_query::<rustc_query_impl[d10191050d412fc]::DynamicConfig<rustc_query_system[c1574a252f7419c]::query::caches::DefaultCache<(rustc_middle[60437f3b60b3af56]::ty::instance::Instance, rustc_middle[60437f3b60b3af56]::mir::mono::CollectionMode), rustc_middle[60437f3b60b3af56]::query::erase::Erased<[u8; 32usize]>>, false, false, false>, rustc_query_impl[d10191050d412fc]::plumbing::QueryCtxt, true>
20: 0x7558262d5d18 - rustc_query_impl[d10191050d412fc]::query_impl::items_of_instance::get_query_incr::__rust_end_short_backtrace
21: 0x7558262d3572 - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec::{closure#0}
22: 0x755826e33048 - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
23: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
24: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
25: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
26: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
27: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
28: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
29: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
30: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
31: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
32: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
33: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
34: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
35: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
36: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
37: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
38: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
39: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
40: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
41: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
42: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
43: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
44: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
45: 0x755826e3399e - rustc_monomorphize[64293748b2428815]::collector::collect_items_rec
46: 0x755826648de9 - rustc_monomorphize[64293748b2428815]::partitioning::collect_and_partition_mono_items
47: 0x755826648416 - rustc_query_impl[d10191050d412fc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d10191050d412fc]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[60437f3b60b3af56]::query::erase::Erased<[u8; 24usize]>>
48: 0x7558266483e3 - <rustc_query_impl[d10191050d412fc]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[9e3ec3a99e20741e]::ops::function::FnOnce<(rustc_middle[60437f3b60b3af56]::ty::context::TyCtxt, ())>>::call_once
49: 0x755826b255c0 - rustc_query_system[c1574a252f7419c]::query::plumbing::try_execute_query::<rustc_query_impl[d10191050d412fc]::DynamicConfig<rustc_query_system[c1574a252f7419c]::query::caches::SingleCache<rustc_middle[60437f3b60b3af56]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[d10191050d412fc]::plumbing::QueryCtxt, true>
50: 0x755826b24f09 - rustc_query_impl[d10191050d412fc]::query_impl::collect_and_partition_mono_items::get_query_incr::__rust_end_short_backtrace
51: 0x755826b7c547 - <rustc_codegen_llvm[87a67cd1a6f247bf]::LlvmCodegenBackend as rustc_codegen_ssa[47ed54211a626f01]::traits::backend::CodegenBackend>::codegen_crate
52: 0x755826b8e327 - <rustc_interface[aa3cb6198a62650b]::queries::Linker>::codegen_and_build_linker
53: 0x755826a652c8 - rustc_interface[aa3cb6198a62650b]::interface::run_compiler::<core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>, rustc_driver_impl[c421ed190efad9be]::run_compiler::{closure#0}>::{closure#1}
54: 0x755826aa01e0 - std[1b49f43dde054edc]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[aa3cb6198a62650b]::util::run_in_thread_with_globals<rustc_interface[aa3cb6198a62650b]::util::run_in_thread_pool_with_globals<rustc_interface[aa3cb6198a62650b]::interface::run_compiler<core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>, rustc_driver_impl[c421ed190efad9be]::run_compiler::{closure#0}>::{closure#1}, core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>>::{closure#0}, core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>>
55: 0x755826a9fefd - <<std[1b49f43dde054edc]::thread::Builder>::spawn_unchecked_<rustc_interface[aa3cb6198a62650b]::util::run_in_thread_with_globals<rustc_interface[aa3cb6198a62650b]::util::run_in_thread_pool_with_globals<rustc_interface[aa3cb6198a62650b]::interface::run_compiler<core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>, rustc_driver_impl[c421ed190efad9be]::run_compiler::{closure#0}>::{closure#1}, core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>>::{closure#0}, core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[9e3ec3a99e20741e]::result::Result<(), rustc_span[200b27ea0e9a3b9b]::ErrorGuaranteed>>::{closure#1} as core[9e3ec3a99e20741e]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
56: 0x755826a9f6b9 - std::sys::pal::unix::thread::Thread::new::thread_start::h14f1eb868ff90fc9
57: 0x755820c94ac3 - start_thread
at ./nptl/pthread_create.c:442:8
58: 0x755820d26850 - __GI___clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
59: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.84.0 (9fc6b4312 2025-01-07) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [items_of_instance] collecting items used by `<impl at src/main.rs:80:1: 84:57>::call`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
error: could not compile `ice_mre_invalid_coerce_unsized` (bin "ice_mre_invalid_coerce_unsized")
Caused by:
process didn't exit successfully: `/home/one/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name ice_mre_invalid_coerce_unsized --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=bc9219222c47fdd2 -C extra-filename=-bc9219222c47fdd2 --out-dir /home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps -C incremental=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/incremental -L dependency=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps --extern async_trait=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libasync_trait-2e38ce1bf36df4a8.so --extern http=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libhttp-3a8f7e5e71542b5c.rlib --extern opendal=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libopendal-c01db150d8f9c96b.rlib --extern serde=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libserde-0a7940d4cb86f183.rlib --extern serde_json=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libserde_json-e7d76399483ec258.rlib --extern tokio=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libtokio-8b2b5462a056c1d5.rlib --extern tonic=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libtonic-ab3d5e4d75427b48.rlib --extern tower=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libtower-0e519a02860ed7e5.rlib -L native=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/build/ring-c42c0ae835c8384f/out` (exit status: 101)
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
Compiling ice_mre_invalid_coerce_unsized v0.1.0 (/home/one/ice_mre_invalid_CoerceUnsized)
error: internal compiler error: compiler/rustc_monomorphize/src/lib.rs:46:13: invalid `CoerceUnsized` impl_source: Err(FulfillmentError)
thread 'rustc' panicked at compiler/rustc_monomorphize/src/lib.rs:46:13:
Box<dyn Any>
stack backtrace:
0: std::panicking::begin_panic::<rustc_errors::ExplicitBug>
1: <rustc_errors::diagnostic::BugAbort as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
2: rustc_middle::util::bug::opt_span_bug_fmt::<rustc_span::span_encoding::Span>::{closure#0}
3: rustc_middle::ty::context::tls::with_opt::<rustc_middle::util::bug::opt_span_bug_fmt<rustc_span::span_encoding::Span>::{closure#0}, !>::{closure#0}
4: rustc_middle::ty::context::tls::with_context_opt::<rustc_middle::ty::context::tls::with_opt<rustc_middle::util::bug::opt_span_bug_fmt<rustc_span::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
5: rustc_middle::util::bug::bug_fmt
6: rustc_monomorphize::collector::find_vtable_types_for_unsizing.cold
7: rustc_monomorphize::collector::items_of_instance
[... omitted 1 frame ...]
8: rustc_monomorphize::collector::collect_items_rec::{closure#0}
9: rustc_monomorphize::collector::collect_items_rec
10: rustc_monomorphize::collector::collect_items_rec
11: rustc_monomorphize::collector::collect_items_rec
12: rustc_monomorphize::collector::collect_items_rec
13: rustc_monomorphize::collector::collect_items_rec
14: rustc_monomorphize::collector::collect_items_rec
15: rustc_monomorphize::collector::collect_items_rec
16: rustc_monomorphize::collector::collect_items_rec
17: rustc_monomorphize::collector::collect_items_rec
18: rustc_monomorphize::collector::collect_items_rec
19: rustc_monomorphize::collector::collect_items_rec
20: rustc_monomorphize::collector::collect_items_rec
21: rustc_monomorphize::collector::collect_items_rec
22: rustc_monomorphize::collector::collect_items_rec
23: rustc_monomorphize::collector::collect_items_rec
24: rustc_monomorphize::collector::collect_items_rec
25: rustc_monomorphize::collector::collect_items_rec
26: rustc_monomorphize::collector::collect_items_rec
27: rustc_monomorphize::collector::collect_items_rec
28: rustc_monomorphize::collector::collect_items_rec
29: rustc_monomorphize::collector::collect_items_rec
30: rustc_monomorphize::collector::collect_items_rec
31: rustc_monomorphize::collector::collect_items_rec
32: rustc_monomorphize::collector::collect_items_rec
33: rustc_monomorphize::partitioning::collect_and_partition_mono_items
[... omitted 2 frames ...]
34: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::codegen_crate
35: <rustc_interface::queries::Linker>::codegen_and_build_linker
36: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.84.0 (9fc6b4312 2025-01-07) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [items_of_instance] collecting items used by `<impl at src/main.rs:80:1: 84:57>::call`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
error: could not compile `ice_mre_invalid_coerce_unsized` (bin "ice_mre_invalid_coerce_unsized")
Caused by:
process didn't exit successfully: `/home/one/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name ice_mre_invalid_coerce_unsized --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=bc9219222c47fdd2 -C extra-filename=-bc9219222c47fdd2 --out-dir /home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps -C incremental=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/incremental -L dependency=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps --extern async_trait=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libasync_trait-2e38ce1bf36df4a8.so --extern http=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libhttp-3a8f7e5e71542b5c.rlib --extern opendal=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libopendal-c01db150d8f9c96b.rlib --extern serde=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libserde-0a7940d4cb86f183.rlib --extern serde_json=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libserde_json-e7d76399483ec258.rlib --extern tokio=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libtokio-8b2b5462a056c1d5.rlib --extern tonic=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libtonic-ab3d5e4d75427b48.rlib --extern tower=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/deps/libtower-0e519a02860ed7e5.rlib -L native=/home/one/ice_mre_invalid_CoerceUnsized/target/debug/build/ring-c42c0ae835c8384f/out` (exit status: 101)
```
</p>
</details>
|
I-ICE,T-compiler,C-bug,S-has-mcve,A-monomorphization,S-has-bisection
|
low
|
Critical
|
2,786,205,856
|
langchain
|
ChatOpenAI with structured output, outputs always all field of my pydantic object.
|
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
input_messages = [
SystemMessage(content='You are a helpful assistant please output valid json.'),
HumanMessage(content='What is the weather in Tokyo?'),
]
structured_llm = llm.with_structured_output(
output_class, include_raw=True
)
response: dict[str, Any] = await structured_llm.ainvoke(input_messages) # type: ignore
```
The JSON schema of our `output_class` pydantic model, obtained with
```python
output_class.model_json_schema()
```
(printed as a Python dict, hence `None` rather than `null`), looks like:
```json
"$defs": {"ActionModel": {"properties": {"search_google": {"anyOf": [
{"$ref": "#/$defs/SearchGoogleAction"
},
{"type": "null"
}
], "default": None
}, "go_to_url": {"anyOf": [
{"$ref": "#/$defs/GoToUrlAction"
},
{"type": "null"
}
], "default": None
}, "go_back": {"anyOf": [
{"$ref": "#/$defs/go_backParams"
},
{"type": "null"
}
], "default": None
}, "click_element": {"anyOf": [
{"$ref": "#/$defs/ClickElementAction"
},
{"type": "null"
}
], "default": None
}, "input_text": {"anyOf": [
{"$ref": "#/$defs/InputTextAction"
},
{"type": "null"
}
], "default": None
}, "switch_tab": {"anyOf": [
{"$ref": "#/$defs/SwitchTabAction"
},
{"type": "null"
}
], "default": None
}, "open_tab": {"anyOf": [
{"$ref": "#/$defs/OpenTabAction"
},
{"type": "null"
}
], "default": None
}, "extract_content": {"anyOf": [
{"$ref": "#/$defs/ExtractPageContentAction"
},
{"type": "null"
}
], "default": None
}, "done": {"anyOf": [
{"$ref": "#/$defs/DoneAction"
},
{"type": "null"
}
], "default": None
}, "scroll_down": {"anyOf": [
{"$ref": "#/$defs/ScrollAction"
},
{"type": "null"
}
], "default": None
}, "scroll_up": {"anyOf": [
{"$ref": "#/$defs/ScrollAction"
},
{"type": "null"
}
], "default": None
}, "send_keys": {"anyOf": [
{"$ref": "#/$defs/SendKeysAction"
},
{"type": "null"
}
], "default": None
}, "scroll_to_text": {"anyOf": [
{"$ref": "#/$defs/scroll_to_textParams"
},
{"type": "null"
}
], "default": None
}, "get_dropdown_options": {"anyOf": [
{"$ref": "#/$defs/get_dropdown_optionsParams"
},
{"type": "null"
}
], "default": None
}, "select_dropdown_option": {"anyOf": [
{"$ref": "#/$defs/select_dropdown_optionParams"
},
{"type": "null"
}
], "default": None
}
}, "title": "ActionModel", "type": "object"
}, "AgentBrain": {"description": "Current state of the agent", "properties": {"evaluation_previous_goal": {"title": "Evaluation Previous Goal", "type": "string"
}, "memory": {"title": "Memory", "type": "string"
}, "next_goal": {"title": "Next Goal", "type": "string"
}
}, "required": ["evaluation_previous_goal", "memory", "next_goal"
], "title": "AgentBrain", "type": "object"
}, "ClickElementAction": {"properties": {"index": {"title": "Index", "type": "integer"
}, "xpath": {"anyOf": [
{"type": "string"
},
{"type": "null"
}
], "default": None, "title": "Xpath"
}
}, "required": ["index"
], "title": "ClickElementAction", "type": "object"
}, "DoneAction": {"properties": {"text": {"title": "Text", "type": "string"
}
}, "required": ["text"
], "title": "DoneAction", "type": "object"
}, "ExtractPageContentAction": {"properties": {"include_links": {"title": "Include Links", "type": "boolean"
}
}, "required": ["include_links"
], "title": "ExtractPageContentAction", "type": "object"
}, "GoToUrlAction": {"properties": {"url": {"title": "Url", "type": "string"
}
}, "required": ["url"
], "title": "GoToUrlAction", "type": "object"
}, "InputTextAction": {"properties": {"index": {"title": "Index", "type": "integer"
}, "text": {"title": "Text", "type": "string"
}, "xpath": {"anyOf": [
{"type": "string"
},
{"type": "null"
}
], "default": None, "title": "Xpath"
}
}, "required": ["index", "text"
], "title": "InputTextAction", "type": "object"
}, "OpenTabAction": {"properties": {"url": {"title": "Url", "type": "string"
}
}, "required": ["url"
], "title": "OpenTabAction", "type": "object"
}, "ScrollAction": {"properties": {"amount": {"anyOf": [
{"type": "integer"
},
{"type": "null"
}
], "default": None, "title": "Amount"
}
}, "title": "ScrollAction", "type": "object"
}, "SearchGoogleAction": {"properties": {"query": {"title": "Query", "type": "string"
}
}, "required": ["query"
], "title": "SearchGoogleAction", "type": "object"
}, "SendKeysAction": {"properties": {"keys": {"title": "Keys", "type": "string"
}
}, "required": ["keys"
], "title": "SendKeysAction", "type": "object"
}, "SwitchTabAction": {"properties": {"page_id": {"title": "Page Id", "type": "integer"
}
}, "required": ["page_id"
], "title": "SwitchTabAction", "type": "object"
}, "get_dropdown_optionsParams": {"properties": {"index": {"title": "Index", "type": "integer"
}
}, "required": ["index"
], "title": "get_dropdown_optionsParams", "type": "object"
}, "go_backParams": {"properties": {}, "title": "go_backParams", "type": "object"
}, "scroll_to_textParams": {"properties": {"text": {"title": "Text", "type": "string"
}
}, "required": ["text"
], "title": "scroll_to_textParams", "type": "object"
}, "select_dropdown_optionParams": {"properties": {"index": {"title": "Index", "type": "integer"
}, "text": {"title": "Text", "type": "string"
}
}, "required": ["index", "text"
], "title": "select_dropdown_optionParams", "type": "object"
}
}, "properties": {"current_state": {"$ref": "#/$defs/AgentBrain"
}, "action": {"items": {"$ref": "#/$defs/ActionModel"
}, "title": "Action", "type": "array"
}
}, "required": ["current_state", "action"
], "title": "AgentOutput", "type": "object"
}
```
### Error Message and Stack Trace (if applicable)
Output:
```json
{
"current_state": {
"evaluation_previous_goal": "Determine the current weather in Tokyo.",
"memory": "The user is interested in the current weather conditions in Tokyo.",
"next_goal": "Search for the current weather in Tokyo using an online weather service or search engine."
},
"action": [
{
"search_google": {
"query": "current weather in Tokyo"
},
"go_to_url": null,
"go_back": null,
"click_element": null,
"input_text": null,
"switch_tab": null,
"open_tab": null,
"extract_content": null,
"done": null,
"scroll_down": null,
"scroll_up": null,
"send_keys": null,
"scroll_to_text": null,
"get_dropdown_options": null,
"select_dropdown_option": null
}
]
}
```
Expected output:
```json
{
"current_state": {
"evaluation_previous_goal": "Determine the current weather in Tokyo.",
"memory": "The user is interested in the current weather conditions in Tokyo.",
"next_goal": "Search for the current weather in Tokyo using an online weather service or search engine."
},
"action": [
{
"search_google": {
"query": "current weather in Tokyo"
}
}
]
}
```
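Until the schema handling is fixed, the extra null fields can be stripped client-side before the result is consumed. A minimal sketch, assuming the parsed output is available as plain dicts/lists; `drop_nulls` is a hypothetical helper, not part of the LangChain API. Note this recovers the compact shape but does not save the tokens the model spends emitting the null fields:

```python
def drop_nulls(obj):
    """Recursively remove dict keys whose value is None.

    Post-processing sketch; not part of the LangChain API.
    """
    if isinstance(obj, dict):
        return {k: drop_nulls(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [drop_nulls(v) for v in obj]
    return obj

raw = {
    "action": [
        {"search_google": {"query": "current weather in Tokyo"},
         "go_to_url": None,
         "click_element": None}
    ]
}
compact = drop_nulls(raw)
# → {'action': [{'search_google': {'query': 'current weather in Tokyo'}}]}
```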
### Description
Hey, I want to upgrade our library browser-use to your new 3.0.0 version.
But with ChatOpenAI (gpt-4), the model now outputs all fields for each element of the action key, while we only want a single action per element, e.g. only "search_google".
Omitting the null fields saves many tokens and also prevents the model from outputting multiple actions in the same element.
In version 2.24 this worked and gave us our desired output.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:33:00 PDT 2023; root:xnu-10002.41.9~7/RELEASE_ARM64_T6031
> Python Version: 3.11.10 (main, Oct 16 2024, 08:56:36) [Clang 18.1.8 ]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_anthropic: 0.3.1
> langchain_aws: 0.2.10
> langchain_fireworks: 0.2.5
> langchain_google: 0.1.1
> langchain_google_genai: 2.0.8
> langchain_mistralai: 0.2.4
> langchain_ollama: 0.2.2
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.4
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> async-timeout: 5.0.1
> boto3: 1.35.97
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> filetype: 1.2.0
> fireworks-ai: 0.15.8
> google-generativeai: 0.8.3
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> ollama: 0.4.5
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.21.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
|
🤖:bug,investigate,Ɑ: core
|
low
|
Critical
|
2,786,208,156
|
flutter
|
Selection handle does not use `CupertinoThemeData.primaryColor` from the nearest theme on iOS
|
### Steps to reproduce
1. Open the app on iOS
2. Enter text into a TextField
3. Double tap to select the text
### Expected results
The selection handles should be blue/`CupertinoThemeData.primaryColor` as stated in the [docs](https://api.flutter.dev/flutter/material/TextSelectionThemeData/selectionHandleColor.html). This seems to have regressed in Flutter 3.27.
### Actual results
The selection handle color remains purple, ignoring `CupertinoThemeData.primaryColor`.

### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() => runApp(const Foo());
class Foo extends StatelessWidget {
const Foo({super.key});
@override
Widget build(BuildContext context) => MaterialApp(
home: Scaffold(
body: Material(
color: Colors.transparent,
child: Theme(
data: Theme.of(context).copyWith(
textSelectionTheme: TextSelectionThemeData(
cursorColor: Colors.blue,
selectionColor: Colors.blue.withValues(alpha: 0.4),
selectionHandleColor: Colors.blue,
),
cupertinoOverrideTheme: const CupertinoThemeData(
primaryColor: Colors.blue,
),
),
child: const Center(child: TextField()),
),
),
),
);
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-SG)
• Flutter version 3.27.1 on channel stable at /Users/matthias/Documents/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/matthias/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Users/matthias/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Users/matthias/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1.1)
• IntelliJ at /Users/matthias/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 83.0.4
• Dart plugin version 243.23177
[✓] Connected device (4 available)
• Overpriced Phone (mobile) • 00008030-0009195E1E41802E • ios • iOS 18.1.1 22B91
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
a: text input,platform-ios,framework,f: material design,f: cupertino,has reproducible steps,P2,a: adaptivity,workaround available,team-design,triaged-design,found in release: 3.27,found in release: 3.28
|
low
|
Major
|
2,786,223,801
|
flutter
|
iOS 18.2 freezes when selecting pictures
|
### Steps to reproduce
iOS 18.2 freezes when selecting pictures
### Expected results
iOS 18.2 freezes when selecting pictures
### Actual results
iOS 18.2 freezes when selecting pictures
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
|
waiting for customer response,in triage
|
low
|
Minor
|
2,786,249,765
|
vscode
|
Edit context adds a tabstop between chat input and buttons
|
- Focus the chat input
- press tab
- Focus goes to `.native-edit-context-textarea`
- Press tab again
- Now it's where I expect, on the next toolbar
|
bug,editor-edit-context
|
low
|
Minor
|
2,786,257,669
|
ollama
|
[Feature] Support Intel GPUs
|
Ollama previously supported Intel GPUs via the merged PR https://github.com/ollama/ollama/pull/2458.
But that functionality has since disappeared.
I see there are several issues and open PRs for Intel GPU support, but they are too old.
I want to draft PRs to support Intel GPUs — dGPU and iGPU (11th-gen Core and later) — by including the llama.cpp SYCL backend.
This issue is created to track the development work and reduce duplicated effort in the future.
|
feature request
|
low
|
Minor
|
2,786,292,464
|
PowerToys
|
The pop-up fast exception detection fails and the exception stalls
|
### Microsoft PowerToys version
0.87.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Installer
### Steps to reproduce
If my system has the latest version installed, the pop-up fast exception detection fails and the exception stalls, but version 0.86 works normally. If you need an image for debugging, please send an email to 2298192210@qq.com and I will reply with the picture.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,786,294,290
|
flutter
|
[IOS] Inconsistent Behavior of Password Save Prompt in Flutter Autofill using `TextInput.finishAutofillContext(shouldSave: true)`
|
## Description
The issue consists of two distinct aspects:
1. The problem related to `onDisposeAction: AutofillContextAction.cancel` appears to be resolved by the PR [#160653](https://github.com/flutter/flutter/pull/160653).
2. However, another issue remains unresolved: When `TextInput.finishAutofillContext(shouldSave: true)` is called for the first time, the password save prompt appears as expected. But subsequent calls to `TextInput.finishAutofillContext(shouldSave: true)` do not trigger the popup again unless the text fields are cleared, re-filled with new input, and then the method is called again.
### Expected Behavior
- The password save prompt should appear **every time** `TextInput.finishAutofillContext(shouldSave: true)` is invoked, regardless of prior invocations or text field states.
### Actual Behavior
- The password save prompt appears only the first time `TextInput.finishAutofillContext(shouldSave: true)` is invoked.
- Further invocations do not display the popup unless the text fields are cleared and re-filled.
## Steps to Reproduce
1. Implement an `AutofillGroup` with the following setup:
```dart
AutofillGroup(
onDisposeAction: AutofillContextAction.cancel,
child: Column(
children: [
TextField(
autofillHints: [AutofillHints.username],
),
TextField(
autofillHints: [AutofillHints.password],
obscureText: true,
),
],
),
);
```
2. Call `TextInput.finishAutofillContext(shouldSave: true);`. Observe that the password save prompt appears.
3. Call `TextInput.finishAutofillContext(shouldSave: true);` again without clearing or re-filling the text fields. Observe that the password save prompt does not appear.
4. Clear the text fields, enter new credentials, and call `TextInput.finishAutofillContext(shouldSave: true);` again. Observe that the password save prompt appears.
## Code Example
Here’s a simplified code snippet to reproduce the issue:
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: AutofillExample(),
);
}
}
class AutofillExample extends StatelessWidget {
final TextEditingController usernameController = TextEditingController();
final TextEditingController passwordController = TextEditingController();
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('Autofill Example')),
body: AutofillGroup(
onDisposeAction: AutofillContextAction.cancel,
child: Padding(
padding: const EdgeInsets.all(16.0),
child: Column(
children: [
TextField(
controller: usernameController,
autofillHints: [AutofillHints.username],
decoration: InputDecoration(labelText: 'Username'),
),
TextField(
controller: passwordController,
autofillHints: [AutofillHints.password],
obscureText: true,
decoration: InputDecoration(labelText: 'Password'),
),
ElevatedButton(
onPressed: () {
TextInput.finishAutofillContext(shouldSave: true);
},
child: Text('Submit'),
),
],
),
),
),
);
}
}
```
## Environment
- Flutter version: [3.24.5]
- Platform: [iOS]
## Additional Notes
The PR [#160653](https://github.com/flutter/flutter/pull/160653) addresses the issue with `onDisposeAction: AutofillContextAction.cancel`, but the behavior of `TextInput.finishAutofillContext(shouldSave: true)` as described above remains unresolved. This issue requires attention to ensure consistent behavior of the password save prompt.
|
a: text input,platform-ios,framework,has reproducible steps,P2,team-ios,triaged-ios,fyi-text-input,found in release: 3.27,found in release: 3.28
|
low
|
Minor
|
2,786,302,118
|
PowerToys
|
Add Window Snapping with Temporary Screen Boundaries in 'Mouse Without Borders'
|
### Description of the new feature / enhancement
Currently, 'Mouse Without Borders' allows seamless cursor movement across multiple computers, but dragging windows lacks the ability to snap them to specific areas like halves or quarters of the screen.
The proposed feature would temporarily enable screen boundaries when a window is being dragged, allowing it to snap to predefined areas. This improvement would make managing and arranging windows across multiple monitors more convenient and efficient.
### Scenario when this would be used?
- When using multiple computers connected with 'Mouse Without Borders.'
- Dragging windows between monitors.
- Arranging windows to snap to specific areas, such as halves or quarters of the screen.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,786,316,705
|
kubernetes
|
Long Lived TCP Connections Fail When Downscaling Kube Proxy (ExternalTrafficPolicy Cluster)
|
### What happened?
Some external cloud providers such as Azure use a pass-through (direct server return) load balancer. This means that TCP connections are not terminated at the load balancer, but rather downstream in the Kubernetes cluster.
externalTrafficPolicy: Cluster configures load balancers to send traffic to any node in the cluster, even if the node is not running a pod that the traffic is destined for. From there, kube-proxy routes the request to the pod (which could be on either the same or a different node).
In our case, when a client sets up a long-lived TCP connection and that traffic path hits kube-proxy on a node that scales down during the lifetime of the connection, we observe 520 errors from the server.
### What did you expect to happen?
Kubernetes nodes that act as an intermediate hop between the load balancer and the node/pod that the traffic is destined to should gracefully allow all existing TCP connections to finish before the node/processes are terminated.
This is important because some load balancers, such as Azure's, continue to send traffic over existing TCP connections even though the target node has been marked unhealthy.
### How can we reproduce it (as minimally and precisely as possible)?
1) Set up any service (e.g., nginx ingress), tolerated/tainted to node group A. Use a load balancer with external traffic policy: Cluster.
2) Scale up node pool B.
3) Simulate many HTTP requests targeting the load balancer endpoint of the service created in step 1.
4) Scale down node pool B.
5) Observe errors on requests that were proxied through a scaled-down node in node pool B.
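A rough sketch of the service setup and traffic steps with plain kubectl and curl (the deployment name `web` and the nginx image are placeholders I chose for illustration; node-pool scaling is cloud-specific and omitted):

```shell
# Expose a deployment through a cloud load balancer. externalTrafficPolicy
# defaults to Cluster, so any node may act as an intermediate hop.
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=LoadBalancer

# Make the routing policy explicit.
kubectl patch service web -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

# Drive sustained traffic at the external IP while node pool B scales down.
EXTERNAL_IP=$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do curl -sS -o /dev/null -w '%{http_code}\n' "http://$EXTERNAL_IP/"; done
```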
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
1.30
</details>
### Cloud provider
<details>
Azure
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
kind/bug,sig/network,sig/autoscaling,sig/cloud-provider,needs-triage
|
low
|
Critical
|
2,786,362,549
|
material-ui
|
[docs] Documentation error: integration with tailwindcss step 3
|
### Related page
https://mui.com/material-ui/integrations/interoperability/#tailwind-css
### Kind of issue
Broken demo
### Issue description
After creating a new Next.js project with the default options, to use both MUI and Tailwind CSS you need to follow https://mui.com/material-ui/integrations/nextjs/ and then https://mui.com/material-ui/integrations/interoperability/#tailwind-css
At step 3, everything already works without adding `important`; if you do add `important`, it stops working.
### Context
_No response_
**Search keywords**: tailwindcss nextjs
|
ready to take,support: docs-feedback
|
low
|
Critical
|
2,786,377,964
|
ui
|
[feat]: remove/rm/delete components using CLI
|
### Feature description
Since we already have npx shadcn@latest add <component>, why not have an npx shadcn@latest remove/rm/delete <component> command in the CLI?
Here is how this feature would work:
1. The user runs npx shadcn@latest remove/rm/delete <component>.
2. The CLI checks whether the component exists in the user's directory.
3. The component is removed only if it exists inside the UI folder.
4. If it does not exist in the UI folder, an error is thrown stating that the component does not exist -> Add it using npx shadcn@latest add <component>.
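A hypothetical session with the proposed command (the `remove` subcommand does not exist today; the component names, paths, and output are illustrative only):

```shell
# Remove an installed component (proposed, hypothetical subcommand).
npx shadcn@latest remove button
# would delete components/ui/button.tsx if it exists

# Attempting to remove a component that was never added (hypothetical output):
npx shadcn@latest remove carousel
# error: "carousel" not found in components/ui.
# Add it first with: npx shadcn@latest add carousel
```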
### Affected component/components
None
### Additional Context
@shadcn Please check this out!
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs
|
area: request
|
low
|
Critical
|
2,786,420,509
|
rust
|
ICE: demand: index out of bounds
|
<!--
[31mICE[0m: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_hir_typeck/src/demand.rs:409:53: 'index out of bounds: the len is 1 but the index is 1'', 'thread 'rustc' panicked at compiler/rustc_hir_typeck/src/demand.rs:409:53: 'index out of bounds: the len is 1 but the index is 1''
File: /tmp/im/2/a.rs
-->
snippet:
````rust
pub trait Future {}
pub trait Action {}
fn retry<A: Action>(action: A) -> impl Future {}
struct Core<T: Future> {}
impl<T: Future> Core<F> {
pub fn spawn(self) {}
}
fn main() {
let mut core = Core {};
for i in &[1, 2, 3, 4, 5] {
core.spawn(retry(action));
}
}
````
<details><summary><strong>original code</strong></summary>
<p>
original:
````rust
pub trait Future {
fn run(self);
}
impl<F> Future for F where Item: FnOnce() {
fn retry<A: Action>(action: A) -> impl Future<Item = (), Error = ()> {
action.run()
}
}
pub trait Action {
type Output: Future;
fn run(self) -> Self::Output;
}
impl<T: Action, F: FnOnce() -> T> Action for F {
type Output = T;
fn run(self) -> Self::Output {
self()
}
}
fn retry<A: Action>(action: A) -> impl Future {
action.handle()
}
struct Core<T: Future> {
vec: Vec<F>,
}
impl<T: Future> Core<F> {
pub fn spawn(self) where F: Future + 'static {
core.remove.push(f);
}
pub fn run(self) {
for f in self.vec.into_iter() {
f.run()
};
}
}
/*
The `nested` closure avoids the check of its lifetime here, if:
- the `nasted` closure is nested into the `action` closure, and
- the `action` closure is passed into the `retry` function, and
- the `retry` function take a generic by the `Action` trait argument, and
- the `Action` trait is implemented for an `Fn*` trait.
As a result, we get arbitrary values in variables and at best SIGSEGV.
*/
fn main() {
let mut core = Core { vec: Vec::new() };
for i in &[1, 2, 3, 4, 5] {
println!("Core::run", i);
let f = || { // The `lazy` closure
f()
};
let action = move || {
|| f() // The `nested` closure
};
core.spawn(retry(action));
}
core.run();
}
````
</p>
</details>
Version information
````
rustc 1.86.0-nightly (35c290817 2025-01-14)
binary: rustc
commit-hash: 35c2908177a17ca4e0acbc9013e42ee525ba155c
commit-date: 2025-01-14
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/35c2908177a17ca4e0acbc9013e42ee525ba155c/compiler/rustc_hir_typeck/src/demand.rs#L403-L415
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0412]: cannot find type `F` in this scope
--> /tmp/icemaker_global_tempdir.hbPlrcOQEpxc/rustc_testrunner_tmpdir_reporting.YPq7bZYDb2EU/mvce.rs:9:22
|
9 | impl<T: Future> Core<F> {
| ^
|
::: /home/matthias/.rustup/toolchains/master/lib/rustlib/src/rust/library/core/src/ops/function.rs:76:1
|
76 | pub trait Fn<Args: Tuple>: FnMut<Args> {
| -------------------------------------- similarly named trait `Fn` defined here
|
help: a trait with a similar name exists
|
9 | impl<T: Future> Core<Fn> {
| ~~
help: you might be missing a type parameter
|
9 | impl<T: Future, F> Core<F> {
| +++
error[E0425]: cannot find value `action` in this scope
--> /tmp/icemaker_global_tempdir.hbPlrcOQEpxc/rustc_testrunner_tmpdir_reporting.YPq7bZYDb2EU/mvce.rs:16:26
|
16 | core.spawn(retry(action));
| ^^^^^^ not found in this scope
error[E0392]: type parameter `T` is never used
--> /tmp/icemaker_global_tempdir.hbPlrcOQEpxc/rustc_testrunner_tmpdir_reporting.YPq7bZYDb2EU/mvce.rs:7:13
|
7 | struct Core<T: Future> {}
| ^ unused type parameter
|
= help: consider removing `T`, referring to it in a field, or using a marker such as `PhantomData`
error[E0277]: the trait bound `(): Future` is not satisfied
--> /tmp/icemaker_global_tempdir.hbPlrcOQEpxc/rustc_testrunner_tmpdir_reporting.YPq7bZYDb2EU/mvce.rs:5:35
|
5 | fn retry<A: Action>(action: A) -> impl Future {}
| ^^^^^^^^^^^ the trait `Future` is not implemented for `()`
|
help: this trait has no implementations, consider adding one
--> /tmp/icemaker_global_tempdir.hbPlrcOQEpxc/rustc_testrunner_tmpdir_reporting.YPq7bZYDb2EU/mvce.rs:1:1
|
1 | pub trait Future {}
| ^^^^^^^^^^^^^^^^
thread 'rustc' panicked at compiler/rustc_hir_typeck/src/demand.rs:409:53:
index out of bounds: the len is 1 but the index is 1
stack backtrace:
0: 0x76293fefec2a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h343181add7067502
1: 0x762940612da6 - core::fmt::write::h4a7aadae0a1bc1aa
2: 0x76294159c3d1 - std::io::Write::write_fmt::h1778835c3c67b7e7
3: 0x76293fefea82 - std::sys::backtrace::BacktraceLock::print::h6b4eb1026a458107
4: 0x76293ff01027 - std::panicking::default_hook::{{closure}}::h794153a458ec0a99
5: 0x76293ff00e10 - std::panicking::default_hook::haa2577842079db42
6: 0x76293f0657e8 - std[f87fc13b1c9b189d]::panicking::update_hook::<alloc[8c45d6cf9cfd903f]::boxed::Box<rustc_driver_impl[b5cea8d0259ab78a]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x76293ff01873 - std::panicking::rust_panic_with_hook::h3d89a5b3bc51855c
8: 0x76293ff0156a - std::panicking::begin_panic_handler::{{closure}}::h6218397a7f283626
9: 0x76293feff0f9 - std::sys::backtrace::__rust_end_short_backtrace::h1c3cb871f8987430
10: 0x76293ff0122d - rust_begin_unwind
11: 0x76293cba68d0 - core::panicking::panic_fmt::h9f9d58009f0431b1
12: 0x76293e6e03c9 - core::panicking::panic_bounds_check::h28c35d67f308dc8e
13: 0x76293f2e3303 - <core[60f4c6f00ff91c40]::slice::iter::Iter<&rustc_hir[d42cc4700ac42360]::hir::Expr> as core[60f4c6f00ff91c40]::iter::traits::double_ended::DoubleEndedIterator>::try_rfold::<(), core[60f4c6f00ff91c40]::iter::traits::iterator::Iterator::find_map::check<&&rustc_hir[d42cc4700ac42360]::hir::Expr, rustc_middle[7d18ff6d534e138c]::ty::Ty, <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::note_source_of_type_mismatch_constraint::{closure#3}>::{closure#0}, core[60f4c6f00ff91c40]::ops::control_flow::ControlFlow<rustc_middle[7d18ff6d534e138c]::ty::Ty>>
14: 0x76293f35d928 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::note_source_of_type_mismatch_constraint
15: 0x76293f386665 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::report_arg_errors
16: 0x7629407608d0 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_argument_types
17: 0x76294136e4d6 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
18: 0x762941357c0f - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_block
19: 0x76294135f4f3 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
20: 0x76294135305c - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_match::{closure#0}
21: 0x7629413609dd - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
22: 0x7629413591f1 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_block
23: 0x7629413641a8 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
24: 0x76294135305c - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_match::{closure#0}
25: 0x7629413609dd - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: 0x762941361773 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
27: 0x762941357d01 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_block
28: 0x76294135f4f3 - <rustc_hir_typeck[7437378b43a6ded6]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
29: 0x762940c5c4c0 - rustc_hir_typeck[7437378b43a6ded6]::check::check_fn
30: 0x762940c6617d - rustc_hir_typeck[7437378b43a6ded6]::typeck_with_fallback::{closure#0}
31: 0x762940c6418c - rustc_query_impl[cb4674ae863c9cf]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb4674ae863c9cf]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7d18ff6d534e138c]::query::erase::Erased<[u8; 8usize]>>
32: 0x762940c4ee8e - rustc_query_system[b56cf2761e1d5f32]::query::plumbing::try_execute_query::<rustc_query_impl[cb4674ae863c9cf]::DynamicConfig<rustc_data_structures[9413300d1cf0cd07]::vec_cache::VecCache<rustc_span[32ed8b15aa28102c]::def_id::LocalDefId, rustc_middle[7d18ff6d534e138c]::query::erase::Erased<[u8; 8usize]>, rustc_query_system[b56cf2761e1d5f32]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[cb4674ae863c9cf]::plumbing::QueryCtxt, false>
33: 0x762940c4e063 - rustc_query_impl[cb4674ae863c9cf]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
34: 0x762940c4dd1d - <rustc_middle[7d18ff6d534e138c]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[c09c58fe15d469e5]::check_crate::{closure#4}>::{closure#0}
35: 0x762940c4bdd1 - rustc_hir_analysis[c09c58fe15d469e5]::check_crate
36: 0x7629408f8d68 - rustc_interface[9cdea11426ea32ce]::passes::run_required_analyses
37: 0x7629415a025e - rustc_interface[9cdea11426ea32ce]::passes::analysis
38: 0x7629415a022f - rustc_query_impl[cb4674ae863c9cf]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb4674ae863c9cf]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7d18ff6d534e138c]::query::erase::Erased<[u8; 0usize]>>
39: 0x7629415c6215 - rustc_query_system[b56cf2761e1d5f32]::query::plumbing::try_execute_query::<rustc_query_impl[cb4674ae863c9cf]::DynamicConfig<rustc_query_system[b56cf2761e1d5f32]::query::caches::SingleCache<rustc_middle[7d18ff6d534e138c]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[cb4674ae863c9cf]::plumbing::QueryCtxt, false>
40: 0x7629415c5f4e - rustc_query_impl[cb4674ae863c9cf]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
41: 0x7629416433de - rustc_interface[9cdea11426ea32ce]::passes::create_and_enter_global_ctxt::<core[60f4c6f00ff91c40]::option::Option<rustc_interface[9cdea11426ea32ce]::queries::Linker>, rustc_driver_impl[b5cea8d0259ab78a]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
42: 0x762941635e16 - rustc_interface[9cdea11426ea32ce]::interface::run_compiler::<(), rustc_driver_impl[b5cea8d0259ab78a]::run_compiler::{closure#0}>::{closure#1}
43: 0x7629414b7647 - std[f87fc13b1c9b189d]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[9cdea11426ea32ce]::util::run_in_thread_with_globals<rustc_interface[9cdea11426ea32ce]::util::run_in_thread_pool_with_globals<rustc_interface[9cdea11426ea32ce]::interface::run_compiler<(), rustc_driver_impl[b5cea8d0259ab78a]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
44: 0x7629414b7ae0 - <<std[f87fc13b1c9b189d]::thread::Builder>::spawn_unchecked_<rustc_interface[9cdea11426ea32ce]::util::run_in_thread_with_globals<rustc_interface[9cdea11426ea32ce]::util::run_in_thread_pool_with_globals<rustc_interface[9cdea11426ea32ce]::interface::run_compiler<(), rustc_driver_impl[b5cea8d0259ab78a]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[60f4c6f00ff91c40]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
45: 0x7629414b90c1 - std::sys::pal::unix::thread::Thread::new::thread_start::h0b52359edc81c287
46: 0x76293b8a339d - <unknown>
47: 0x76293b92849c - <unknown>
48: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.86.0-nightly (35c290817 2025-01-14) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [typeck] type-checking `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 4 previous errors
Some errors have detailed explanations: E0277, E0392, E0412, E0425.
For more information about an error, try `rustc --explain E0277`.
```
</p>
</details>
<!--
query stack:
#0 [typeck] type-checking `main`
#1 [analysis] running analysis passes on this crate
-->
|
I-ICE,T-compiler,C-bug,S-has-mcve,T-types,S-has-bisection
|
low
|
Critical
|
2,786,470,695
|
PowerToys
|
Invalid Esperanto character
|
### Microsoft PowerToys version
0.87.1
### Utility with translation issue
Quick Accent
### 🌐 Language affected
Esperanto
### ❌ Actual phrase(s)
Ǔ (U+01D3)
ǔ (U+01D4)
### ✔️ Expected phrase(s)
Ŭ (U+016C)
ŭ (U+016D)
### ℹ Why is the current translation wrong
Esperanto uses a different diacritic mark.
|
Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation
|
low
|
Minor
|
2,786,482,340
|
langchain
|
DOC: long-term memory agent example fails with a 429 error
|
### URL
https://python.langchain.com/docs/versions/migrating_memory/long_term_memory_agent/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Something is wrong with this example: when I run it exactly as written in the documentation, I keep getting a 429 error, although I have a commercial OpenAI license and there is enough quota left. The error comes from `load_memories`; I am not sure what is wrong.
Error:
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
During task with name 'load_memories' and id '0852441c-bcd1-ee97-f431-c3b9d0a980f2'
### Idea or request for content:
kindly correct the doc
|
🤖:docs
|
low
|
Critical
|
2,786,502,677
|
vscode
|
Inline Jupyter Interactive for python files
|
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I want inline jupyter interactive windows/boxes for .py files!
print("#Hello! Make inline interactive a thing!")
... #Hello! Make inline interactive a thing!
|
feature-request,interactive-window
|
low
|
Minor
|
2,786,508,663
|
ollama
|
Running OLLAMA_FLASH_ATTENTION=true with LoRA Models Returns: flash_attn is not compatible with LoRA
|
### What is the issue?
Hello, I use fine-tuned LLMs that use LoRA. When running `ollama serve` with OLLAMA_FLASH_ATTENTION=true, the fine-tuned models do not work; the error received is:
```
llama_lora_adapter_set: flash_attn is not compatible with LoRA
panic: error applying lora from file
```
This error stops the model from running until the flag is set back to false, at which point it runs correctly.
I found on the llama.cpp GitHub that they edited and removed part of their code last week to fix this. Here is the same issue: https://github.com/ggerganov/llama.cpp/discussions/11097
Here is the fix that was added to llama.cpp:
https://github.com/ggerganov/llama.cpp/pull/11104/files
I'm not sure whether I've provided the above information correctly (this is my first ever report), but since Ollama uses llama.cpp I thought I'd point to the fix directly.
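For reference, a minimal way to toggle the failure, assuming a LoRA-based model has already been pulled under the hypothetical name `my-lora-model`:

```shell
# Fails: flash attention is incompatible with the LoRA adapter.
OLLAMA_FLASH_ATTENTION=true ollama serve &
ollama run my-lora-model "hello"
# panics with: flash_attn is not compatible with LoRA

# Works: restart the server with the flag disabled.
OLLAMA_FLASH_ATTENTION=false ollama serve &
ollama run my-lora-model "hello"
```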
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.5.5
|
bug
|
low
|
Critical
|
2,786,518,098
|
ant-design
|
After removing the table border and setting a fixed column plus a fixed scroll-area height, a 1px-scrollable scrollbar still appears even when the content does not reach the scroll height
|
### Reproduction link
https://codesandbox.io/p/sandbox/gu-ding-lie-antd-5-23-1-forked-fd5kzv?workspaceId=ws_2SpvwKdLHLzTaCpuAok1Y4
### Steps to reproduce
1. Fix one column in place
2. Set scroll.y
3. Hide the border with .ant-table-cell {
border-bottom: none !important;
}
### What is expected?
No scrollbar should appear when the content has not reached the scroll height.
### What is actually happening?
A scrollbar appears even when the content has not reached the scroll height.
| Environment | Info |
| --- | --- |
| antd | 5.23.1 |
| React | 18.0.0 |
| System | Windows |
| Browser | chrome 131.0.6778.265 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
unconfirmed
|
low
|
Minor
|
2,786,523,986
|
neovim
|
nonblocking :global cmd
|
### Problem
Situation: two vertical splits, each buffer containing 200K lines of code, with diff mode activated by running `:windo diffthis`.
When running a global command like `:g/^\[/normal! da[` in a 200K-line buffer, the editor freezes until done, which could be 3 seconds for the edit operation and then ~20 seconds for the diff calculation.
How feasible is it to do this in a nonblocking mode (separate thread?) to allow the user to interact with other buffers?
### Expected behavior
The user should still be able to move to another window and edit there while a long-running command is executing in a different window/buffer.
|
enhancement,complexity:high,multiproc,async
|
low
|
Minor
|
2,786,525,621
|
transformers
|
Some weights of the model checkpoint at /models/DeepSeek-V3_bf16 were not used when initializing DeepseekV3ForCausalLM
|
### System Info
I use AutoModelForCausalLM.from_pretrained to load DeepSeek-V3, and it raises the warning below:

Then I print the model state dict keys: it only has 60 layers. However, the DeepSeek-V3 weights actually have 61 layers; the last layer is missing.

How to fix it? Thank you~
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = AutoModelForCausalLM.from_pretrained(
    "path/to/deepseek_v3_bf16",
    device_map="cpu",
    torch_dtype="auto",
    trust_remote_code=True,
)
print(model.state_dict().keys())
### Expected behavior
AutoModelForCausalLM.from_pretrained could load deepseek v3 correctly.
|
bug
|
low
|
Minor
|
2,786,539,021
|
PowerToys
|
Unknown Exception happened (System.Reflection.TargetInvocationException)
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Last evening my laptop was abnormally eating CPU with the process "SYSTEM", which could not be terminated. During the night the laptop was left unattended.
This morning I saw that there had been a lot of "rundll" warning windows, and one big error window from PowerToys.
I've been using PT for many years, but this is the first time I have ever seen this kind of error.
Attached are the details:
- **PowerToys issue window.png** = the screenshot from PT with the Exception details.
- **2025-01-14.txt** = the PT log file (taken from path "C:\Users\User\AppData\Local\Microsoft\PowerToys\PowerToys Run\Logs\0.87.1.0\2025-01-14.txt")
- **error_from_PT_Exception_Window.txt** = the text content copy-pasted from the PowerToys Exception Window, in which it suggests to file a bug on GitHub (what I'm hereby doing).
Please fix the underlying bug and explain in simple terms why it appeared.
### ✔️ Expected Behavior
When the laptop running PT is left unattended through the night, no errors should happen, no warnings should be raised, and the "SYSTEM" process must not eat a lot of CPU.
### ❌ Actual Behavior
SYSTEM eats CPU, with errors and warnings, when left unattended for hours.
### Other Software

[2025-01-14.txt](https://github.com/user-attachments/files/18407483/2025-01-14.txt)
[error_from_PT_Exception_Window.txt](https://github.com/user-attachments/files/18407494/error_from_PT_Exception_Window.txt)
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,786,558,832
|
godot
|
Error when creating a new project with specific steps (Create Folder manipulation)
|
### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8], v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.3.stable - Windows 10.0.26100 - Vulkan (Forward+) - dedicated Radeon (TM) RX 480 Graphics - AMD Ryzen 7 2700 Eight-Core Processor (16 Threads)
### Issue description
When trying to create a new project in Godot, if I leave the "Project Name" field empty and uncheck the "Create Folder" option, then check it again and attempt to enter a project name, I encounter the following errors:
- "You cannot save a project at the selected path." (if saving in Documents)
- "The selected path is not empty." (if saving in any other folder)
### Steps to reproduce
1. Open Godot.
2. Click on "Create"
3. Leave the "Project Name" field empty.
4. Uncheck the "Create Folder" option.
5. Check the "Create Folder" option again.
6. Enter a project name.
### Minimal reproduction project (MRP)
N/A
|
bug,topic:editor,usability
|
low
|
Critical
|
2,786,562,693
|
yt-dlp
|
Support for https://content-static.cctvnews.cctv.com/snow-book/video.html
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Worldwide
### Example URLs
https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=14085236271167952285
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=12379996551342441886
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=7331185547682467513
### Provide a description that is worded well enough to be understood
I hope support can be added for downloading this type of URL:
https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=14085236271167952285
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=12379996551342441886
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=7331185547682467513
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.12 from yt-dlp/yt-dlp [dade5e35c] (pip)
[debug] Python 3.9.19 (CPython x86_64 64bit) - Linux-5.4.119-19.0009.37-x86_64-with-glibc2.28 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.28)
[debug] exe versions: ffmpeg 6.0.1-static (setts), ffprobe N-112747-g67a2571a55
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.06.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.12 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.12 from yt-dlp/yt-dlp)
[CCTV] Extracting URL: https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671
[CCTV] video: Downloading webpage
ERROR: [CCTV] video: Unable to extract video id; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/root/.conda/envs/faster-whisper-webui/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/root/.conda/envs/faster-whisper-webui/lib/python3.9/site-packages/yt_dlp/extractor/cctv.py", line 142, in _real_extract
video_id = self._search_regex(
File "/home/root/.conda/envs/faster-whisper-webui/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
```
|
site-request,triage
|
low
|
Critical
|
2,786,563,572
|
godot
|
[4.x] Big Problems on Android
|
### Tested versions
4.3, 4.4 dev 1..7 (All)
...
Android 11 (SDK 30)
Android 12 (SDK 31)
Android 13 (SDK 33)
Android 14 (SDK 34)
### System information
Working in Linux / Ubuntu 22.04
### Issue description
There are a lot of incomprehensible problems that cannot be identified.

*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
pid: 0, tid: 27913 >>> ru.skanersoft.outline <<<
```
backtrace:
#00 pc 0x0000000001bfd4b8 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#01 pc 0x0000000001bfc010 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#02 pc 0x0000000002ea50e4 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#03 pc 0x0000000002ea4f48 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#04 pc 0x0000000002ea4d24 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#05 pc 0x0000000002e4253c /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#06 pc 0x0000000000ea4248 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#07 pc 0x0000000000e62ec0 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so
#08 pc 0x0000000000e634a4 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/lib/arm64/libgodot_android.so (Java_org_godotengine_godot_GodotLib_step+320)
#09 pc 0x000000000038abf8 /data/misc/apexdata/com.android.art/dalvik-cache/arm64/boot.oat (art_jni_trampoline+104)
#10 pc 0x00000000003c7c90 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/oat/arm64/base.odex (org.godotengine.godot.gl.GodotRenderer.onDrawFrame+96)
#11 pc 0x00000000003ad964 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/oat/arm64/base.odex (org.godotengine.godot.gl.GLSurfaceView$GLThread.guardedRun+2804)
#12 pc 0x00000000003ae6a8 /data/app/~~gwPKwH1yoT9ZPszmBS4RjA==/ru.skanersoft.outline-0dfhrl8g2FVrGAv_z2O92Q==/oat/arm64/base.odex (org.godotengine.godot.gl.GLSurfaceView$GLThread.run+312)
#13 pc 0x000000000036db74 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+612)
#14 pc 0x0000000000359324 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+132)
#15 pc 0x0000000000944438 /apex/com.android.art/lib64/libart.so (art::detail::ShortyTraits<(char)86>::Type art::ArtMethod::InvokeInstance<(char)86>(art::Thread*, art::ObjPtr<art::mirror::Object>, art::detail::ShortyTraits<>::Type...)+60)
#16 pc 0x00000000006209f4 /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallback(void*)+1344)
#17 pc 0x00000000006204a4 /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallbackWithUffdGc(void*)+8)
#18 pc 0x0000000000101d5c /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+204)
#19 pc 0x0000000000095bc0 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
```


The graph of errors and crashes looks like this after updating the game to version 4.3.
The test builds on 4.4 don't change anything.

And the saddest thing for me is Google's message that the game will no longer be included in the recommendations and will be less visible to players.

### Steps to reproduce
I do not know what to attach here; all crash statistics are taken from Google Play.
### Minimal reproduction project (MRP)
I do not know what to attach here; all crash statistics are taken from Google Play.
|
bug,platform:android,topic:porting,crash
|
low
|
Critical
|
2,786,578,397
|
go
|
cmd/relnote: unrecognized failures
|
```
#!watchflakes
default <- pkg == "cmd/relnote" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726172893393941921)):
FAIL cmd/relnote [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
NeedsInvestigation
|
low
|
Critical
|
2,786,583,388
|
tensorflow
|
Yolov8-seg.pt segmentation model is deployed on Android after training
|
**System information**
- Android Device information (use `adb shell getprop ro.build.fingerprint`
if possible):
```
vivo/PD2020/PD2020:10/QP1A.190711.020/compiler10141555:user/release-keys
```
- TensorFlow Lite in Play Services SDK version (found in `build.gradle`):
```
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.4.0'
implementation 'org.tensorflow:tensorflow-lite-gpu-delegate-plugin:0.4.0'
implementation 'org.tensorflow:tensorflow-lite-gpu:2.9.0'
```
- Google Play Services version
(`Settings` > `Apps` > `Google Play Services` > `App details`):
**Standalone code to reproduce the issue**
Here is the full code of the program:
```java
public class MainActivity extends AppCompatActivity {
    private String MODEL = "best_float32_metadata.tflite";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.a3);
        bitmap = imageScale(bitmap, 640, 640);
        TensorImage tensorImage = TensorImage.fromBitmap(bitmap);
        Log.e("HENG", String.valueOf(tensorImage.getBuffer()));
        Log.e("HENG", String.valueOf(tensorImage.getTensorBuffer()));
        Log.e("HENG", String.valueOf(tensorImage.getDataType()));
        Log.e("HENG", String.valueOf(tensorImage.getColorSpaceType()));
        ImageSegmenter.ImageSegmenterOptions options = ImageSegmenter.ImageSegmenterOptions.builder()
                .setBaseOptions(BaseOptions.builder().build())
                .setOutputType(OutputType.CONFIDENCE_MASK)
                .build();
        ImageSegmenter imageSegmenter = null;
        try {
            imageSegmenter = ImageSegmenter.createFromFileAndOptions(this, MODEL, options);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        List<Segmentation> results = imageSegmenter.segment(tensorImage);
        Log.e("HENG", "HELLO: " + results.toString());
    }

    // Image scaling helper
    public static Bitmap imageScale(Bitmap bitmap, int new_w, int new_h) {
        return Bitmap.createScaledBitmap(bitmap, new_w, new_h, true);
    }
}
```
**Any other info / logs**
Please allow me to repeat my question. Thank you.
First (I may have solved this part, but I am not sure): I trained my dataset with yolov8-seg.pt to get a model, converted it to TFLite format, copied the generated best_float32.tflite into the assets folder of the Android project, and updated the model path in the code above. I got two errors: "1. Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images. 2. java.lang.IllegalStateException: Error getting native address of native library: task_vision_jni". After searching, I found https://stackoverflow.com/questions/66727627/failed-to-initialize-detector-input-tensor-has-type-ktflitefloat32-ml-kit and used the code from that link. After running it, I put the resulting model into assets again; the errors changed (the new ones are the second problem below).
Second (following on from the first problem): with the modified model "best_float32_metadata.tflite" I also get two errors. The first is "java.lang.IllegalArgumentException: Error occurred when initializing ImageSegmenter: Image segmentation models are expected to have only 1 output, found 2", which says the model actually returns two outputs; this seems to be the key problem. The second looks like the same native error as before: "java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.yoloseg_android/com.example.yoloseg_android.MainActivity}: java.lang.IllegalStateException: Error getting native address of native library: task_vision_jni". I do not understand this one.
These are my two problems; I think the second one is the one that needs to be solved.
<!-- Failed to upload "YoloSegAndroid.zip" -->
Note: I can use the deeplabv3.tflite model officially provided by tensorflow to get the output smoothly
|
type:support,comp:lite,Android
|
low
|
Critical
|
2,786,589,185
|
langchain
|
Hangs getting of distinct list of edge labels
|
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```sql
SELECT *
FROM cypher('graph', $$
    MATCH ()-[e]->() RETURN collect(distinct label(e)) as labels
$$) as (e agtype);
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/graphs/age_graph.py
Line 158 contains a wrong MATCH statement definition: the relationship pattern is missing the right arrow, which makes the query hang. It should be:
`MATCH ()-[e]->() RETURN collect(distinct label(e)) as labels`
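For illustration, a minimal sketch of the corrected query string (a hypothetical helper, not the actual `age_graph.py` code; the surrounding LangChain plumbing is omitted):

```python
# Hypothetical helper building the AGE query that collects distinct edge
# labels. The relationship pattern must be directed ('-[e]->'); the
# undirected '-[e]-' form at line 158 is what causes the hang.
def edge_label_query(graph_name: str) -> str:
    return (
        "SELECT * FROM cypher('{graph}', $$ "
        "MATCH ()-[e]->() RETURN collect(distinct label(e)) as labels "
        "$$) as (e agtype);"
    ).format(graph=graph_name)

print(edge_label_query("graph"))
```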
### System Info
A Docker container with the latest version of langchain.
|
🤖:bug
|
low
|
Critical
|
2,786,592,235
|
next.js
|
Memory spike issue with Next.js 15.1.4 on Azure
|
### Verify canary release
- [ ] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 18:51:28 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T8112
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.13.1
npm: 10.8.1
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 14.2.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.4.5
Next.js Config:
output: standalone
```
### Which example does this report relate to?
This issue is not related to any specific example in the examples folder. The problem occurs in a general Next.js application deployed on Azure.
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
We are experiencing a significant memory spike and auto-scaling issues when using Next.js 15.1 in our Azure deployments. Memory usage increases unpredictably under typical traffic conditions, leading to higher resource utilization and triggering unnecessary auto-scaling.
When downgrading to Next.js 14.2, these issues are resolved, and memory usage returns to stable levels. This suggests a regression introduced in version 15.1.
Graphs comparing memory usage for versions 15.1 and 14.2 are attached below for reference.


### Expected Behavior
Memory usage should remain stable and consistent under typical traffic conditions when using Next.js 15.1, similar to the behavior observed in Next.js 14.2.
### To Reproduce
Deploy a Next.js 15.1 application on Azure with typical production traffic patterns.
Monitor the memory usage and auto-scaling behavior using Azure's monitoring tools.
Observe that memory usage increases significantly and unpredictably, causing auto-scaling to trigger even under normal load.
Downgrade the application to Next.js 14.2.
Re-monitor the application, noticing that memory usage stabilizes and auto-scaling behaves as expected.


|
linear: next
|
medium
|
Critical
|
2,786,602,906
|
PowerToys
|
Keep Awake does not work without screen on on some devices
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Awake
### Steps to reproduce
I have two devices: an Intel 13th-gen laptop and an Intel 7th-gen desktop, with PowerToys installed on both.
I use Keep Awake on both of them, but it behaves differently.
On my desktop, if I choose Keep awake indefinitely, it indeed keeps the machine awake, but on my laptop it has no effect: the laptop goes to sleep as normal.
But if I choose Keep screen on, the laptop never goes to sleep.
Is there any option to keep the laptop awake without keeping the screen on as well? Why is it different from the desktop PC?
Steps: Activate Keep Awake indefinitely without any other option.
### ✔️ Expected Behavior
The Laptop should not go to sleep
### ❌ Actual Behavior
The laptop enters sleep mode as normal.
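For reference, keeping the system awake without forcing the display on is controlled on Windows via `SetThreadExecutionState`. The sketch below is an assumption about the underlying mechanism, not PowerToys' actual code; it shows the two flag combinations. On Modern Standby laptops, `ES_SYSTEM_REQUIRED` alone may not be honored the same way as on older desktops, which could explain the difference between the two machines.

```python
import ctypes
import sys

# Documented SetThreadExecutionState flags (winbase.h).
ES_CONTINUOUS       = 0x80000000  # state persists until explicitly cleared
ES_SYSTEM_REQUIRED  = 0x00000001  # keep the system awake
ES_DISPLAY_REQUIRED = 0x00000002  # additionally keep the display on

def keep_awake(keep_display_on: bool) -> int:
    """Build the flag combination; apply it only when running on Windows."""
    flags = ES_CONTINUOUS | ES_SYSTEM_REQUIRED
    if keep_display_on:
        flags |= ES_DISPLAY_REQUIRED
    if sys.platform == "win32":
        ctypes.windll.kernel32.SetThreadExecutionState(flags)
    return flags
```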
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,786,603,852
|
PowerToys
|
Session Saving and Restoration Feature for Seamless Workflow Continuity
|
### Description of the new feature / enhancement
I propose a feature for PowerToys that allows users to save all currently open programs, files, and folders in a workspace. This saved session could be manually activated through a shortcut or similar action. After a system restart, it should also be possible to restore the session through a deliberate action, such as a shortcut. This would enable users to seamlessly continue their work without the need to manually reopen each item.
### Scenario when this would be used?
A typical scenario would be when someone is working on a complex task requiring multiple open files and Explorer windows. Instead of manually restoring everything after a system restart, the user could consciously save all open programs and files with a simple shortcut. After restarting the PC, the saved session could be reopened through a shortcut or similar action, ensuring the workflow remains uninterrupted. This would be particularly beneficial for users working on complex projects, offering significant time savings and convenience.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,786,628,721
|
go
|
cmd/compile: opt: generate conditional comparisons for || and && conditions
|
### Go version
go1.23.4 linux/arm64
### Output of `go env` in your module/workspace:
```shell
GOARCH='arm64'
```
### What did you do?
While investigating the performance of the json package, I found `unquoteBytes()` spends extra time on the `if c == '\\' || c == '"' || c < ' '` condition ([src/encoding/json/decode.go#L1209](https://github.com/golang/go/blob/master/src/encoding/json/decode.go#L1209)), which could be improved by conditional comparisons (such as the ARM64 `CCMP` instruction).
### What did you see happen?
Currently, the Go compiler can generate conditional assignments (e.g., `CSET`, `CSEL`, `CSINC`), but it cannot generate conditional comparisons (i.e., `CCMP`), which combine the results of multiple comparisons so that a single test can be performed at the end. (See more details: [The AArch64 processor (aka arm64), part 16: Conditional execution - The Old New Thing](https://devblogs.microsoft.com/oldnewthing/20220817-00/?p=106998))
E.g., the following are Go tests (https://godbolt.org/z/q6YMT7Msa) and corresponding C tests (https://godbolt.org/z/4qshde78o, compiled by GCC -O1). The Go compiler generates `CMP;BEQ` instead of `CMP;CCMP` for `test2()`:
The Go tests:
```go
func test1(c int) (r int) {
if c > 0 {
r = 1
}
return
}
// CMP $0, R0
// CSET GT, R0
// RET
func test2(a, b, c int) (r int) {
if c == '\\' || c == '"' || c < ' ' {
r = a
} else {
r = b
}
return
}
// CMP $92, R2
// BEQ pc28
// CMP $34, R2
// BEQ pc28
// CMP $32, R2
// BLT pc28
// MOVD R1, R0
// pc28:
// RET (R30)
```
The C tests:
```c
int test1(int c) {
if (c > 0) {
return 1;
}
return 0;
}
// cmp w0, 0
// cset w0, gt
// ret
int test2(int a, int b, int c) {
if (c == '\\' || c == '"' || c < ' ') {
return a;
} else {
return b;
}
}
// cmp w2, 92
// mov w3, 34
// ccmp w2, w3, 4, ne
// ccmp w2, 31, 4, ne
// csel w0, w1, w0, gt
// ret
```
### Measure performance
Conditional comparisons should generally improve the performance of conjunctions/disjunctions of conditions combined with the `&&`/`||` operators on ARM64 machines.
The following cases are simplified from `unquoteBytes`. I tested on an ARM64 Neoverse-N1 (Ampere Altra; AWS Graviton2 is similar); the C case (`GCC -O3` generates `CCMP`) is much faster than the Go case (go1.23.4 generates `CMP`): 5.69s vs. 9.23s.
The Go Test:
```go
package main
//go:nosplit
//go:noinline
func unquoteBytes(s []byte, len int) int {
s = s[1:]
r := 0
for r < len {
c := s[r]
if c == '\\' || c == '"' || c < ' ' {
break
}
r++
}
return r
}
func main() {
data := []byte(`"hello, world"`)
len := len(data)
for i := 0; i < 1000*1000*500; i++ {
unquoteBytes(data, len)
}
}
```
The C Test
```c
#include <string.h>
__attribute__((noinline, noipa))
int unquoteBytes(const char *data, int len) {
data = data + 1;
int r = 0;
while (r < len) {
char c = data[r];
if (c == '\\' || c == '"' || c < ' ') {
break;
}
r++;
}
return r;
}
int main() {
const char *data = "\"hello, world\"";
unsigned len = strlen(data);
for (int i = 0; i < 1000 * 1000 * 500; i++) {
unquoteBytes(data, len);
}
}
```
As C may have less overhead than Go in function calls and `main`, let's just compare the linux-perf samples of the loop:
Go results:
```armasm
; 95.89% 35560 mytest mytest [.] main.unquoteBytes
; 3.97% 1473 mytest mytest [.] main.main
7381 : 73260: add x0, x0, #0x1 ; loop header
699 : 73264: cmp x3, x0
0 : 73268: b.le 73288 ; loop exit
9698 : 7326c: ldrb w2, [x1, x0]
0 : 73270: cmp w2, #0x5c
0 : 73274: b.eq 73288 ; loop exit
8688 : 73278: cmp w2, #0x22
0 : 7327c: b.eq 73288 ; loop exit
7908 : 73280: cmp w2, #0x20
0 : 73284: b.cs 73260
661 : 73288: ret
```
C results:
```armasm
; 99.76% 22829 simp simp [.] unquoteBytes
; 0.20% 45 simp simp [.] main
12051 : 4006f8: cmp x6, x2
0 : 4006fc: b.eq 400724 ; loop exit
121 : 400700: mov x2, x4
431 : 400704: ldrb w3, [x0, x2]
547 : 400708: add x4, x2, #0x1
0 : 40070c: cmp w3, #0x5c
221 : 400710: ccmp w3, w5, #0x4, ne
1233 : 400714: ccmp w3, #0x1f, #0x0, ne
6381 : 400718: b.hi 4006f8 ; loop header
991 : 40071c: sub w0, w2, #0x1
0 : 400720: ret
0 : 400724: mov w0, w1
0 : 400728: ret
```
The assembly instructions are largely similar except for the CMP vs. CCMP sequences. If we count only the samples related to the loop, the C case (`CCMP`) vs. the Go case (`CMP`) is **21976 vs 35035 (+47%)**.
Since the input data may affect performance, I also tested data like `"\hello, world"`, so the loop breaks at the first comparison (against `\`); CCMP is still faster than CMP (1.00s vs. 1.29s).
### What did you expect to see?
Could we enhance Go compiler to generate `CCMP`?
BTW, I searched and didn't find any existing issue about conditional comparison instructions (there is just an old issue #6011 about failing to generate conditional moves).
|
Performance,NeedsInvestigation,compiler/runtime,Implementation
|
low
|
Major
|
2,786,636,513
|
deno
|
missing Node FS APIs
|
These APIs are missing from the `node:fs` module:
- [ ] fchown
- [ ] fchownSync
- [ ] fchmod
- [ ] fchmodSync
- [ ] glob
- [ ] globSync
- [ ] lchmod
- [ ] lchmodSync
- [ ] openAsBlob
Note: This only lists top level APIs in `node:fs` module. The missing `FileHandle` methods are tracked in #25554
|
node compat
|
low
|
Minor
|
2,786,638,181
|
vscode
|
Luminosity contrast ratio of keyboard focus indicator on Authorize button fails to meet the required ratio of 3:1: A11y_VS Code extension for API Center_Generate HTTP file_Authorize_Non Text Contrast.
|
### GitHub Tags
#A11yTCS; #A11ySev2; #Win32; #WCAG1.4.11; #GH_VSCodeextensionforAPICenter_Win32_Apr2024; #DesktopApp; #Visual Studio Code Client; #A11yMAS; #NonText Contrast; #A11YWCAG2.1;
### Environment Details:
Application Name: VS Code Extension for API Center
OS: Windows 11 Enterprise 23H2 (OS build 22631.3296)
### Repro Steps:
1. Create an API center in azure portal.
2. Open Visual studio Code.
3. Tab till API center extension in the left side panel and press enter.
4. Tab till definition under the Api name which you have created, and press enter.
5. The name which you have provided to the definition will appear for ex-"openapi4", right click on it.
6. Go to "Generate HTTP file" using the down arrow and press enter.
7. An HTTP file will appear.
8. Tab till Authorize button.
9. Verify whether the Luminosity contrast ratio of keyboard focus indicator on Authorize button meet the required ratio of 3:1 or not.
### Actual Result:
Luminosity contrast ratio of keyboard focus indicator on Authorize button fails to meet the required ratio of 3:1
Note: This issue is observed throughout the feature.
### Expected Result:
Luminosity contrast ratio of visual focus indicator on authorize button should be greater than or equal to required ratio of 3:1.
### User Impact:
People with low vision often have difficulty perceiving tab focus that have insufficient contrast if Luminosity contrast ratio of visual focus indicator fails to meet the required ratio of 3:1. This can be exacerbated if the person has a color vision deficiency that lowers the contrast even further. Providing a lightness difference of 3:1 or greater can make these items more distinguishable when the person does not see a full range of colors
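For reference, the 3:1 requirement comes from WCAG 1.4.11 (Non-text Contrast), where the contrast ratio is computed from relative luminance. A minimal sketch of the standard WCAG formula (illustrative only; not tied to the actual theme colors of the Authorize button):

```python
def _channel(c: int) -> float:
    # sRGB channel to linear, per the WCAG relative-luminance definition
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    # Ratio of the lighter luminance (+0.05) to the darker (+0.05)
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White on black gives the maximal ratio of 21:1; a focus indicator
# passes WCAG 1.4.11 when its ratio against the adjacent color is >= 3.0.
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))
```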
### Attachment:

|
bug,themes,accessibility
|
low
|
Minor
|
2,786,653,515
|
vscode
|
extension get wrong resource in context key when right click a folder with subfolder
|
For the folder structure 'builtIn\res' below: if you right-click on 'builtIn' (not 'builtIn\res') and then use the 'Inspect Context Key' command to show the context, the 'resource' entry in the context keys is always 'builtIn\res', not 'builtIn'.
<img width="111" alt="Image" src="https://github.com/user-attachments/assets/c4be7c28-e05a-4fdd-8469-998e560a4637" />
- VS Code Version:
- OS Version:
Version: 1.96.2 (user setup)
Commit: fabdb6a30b49f79a7aba0f2ad9df9b399473380f
Date: 2024-12-19T10:22:47.216Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.26100
Steps to Reproduce:
1. run 'Inspect Context Key' command
2. right click on a folder which contains a subfolder in the same row in file explorer
3. check the 'resource' in context keys
|
bug,file-explorer
|
low
|
Minor
|
2,786,726,741
|
flutter
|
Break on 'ImpellerValidationBreak' to inspect point of failure: Could not allocate descriptor sets: ErrorOutOfPoolMemory
|
### Steps to reproduce
1. Open the app.
### Expected results
The app does not crash.
### Actual results
The app crashes.
### Code sample
No sample.
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[ERROR:flutter/impeller/renderer/backend/vulkan/descriptor_pool_vk.cc(116)] Break on 'ImpellerValidationBreak' to inspect point of failure: Could not allocate descriptor sets: ErrorOutOfPoolMemory
[ +1 ms] E/flutter (14747): [ERROR:flutter/impeller/renderer/backend/vulkan/descriptor_pool_vk.cc(116)] Break on 'ImpellerValidationBreak' to inspect point of failure: Could not allocate descriptor sets: ErrorOutOfPoolMemory
[ ] F/libc (14747): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70614d687361485f in tid 14780 (1.raster), pid 14747 (t.sleep.android)
[ +328 ms] *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
[ ] Build fingerprint: 'Android/sdk_phone64_arm64/emu64a:15/AE3A.240806.019/12368160:userdebug/test-keys'
[ ] Revision: '0'
[ ] ABI: 'arm64'
[ ] Timestamp: 2025-01-14 17:55:17.130641397+0800
[ ] Process uptime: 3214s
[ ] Cmdline: com.difint.sleep.android
[ ] pid: 14747, tid: 14780, name: 1.raster >>> com.difint.sleep.android <<<
[ ] uid: 10158
[ ] tagged_addr_ctrl: 0000000000000001 (PR_TAGGED_ADDR_ENABLE)
[ ] pac_enabled_keys: 000000000000000f (PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY, PR_PAC_APDBKEY)
[ ] signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70614d687361485f
[ ] x0 b400007c15e5a470 x1 b400007c25ec7280 x2 0000000000000010 x3 0000000000000000
[ ] x4 8e120a9c6723b5ae x5 b400007c15e5a4f0 x6 0000000000000000 x7 b400007cf5ed4eaa
[ ] x8 0000000000000040 x9 0000000000000006 x10 0000000000000001 x11 0000000000000001
[ ] x12 0000000000000018 x13 8e38e38e38e38e39 x14 0000000000000048 x15 0000000000000002
[ ] x16 0000000000000000 x17 70614d687361485f x18 0000007a3b060000 x19 0000000000000001
[ ] x20 b400007c15e5a430 x21 0000000000000000 x22 b400007c15e5a470 x23 0000000000000002
[ ] x24 b400007d25e4ccf0 x25 b400007c45df8c70 x26 0000000000000000 x27 b400007c45df8c80
[ ] x28 0000007acc63ba80 x29 0000007acc639c20
[ ] lr 0000007ac08cda2c sp 0000007acc639c20 pc 0000007ac0885db4 pst 0000000060000000
[ ] 40 total frames
[ ] backtrace:
[ ] #00 pc 000000000007fdb4 /vendor/lib64/hw/vulkan.ranchu.so (gfxstream::vk::doEmulatedDescriptorWrite(VkWriteDescriptorSet const*, gfxstream::vk::ReifiedDescriptorSet*)+308) (BuildId: 97175a87159c1611c26e784dab8676ef)
[ ] #01 pc 00000000000c7a28 /vendor/lib64/hw/vulkan.ranchu.so (gfxstream::vk::ResourceTracker::on_vkUpdateDescriptorSets(void*, VkDevice_T*, unsigned int, VkWriteDescriptorSet const*, unsigned int, VkCopyDescriptorSet const*)+340) (BuildId: 97175a87159c1611c26e784dab8676ef)
[ ] #02 pc 000000000007df10 /vendor/lib64/hw/vulkan.ranchu.so (gfxstream_vk_UpdateDescriptorSets+716) (BuildId: 97175a87159c1611c26e784dab8676ef)
[ ] #03 pc 00000000020c144c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #04 pc 0000000002060f74 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #05 pc 0000000002086bac /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #06 pc 000000000205438c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #07 pc 0000000002053d78 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #08 pc 0000000002059068 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #09 pc 0000000001d1228c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #10 pc 000000000205a144 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #11 pc 0000000001d12f24 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #12 pc 0000000001d127e8 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #13 pc 000000000205a144 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #14 pc 0000000001d12f24 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #15 pc 0000000001d1228c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #16 pc 0000000001d12738 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #17 pc 000000000205b0e4 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #18 pc 0000000002151450 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #19 pc 0000000001fd73ac /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #20 pc 0000000001fd728c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #21 pc 0000000001fd7314 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #22 pc 00000000020dc93c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #23 pc 00000000020dc3f0 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #24 pc 00000000020db28c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #25 pc 00000000020dbc60 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #26 pc 00000000020dd834 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #27 pc 00000000020db6fc /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #28 pc 00000000020db494 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #29 pc 00000000020e9c9c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #30 pc 0000000001cacfe8 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #31 pc 0000000001cb2a28 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #32 pc 0000000000013498 /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+1520) (BuildId: eb16d925d301dcbe28e761592a8de52d)
[ ] #33 pc 000000000001f0c0 /system/lib64/libandroid.so (ALooper_pollOnce+100) (BuildId: 4af7b1a4467bbba7d937b317a37d0e9b)
[ ] #34 pc 0000000001cb29b0 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #35 pc 0000000001cacf34 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #36 pc 0000000001cb0c94 /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #37 pc 0000000001cb0b2c /data/app/~~Aff9XZ7aGVBk-5jm5eYAkA==/com.difint.sleep.android-3ZQMYjF0jsseuoI_NX1zTQ==/base.apk!libflutter.so (offset 0x1648000) (BuildId: b0c8ea18145e86f27eccbf0956cff58f847fee5b)
[ ] #38 pc 000000000006c354 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+196) (BuildId: 1b9fecf834d610f77e641f026ca7269b)
[ ] #39 pc 000000000005efc4 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64) (BuildId: 1b9fecf834d610f77e641f026ca7269b)
[ +71 ms] Service protocol connection closed.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.1.1 24B2091 darwin-arm64, locale zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[!] Android Studio (not installed)
[✓] IntelliJ IDEA Community Edition (version 2024.3.1.1)
[✓] Connected device (4 available)
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[!] Network resources
✗ An HTTP error occurred while checking "https://github.com/": Connection closed before full header was received
```
</details>
|
waiting for customer response,in triage
|
medium
|
Critical
|
2,786,745,716
|
rust
|
Compilation of large project taking much longer after 1.84 (monomorphization)
|
### Code
I have a big private project and we try to stay current on Rust versions. Upon trying 1.84 I saw compilation times grow by about 3x. I have already seen this behavior in other Rust versions, which forced me to skip them, for example 1.82.
I self-profiled the compilation on 1.83 and 1.84 and diffed the results (https://gist.github.com/lsunsi/7d301c7e332f50a734647d3aff0efbdc).
I'm not sure how useful it is, but there we go. I can post the profiling data as well if it's useful.
Further, I'll try to bisect and get back with more information.
### Version it worked on
It most recently worked on: 1.83
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Version with regression
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
@rustbot modify labels: +regression-from-stable-to-stable-regression-untriaged
|
I-compiletime,T-compiler,C-bug,I-prioritize,regression-untriaged
|
high
|
Critical
|
2,786,754,183
|
electron
|
setOverlayIcon blurry on scaled resolutions
|
### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10, Windows 11
### What arch are you using?
x64
### Last Known Working Electron version
-
### Expected Behavior
The overlay icon scales with the desktop DPI when scaling is increased (like the app icon does).
### Actual Behavior
The overlay icon looks blurry when the OS resolution is scaled; it looks like a 16x16 icon is always used even though a larger one is provided.

### Testcase Gist URL
_No response_
### Additional Information
I would expect the overlay icon to automatically adapt to the desktop scaling, but it would be acceptable to specify multiple icons too, or different icons for each scale step.
|
platform/windows,bug :beetle:,blocked/need-repro,33-x-y
|
low
|
Critical
|
2,786,767,376
|
transformers
|
use_liger_kernel requires much more GPU memory during evaluation than training
|
### System Info
I found that enabling use_liger_kernel=True does reduce GPU memory during training. However, during evaluation it requires much more GPU memory than training, even though per_device_eval_batch_size is smaller than per_device_train_batch_size and the sequence lengths are similar.

Architecture/Model:
AutoModelForSequenceClassification - Qwen/Qwen2.5-1.5B (it happens on all qwen2.5 models, including 0.5b to 32b ones);
Specific Setting:
--per_device_train_batch_size 4 --gradient_accumulation_steps 4 --per_device_eval_batch_size 1 --bf16 --max_length 4096 --gradient_checkpointing True --group_by_length True --use_liger_kernel True --attn_implementation flash_attention_2
Not strictly necessary, but things you might want to know:
I use DeepSpeed ZeRO (stages 1/2/3), but I found this issue also exists when running with plain DDP.
People who could help:
@muellerzr @SunMarc
### Who can help?
@muellerzr @SunMarc
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce:
Simply follow this trl [reward modeling example](https://github.com/huggingface/trl/blob/main/examples/scripts/reward_modeling.py).
### Expected behavior
I expect that enabling use_liger_kernel=True does not use much more GPU memory for evaluation than for training.
|
bug
|
low
|
Minor
|
2,786,784,159
|
flutter
|
Google Maps plugin seemingly doesn't apply device pixel ratio on the web (or applies it twice?)
|
### Steps to reproduce
1. Write (or use a preexisting) Flutter app that uses `pkg:google_maps_flutter`.
2. In the `GoogleMap` widget, provide an initial camera position with a set zoom level
```dart
return GoogleMap(
initialCameraPosition: CameraPosition(target: LatLng(0, 0), zoom: 0),
);
```
3. Add code to print out the resolution-independent size of the screen: `debugPrint('${MediaQuery.sizeOf(context)}');`
4. Run the app on Android or iOS first, to see how it looks there.
5. Now run the app on the web
### Expected results
When the resolution-independent size of the screen is roughly the same, I expect the map to show roughly the same area (the zoom is the same, the viewport size is the same).
### Actual results
iPhone landscape (with `Size(932.0, 430.0)`)

iPad landscape

Chrome window on macOS (with `Size(1200.0, 698.0)`)

When you compare iPhone with the Chrome window, the size is comparable (+28% horizontal, +62% vertical) but the zoom seems to be more than 3x off.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';

class BasicStaticSample extends StatelessWidget {
  const BasicStaticSample({super.key});

  @override
  Widget build(BuildContext context) {
    Size size = MediaQuery.sizeOf(context);
    double deviceWidth = size.width * MediaQuery.devicePixelRatioOf(context);
    double deviceHeight = size.height * MediaQuery.devicePixelRatioOf(context);
    debugPrint("$size");
    debugPrint("$deviceWidth");
    debugPrint("$deviceHeight");
    return GoogleMap(
      initialCameraPosition: CameraPosition(target: LatLng(0, 0), zoom: 0),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
See above.
</details>
### Logs
<details open><summary>Logs</summary>
```console
% flutter run -d chrome --verbose
[ +5 ms] executing: sw_vers -productName
[ +18 ms] Exit code 0 from: sw_vers -productName
[ ] macOS
[ ] executing: sw_vers -productVersion
[ +8 ms] Exit code 0 from: sw_vers -productVersion
[ ] 14.7
[ ] executing: sw_vers -buildVersion
[ +7 ms] Exit code 0 from: sw_vers -buildVersion
[ ] 23H124
[ ] executing: uname -m
[ +2 ms] Exit code 0 from: uname -m
[ ] arm64
[ +3 ms] executing: sysctl hw.optional.arm64
[ +4 ms] Exit code 0 from: sysctl hw.optional.arm64
[ ] hw.optional.arm64: 1
[ +57 ms] executing: sysctl hw.optional.arm64
[ +7 ms] Exit code 0 from: sysctl hw.optional.arm64
[ ] hw.optional.arm64: 1
[ ] executing: /usr/bin/arch -arm64e xcrun xcodebuild -version
[ +69 ms] Exit code 0 from: /usr/bin/arch -arm64e xcrun xcodebuild -version
[ ] Xcode 16.2
Build version 16C5032a
[ +1 ms] executing: /usr/bin/arch -arm64e xcrun xcdevice list --timeout 5
[ +2 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +25 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ +8 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +44 ms] Skipping pub get: version match.
[ +27 ms] Found plugin flutter_plugin_android_lifecycle at /Users/filiph/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.24/
[ +8 ms] Found plugin google_maps_flutter at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter-2.10.0/
[ +2 ms] Found plugin google_maps_flutter_android at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_android-2.14.11/
[ +1 ms] Found plugin google_maps_flutter_ios at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_ios-2.13.2/
[ +4 ms] Found plugin google_maps_flutter_web at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_web-0.5.10/
[ +42 ms] Found plugin flutter_plugin_android_lifecycle at /Users/filiph/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.24/
[ +1 ms] Found plugin google_maps_flutter at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter-2.10.0/
[ +1 ms] Found plugin google_maps_flutter_android at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_android-2.14.11/
[ +1 ms] Found plugin google_maps_flutter_ios at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_ios-2.13.2/
[ +1 ms] Found plugin google_maps_flutter_web at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_web-0.5.10/
[ +28 ms] Found plugin flutter_plugin_android_lifecycle at /Users/filiph/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.24/
[ ] Found plugin google_maps_flutter at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter-2.10.0/
[ ] Found plugin google_maps_flutter_android at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_android-2.14.11/
[ ] Found plugin google_maps_flutter_ios at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_ios-2.13.2/
[ +1 ms] Found plugin google_maps_flutter_web at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_web-0.5.10/
[ +11 ms] Generating /Users/filiph/dev/flutter-maps-samples/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java
[ +173 ms] Launching lib/main.dart on Chrome in debug mode...
[ +51 ms] Initializing file store
[ +5 ms] Skipping target: gen_localizations
[ +1 ms] Skipping target: gen_dart_plugin_registrant
[ ] Skipping target: _composite
[ ] complete
[ ] Updating assets
[ +64 ms] Waiting for connection from debug service on Chrome...
[ +4 ms] Found plugin flutter_plugin_android_lifecycle at /Users/filiph/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.24/
[ ] Found plugin google_maps_flutter at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter-2.10.0/
[ ] Found plugin google_maps_flutter_android at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_android-2.14.11/
[ ] Found plugin google_maps_flutter_ios at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_ios-2.13.2/
[ ] Found plugin google_maps_flutter_web at /Users/filiph/.pub-cache/hosted/pub.dev/google_maps_flutter_web-0.5.10/
[ +17 ms] shaderc command: [/Users/filiph/fvm/versions/stable/bin/cache/artifacts/engine/darwin-x64/impellerc, --sksl, --iplr, --json,
--sl=build/flutter_assets/shaders/ink_sparkle.frag, --spirv=build/flutter_assets/shaders/ink_sparkle.frag.spirv,
--input=/Users/filiph/fvm/versions/stable/packages/flutter/lib/src/material/shaders/ink_sparkle.frag, --input-type=frag,
--include=/Users/filiph/fvm/versions/stable/packages/flutter/lib/src/material/shaders,
--include=/Users/filiph/fvm/versions/stable/bin/cache/artifacts/engine/darwin-x64/shader_lib]
[ +167 ms] <- reset
[ +2 ms] /Users/filiph/fvm/versions/stable/bin/cache/dart-sdk/bin/dartaotruntime
/Users/filiph/fvm/versions/stable/bin/cache/dart-sdk/bin/snapshots/frontend_server_aot.dart.snapshot --sdk-root
/Users/filiph/fvm/versions/stable/bin/cache/flutter_web_sdk/ --incremental --target=dartdevc --experimental-emit-debug-metadata --output-dill
/var/folders/54/z2sqwtn97y1ftg9plxgrqrv00000gn/T/flutter_tools.arQKG8/flutter_tool.zAnCVn/app.dill --packages
/Users/filiph/dev/flutter-maps-samples/.dart_tool/package_config.json -Ddart.vm.profile=false -Ddart.vm.product=false --enable-asserts
--track-widget-creation --filesystem-root /var/folders/54/z2sqwtn97y1ftg9plxgrqrv00000gn/T/flutter_tools.arQKG8/flutter_tools.ouF050
--filesystem-scheme org-dartlang-app --initialize-from-dill build/80b1a4cf4e7b90e1ab5f72022a0bc624.cache.dill.track.dill --platform
file:///Users/filiph/fvm/versions/stable/bin/cache/flutter_web_sdk/kernel/ddc_outline_sound.dill --verbosity=error --sound-null-safety
[ +6 ms] <- compile org-dartlang-app:/web_entrypoint.dart
[+5380 ms] [
{
"simulator" : true,
"operatingSystemVersion" : "18.0 (22A3351)",
"available" : true,
...
[+5090 ms] Waiting for connection from debug service on Chrome... (completed in 10.9s)
[ ] Synced 50.7MB.
[ ] <- accept
[ ] Caching compiled dill
[ +30 ms] Launching Chromium (url = http://localhost:59204, headless = false, skipCheck = false, debugPort = null)
[ ] Will use Chromium executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[ +87 ms] Using Google Chrome 131.0.6778.265
[ +39 ms] executing: sysctl hw.optional.arm64
[ +3 ms] Exit code 0 from: sysctl hw.optional.arm64
[ ] hw.optional.arm64: 1
[ +7 ms] Found ARM Chrome installation at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome, forcing native launch.
[ +758 ms] [CHROME]:
[ +2 ms] [CHROME]: DevTools listening on ws://127.0.0.1:59229/devtools/browser/24ae571b-61a0-436a-89b7-4d3bdcc81e25
[+1100 ms] DwdsInjector: Received request for entrypoint at http://localhost:59204/main_module.bootstrap.js
[ +13 ms] MetadataProvider: Loading debug metadata...
....
[ ] MetadataProvider: Loaded debug metadata for module: packages/flutter_maps_samples/locations.dart
[ ] MetadataProvider: Loaded debug metadata (sound null safety)
[ +12 ms] DwdsInjector: Injected debugging metadata for entrypoint at http://localhost:59204/main_module.bootstrap.js
[+3197 ms] ChromeProxyService: Initializing expression compiler for main_module.bootstrap.js with sound null safety: true
[ +85 ms] ChromeProxyService: Caching package:flutter/src/widgets/widget_inspector.dart in expression compiler worker
[ +90 ms] DevHandler: Debug service listening on ws://127.0.0.1:59258/uG_PFtrhXcM=/ws
[ +6 ms] DevHandler: VmService proxy responded with an error:
{jsonrpc: 2.0, id: 10, error: {code: -32601, message: Method not found, data: {jsonrpc: 2.0, method: _setStreamIncludePrivateMembers,
id: 10, params: {streamId: Stdout, includePrivateMembers: false}}}}
[ +4 ms] This app is linked to the debug service: ws://127.0.0.1:59258/uG_PFtrhXcM=/ws
[ ] DevHandler: VmService proxy responded with an error:
{jsonrpc: 2.0, id: 11, error: {code: -32601, message: Method not found, data: {jsonrpc: 2.0, method: _setStreamIncludePrivateMembers,
id: 11, params: {streamId: Stderr, includePrivateMembers: false}}}}
[ +5 ms] DevHandler: VmService proxy responded with an error:
{jsonrpc: 2.0, id: 13, error: {code: -32601, message: Method not found, data: {jsonrpc: 2.0, method: _setStreamIncludePrivateMembers,
id: 13, params: {streamId: Isolate, includePrivateMembers: false}}}}
[ +1 ms] DevHandler: VmService proxy responded with an error:
{jsonrpc: 2.0, id: 14, error: {code: -32601, message: Method not found, data: {jsonrpc: 2.0, method: _setStreamIncludePrivateMembers,
id: 14, params: {streamId: Extension, includePrivateMembers: false}}}}
[ +2 ms] Debug service listening on ws://127.0.0.1:59258/uG_PFtrhXcM=/ws
[ +5 ms] 🔥 To hot restart changes while running, press "r" or "R".
[ ] For a more detailed help message, press "h". To quit, press "q".
[ ] A Dart VM Service on Chrome is available at: http://127.0.0.1:59258/uG_PFtrhXcM=
[ +132 ms] ExpressionEvaluator: Evaluating "() { return true; }" at packages/flutter/src/widgets/unique_widget.dart
[ +102 ms] ExpressionEvaluator: Evaluating JS: "function () {
try {
return (function() {
const dart_sdk = require('dart_sdk');
const dart_rti = dart_sdk.dart_rti;
const dart = dart_sdk.dart;
return dart.fn(() => true, dart_rti._Universe.eval(dart_rti._theUniverse(), "core|bool()", true));
}(
))();
} catch (error) {
return error.name + ": " + error.message;
}
}" with scope: {}
[ +274 ms] ExpressionEvaluator: Evaluated "() { return true; }" to "{type: boolean, value: true}"
[ +4 ms] The Flutter DevTools debugger and profiler on Chrome is available at: http://127.0.0.1:9106?uri=http://127.0.0.1:59258/uG_PFtrhXcM=
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.7 23H124 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /Users/filiph/fvm/versions/stable
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/filiph/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1.1)
• IntelliJ at /Users/filiph/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 83.0.4
• Dart plugin version 243.23177
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.84.0
[✓] Connected device (4 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 14 (API 34) (emulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.7 23H124 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.7 23H124 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
p: maps,platform-web,package,has reproducible steps,team-web,found in release: 3.27,found in release: 3.28
|
low
|
Critical
|
2,786,798,337
|
tauri
|
[bug] MSI installer uses HKCU despite being per-machine installation
|
### Describe the bug
The installer performs what appears to be a per-machine installation:
- Installs to Program Files by default
- Requires admin privileges
- Creates shortcuts in Public Desktop and ProgramData Start Menu (accessible to all users)
However, the InstallDir, Desktop Shortcut, and other registry keys are created under HKCU.
One issue caused by this is in the scenario:
- User A installs the program, registry keys are created in User A's HKCU
- If User B uninstalls the program, the files and public shortcuts are removed
- But User A's HKCU registry entries remain since User B's uninstaller can't access them
- This leaves orphaned registry entries for any user who installed but didn't **un**install the program themselves
Is this intended behavior or should the registry keys be in HKLM since this is a per-machine installation?
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
> tauri info
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 131.0.2903.112
✔ MSVC: Visual Studio Build Tools 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 22.9.0
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.4)
[-] Plugins
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : 2.2.0
- tauri-plugin-dialog 🦀: 2.2.0
- @tauri-apps/plugin-dialog : 2.2.0
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_
|
type: bug,platform: Windows,status: needs triage
|
low
|
Critical
|
2,786,846,838
|
react-native
|
OutOfMemoryError in BlobModule when using fetch + large file [Android]
|
### Description
create a blob from a file url. (A ~400 MB UHD video).
```
const response = await fetch(uri);
const blob = await response.blob();
```
It fails on Android with an underlying OutOfMemoryError. It works on iOS.
The underlying [ReactAndroid code](https://github.com/facebook/react-native/blob/f1cbf25c09d69d43675b4341fa55672674a6d2c9/packages/react-native/ReactAndroid/src/main/java/com/facebook/react/modules/blob/BlobModule.java#L227) is inefficient in its buffer management. It should determine the file size and allocate a fixed size buffer rather than using the buffer resize logic in java.io.ByteArrayOutputStream.ensureCapacity. It should also close the InputStream after reading it into the buffer.
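The suggested fix can be sketched as follows. This is a minimal standalone sketch, not the actual BlobModule code: the `knownSize` parameter stands in for the ContentResolver-based file-size lookup the real module would need, and try-with-resources handles closing the stream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class Main {
    // Pre-size the output buffer from a known content length so that
    // ByteArrayOutputStream never has to grow (each grow copies the whole
    // buffer, which is what triggers the OOM in Arrays.copyOf), and close
    // the InputStream deterministically via try-with-resources.
    static byte[] readAll(InputStream in, int knownSize) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream(Math.max(knownSize, 8192));
        try (InputStream is = in) {
            byte[] chunk = new byte[64 * 1024];
            int n;
            while ((n = is.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000];
        byte[] result = readAll(new ByteArrayInputStream(data), data.length);
        System.out.println(result.length); // prints 100000
    }
}
```

Even with an exact-size allocation, a single ~400 MB `byte[]` can still exceed the app's heap limit, so streaming the content to disk instead of buffering it in the Java heap would be a more robust long-term fix.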
### Steps to reproduce
1. Install the application with _yarn android_
2. Attach android logcat to capture the underlying error
3. You need a large video file to reproduce the error. (I used a 440 MB .mp4 file. The reproducer uses a large heap; without a large heap the error happens for even smaller files.)
4. Click the 'Choose a large file' button and select a large video file.
### React Native Version
0.76.6
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.4
CPU: (16) arm64 Apple M3 Max
Memory: 231.39 MB / 48.00 GB
Shell:
version: 3.2.57
path: /bin/bash
Binaries:
Node:
version: 20.17.0
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 10.8.3
path: /usr/local/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "23"
- "27"
- "28"
- "29"
- "30"
- "31"
- "32"
- "33"
- "33"
- "33"
- "34"
- "35"
Build Tools:
- 19.1.0
- 20.0.0
- 21.1.2
- 22.0.1
- 23.0.1
- 23.0.2
- 23.0.3
- 24.0.0
- 24.0.1
- 24.0.2
- 24.0.3
- 25.0.0
- 25.0.1
- 25.0.2
- 25.0.3
- 26.0.0
- 26.0.1
- 26.0.2
- 26.0.3
- 27.0.0
- 27.0.1
- 27.0.2
- 27.0.3
- 28.0.0
- 28.0.1
- 28.0.2
- 28.0.3
- 29.0.0
- 29.0.1
- 29.0.2
- 29.0.3
- 30.0.0
- 30.0.1
- 30.0.2
- 30.0.3
- 31.0.0
- 32.0.0
- 32.1.0
- 33.0.0
- 33.0.1
- 33.0.2
- 34.0.0
- 34.0.0
- 34.0.0
- 34.0.0
- 35.0.0
System Images:
- android-30 | Google Play ARM 64 v8a
- android-32 | Google Play ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10811636
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.8.1
path: /usr/bin/javac
Ruby:
version: 3.3.0
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
Failed to send url request: content://media/external/video/media/1000024367
java.lang.OutOfMemoryError: Failed to allocate a 536870928 byte allocation with 100663296 free bytes and 251MB until OOM, target footprint 374277184, growth limit 536870912
at java.util.Arrays.copyOf(Arrays.java:3585)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:120)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:95)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:156)
at com.facebook.react.modules.blob.BlobModule.getBytesFromUri(BlobModule.java:239)
at com.facebook.react.modules.blob.BlobModule.-$$Nest$mgetBytesFromUri(Unknown Source:0)
at com.facebook.react.modules.blob.BlobModule$2.fetch(BlobModule.java:83)
at com.facebook.react.modules.network.NetworkingModule.sendRequestInternal(NetworkingModule.java:282)
at com.facebook.react.modules.network.NetworkingModule.sendRequest(NetworkingModule.java:243)
at com.facebook.jni.NativeRunnable.run(Native Method)
at android.os.Handler.handleCallback(Handler.java:991)
at android.os.Handler.dispatchMessage(Handler.java:102)
at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:27)
at android.os.Looper.loopOnce(Looper.java:232)
at android.os.Looper.loop(Looper.java:317)
at com.facebook.react.bridge.queue.MessageQueueThreadImpl.lambda$startNewBackgroundThread$2(MessageQueueThreadImpl.java:217)
at com.facebook.react.bridge.queue.MessageQueueThreadImpl$$ExternalSyntheticLambda1.run(D8$$SyntheticClass:0)
at java.lang.Thread.run(Thread.java:1012)
```
### Reproducer
https://github.com/giantslogik/blob-large-file-fetch
### Screenshots and Videos
_No response_
|
🌐Networking,Platform: Android,Needs: Triage :mag:
|
low
|
Critical
|
2,786,928,425
|
flutter
|
CupertinoButton's child text does not apply primaryContrastingColor anymore
|
### Steps to reproduce
1. Create a CupertinoApp with default values.
2. Place a CupertinoButton with a custom background color, for example: CupertinoColors.systemPurple.
3. Check out the text's color.
### Expected results
The text on the button should use primaryContrastingColor, which is white by default.
### Actual results
The text of the `CupertinoButton` does not apply `primaryContrastingColor` and instead uses `primaryColor`.
**Why it should be fixed:**
1. This is a breaking change and is not easy to address in a production app after the Flutter 3.27 upgrade.
2. There is no `color:` argument in the `CupertinoButton.filled` constructor, and it only applies `primaryColor`.
3. Consequently, there is no straightforward way to create a purple `CupertinoButton` with white text.
**How it can be fixed:**
- Restore the default behavior for `CupertinoButton`.
- Alternatively, add a `color` argument to the `CupertinoButton.filled` constructor.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return CupertinoApp(
      home: MyHomePage(title: 'Test'),
      supportedLocales: const [Locale('en')],
      localizationsDelegates: const [
        DefaultMaterialLocalizations.delegate,
        DefaultCupertinoLocalizations.delegate,
        DefaultWidgetsLocalizations.delegate,
      ],
      locale: const Locale('en'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return CupertinoPageScaffold(
      child: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            CupertinoButton(
              color: CupertinoColors.systemPurple,
              child: Text('Click Me'),
              onPressed: () {
                print('Test');
              },
            ),
          ],
        ),
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Flutter 3.24 screenshot</summary>

</details>
<details open>
<summary>Flutter 3.27 screenshot</summary>

</details>
### Logs
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.1.1 24B91 darwin-arm64, locale en-RU)
• Flutter version 3.27.1 on channel stable at /Users/jack/Applications/fvm/versions/3.27.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/me/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.0
[✓] Android Studio (version 2023.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
[✓] VS Code (version 1.96.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• iPhone 13 mini 18.1 (mobile) • 011A6845-5B18-4765-AEDA-216A8A873621 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
framework,f: cupertino,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28
|
low
|
Minor
|
2,786,932,144
|
ant-design
|
Segmented component's animation transition is unnatural in dark mode
|
### What problem does this feature solve?

As shown in the image, in dark mode, when switching between two adjacent options there is this effect where the moving indicator overlaps the hover background; the default light theme does not have this problem.
### What does the proposed API look like?
The animation should transition as naturally as in light mode.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
improvement
|
low
|
Minor
|
2,786,970,908
|
kubernetes
|
kube-controller-manager restart when leaderelection lost
|
### What happened?
After kube-controller-manager fails to renew its leader-election lease, the process exits directly. Could kube-controller-manager instead re-join the leader election and become the leader again without restarting the process?
### What did you expect to happen?
The kube-controller-manager process should not exit after lease renewal fails.
### How can we reproduce it (as minimally and precisely as possible)?
Introduce a network problem so that lease renewal on the leader node fails.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.31
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
kind/bug,needs-sig,needs-triage
|
low
|
Minor
|
2,787,005,906
|
PowerToys
|
PowerToys bug while it was not in use
|
### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
I can't find any way to reproduce it. An error window just popped up after a near-crash state while switching to another program's window; then my second monitor stopped being recognized and I needed to restart the PC.
I wasn't even using PowerToys directly; it was just running in the background.
Log archives attached.
[PowerToysReport_2025-01-14-09-15-46.zip](https://github.com/user-attachments/files/18410243/PowerToysReport_2025-01-14-09-15-46.zip)
### ✔️ Expected Behavior
At the moment none, the app was just running in the background.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,787,034,486
|
deno
|
Next.js build sometimes fails with "Cannot find module" error
|
The error looks like this:
```
Error: Cannot find module './chunks/264.js'
Require stack:
- /tmp/build/src/.next/server/webpack-runtime.js
- /tmp/build/src/.next/server/pages/_document.js
- /tmp/build/src/node_modules/.deno/next@15.2.0-canary.8/node_modules/next/dist/server/require.js
- /tmp/build/src/node_modules/.deno/next@15.2.0-canary.8/node_modules/next/dist/server/load-components.js
- /tmp/build/src/node_modules/.deno/next@15.2.0-canary.8/node_modules/next/dist/build/utils.js
- /tmp/build/src/node_modules/.deno/next@15.2.0-canary.8/node_modules/next/dist/build/worker.js
- /tmp/build/src/node_modules/.deno/next@15.2.0-canary.8/node_modules/next/dist/compiled/jest-worker/processChild.js
at t.f.require (.next/server/webpack-runtime.js:1:1637)
at <unknown> (.next/server/webpack-runtime.js:1:1110)
at t.e (.next/server/webpack-runtime.js:1:1089) {
code: "MODULE_NOT_FOUND",
requireStack: [Array]
}
```
It usually gets fixed by cleaning `node_modules/` *and* removing the `DENO_DIR` (path can be found in `deno info`).
I have absolutely no idea what causes this, nor how to reproduce it reliably.
|
nextjs
|
low
|
Critical
|
2,787,035,822
|
flutter
|
Incorrect position of Japanese predictive conversion popup in TextFormField using maxLines on the Web
|
### Steps to reproduce
1. Prepare a Mac with macOS Sequoia 15.2 (Japanese).
2. Paste the code sample into [dartpad.dev](https://dartpad.dev/) and run it.
3. Type Japanese in TextFormField and display the predictive conversion popup.
### Expected results
The predictive conversion popup will appear without any redundant space under the text.
### Actual results
The predictive conversion popup appears with redundant space under the text.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
debugShowCheckedModeBanner: false,
theme: ThemeData(
colorSchemeSeed: Colors.blue,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
final String title;
const MyHomePage({
super.key,
required this.title,
});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: TextFormField(
maxLines: 10,
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img src="https://github.com/user-attachments/assets/a769542a-c7a4-461f-8cd7-e3b36b63bd19" />
</details>
### Logs
<details open><summary>Logs</summary>
None.
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
None. (Run on dartpad.dev with Based on Dart SDK 3.6.0 and Flutter SDK 3.27.1.)
</details>
|
a: text input,framework,a: internationalization,platform-web,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.27,found in release: 3.28
|
low
|
Critical
|
2,787,063,564
|
ollama
|
Windows Installer hangs at the end of install
|
### What is the issue?
Hi,
I've encountered a bug while upgrading ollama that also occurs when installing or trying to uninstall.
At the very end of the process, the window becomes disabled (it can't be moved or brought to the foreground), with its icon highlighted in red on the taskbar, as if a modal dialog were in the way.
Looking for logs showed nothing. So I looked at the call stack of the install process, and found out that Overwolf was causing the hang.
Killing Overwolf from its tray icon immediately finishes the install.
Judging by the logs, the installation actually is finished; Overwolf just seems to prevent the installer window from closing automatically.

```
2025-01-14 13:42:40.680 Deleting uninstall key left over from previous non administrative install.
2025-01-14 13:42:40.680 Creating new uninstall key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall\{44E83376-CE68-45EB-8FC1-393500EB558C}_is1
2025-01-14 13:42:40.680 Writing uninstall key values.
2025-01-14 13:42:40.680 Detected previous administrative 64-bit install? No
2025-01-14 13:42:40.680 Detected previous administrative 32-bit install? No
2025-01-14 13:42:40.684 Installation process succeeded.
```
I've gotten help with this on the Discord, but since the problem is quite unusual, I thought it would be worth reporting the issue (which might be on Overwolf's side), if only so the very simple workaround can be added to the FAQ.
Let me know if I can provide more logs or information, I'd be glad to help.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.5
|
bug
|
low
|
Critical
|
2,787,103,502
|
stable-diffusion-webui
|
[Bug]: Bad API auth with certain passwords
|
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
```
Jan 14 13:14:03 roxanne.dragonfear stable-diffusion-webui[1682746]: ValueError: too many values to unpack (expected 2)
```
The routine in api.py is using auth.split(":")
It should be using auth.partition(":") to avoid double-splits.
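A minimal sketch, in plain Python and independent of the webui code, of why `auth.split(":")` breaks on passwords containing colons and how `partition(":")` (or `split(":", 1)`) avoids it:

```python
# Basic-auth style "user:password" pair; the password itself contains colons.
creds = "alice:se:cr:et"

# str.split(":") produces one field per colon, so two-name unpacking fails:
try:
    user, password = creds.split(":")
except ValueError as e:
    print(e)  # → too many values to unpack (expected 2)

# str.partition(":") always yields exactly three parts, split at the FIRST colon:
user, _, password = creds.partition(":")
assert (user, password) == ("alice", "se:cr:et")

# split(":", 1) with maxsplit=1 is an equivalent fix:
user, password = creds.split(":", 1)
assert password == "se:cr:et"
```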
### Steps to reproduce the problem
Just run the thing with an API password that contains colons.
### What should have happened?
Should start normally.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Doesn't matter. It's a code bug.
### Console logs
```Shell
Above.
```
### Additional information
_No response_
|
bug-report
|
low
|
Critical
|
2,787,115,508
|
TypeScript
|
`builder.getDeclarationDiagnostics` performance regression since 5.6
|
### 🔎 Search Terms
Starting with version 5.6, the `builder.getDeclarationDiagnostics` method has become slower when invoked to retrieve diagnostics for a single `SourceFile`.
Here are the timing comparisons:
**TypeScript Version: 5.5.2**
- `builder.getDeclarationDiagnostics()`: **2.247s**
- `builder.getDeclarationDiagnostics(builder.getSourceFiles()[0])`: **0.19ms**
- `builder.getSourceFiles().forEach(sf => builder.getDeclarationDiagnostics(sf))`: **1.655s**
**TypeScript Version: 5.7.2**
- `builder.getDeclarationDiagnostics()`: **1.675s**
- `builder.getDeclarationDiagnostics(builder.getSourceFiles()[0])`: **1.791s**
- `builder.getSourceFiles().forEach(sf => builder.getDeclarationDiagnostics(sf))`: **2.824s**
### 🕗 Version & Regression Information
- This changed between versions 5.5 and 5.6
- This changed in commit or PR https://github.com/microsoft/TypeScript/pull/59065
### 💻 Code
https://github.com/alan-agius4/ts-getDeclarationDiagnostics
### 🙁 Actual behavior
Regression in performance
### 🙂 Expected behavior
Similar performance
### Additional information about the issue
_No response_
|
Needs Investigation
|
low
|
Major
|
2,787,116,537
|
rust
|
Tracking issue for release notes of #133807: ci: Enable opt-dist for dist-aarch64-linux builds
|
This issue tracks the release notes text for #133807.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Internal Changes
- [ci: Enable opt-dist for dist-aarch64-linux builds](https://github.com/rust-lang/rust/pull/133807)
The ARM 64-bit compiler (AArch64) on Linux is now optimized with ThinLTO and PGO, similar to the optimizations we have already performed for the x86-64 compiler on Linux. This should make it up to 30% faster.
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @mrkajetanp, @Kobzol -- origin issue/PR authors and assignees for starting to draft text
|
T-compiler,relnotes,T-infra,relnotes-tracking-issue
|
low
|
Minor
|
2,787,117,757
|
vscode
|
VS Code workspaces on dynamic set of subfolders
|
Type: <b>Feature Request</b>
Hi,
I am developing in a big monorepo with hundreds of projects. I would like to create a code-workspace file that treats each subfolder of a given folder as its own project.
Like
- packages/project1
- packages/project2
- ...
- packages/project500
Currently, code-workspace files need to list every project explicitly.
I would like to be able to specify a subfolder and have it dynamically added. So instead of
```
{
"folders": [
{
"path": "packages/project1"
},
{
"path": "packages/project500"
},
]
}
```
I would like:
```
{
"rootFolders": [
{
"path": "packages"
}
]
}
```
and VS Code should dynamically show each subfolder of packages as a project.
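Until something like `rootFolders` exists, one workaround is to regenerate the `.code-workspace` file from the directory listing. A hedged sketch (the `packages` path and `monorepo.code-workspace` filename are illustrative, not part of any VS Code API):

```python
import json
from pathlib import Path

def build_workspace(root):
    """Return a VS Code workspace dict with one folder entry per subdirectory of `root`."""
    folders = [{"path": p.as_posix()} for p in sorted(Path(root).iterdir()) if p.is_dir()]
    return {"folders": folders}

# "." stands in for the monorepo's packages/ folder here; write the result
# to e.g. monorepo.code-workspace and re-run whenever projects are added.
workspace_json = json.dumps(build_workspace("."), indent=2)
print(workspace_json)
```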
VS Code version: Code - Insiders 1.97.0-insider (c594d55bae90276d174cea4ddf2901694d4ebb3e, 2025-01-14T05:04:07.371Z)
OS version: Windows_NT x64 10.0.26120
Modes:
Remote OS version: Linux x64 6.5.0-1025-azure
<!-- generated by issue reporter -->
|
feature-request,workbench-multiroot
|
low
|
Minor
|
2,787,131,386
|
deno
|
node:dns resolver ignores TTL option
|
Version: Deno 2.1.5
```ts
import {resolve4} from "node:dns/promises";
resolve4("deno.com", {ttl: true}).then(console.log);
```
Expected result:
```
[{ address: '34.120.54.55', ttl: 12345 }]
```
Actual result:
```
[ "34.120.54.55" ]
```
Deno docs are aware of possible object being returned: https://docs.deno.com/api/node/dns/promises/~/resolve4
Node docs specify the behaviour for `ttl: true`: https://nodejs.org/api/dns.html#dnspromisesresolve4hostname-options
It makes `cacheable-lookup` fail because the library sets the option and expects the object to be returned:
```
error: Uncaught TypeError: Cannot create property 'family' on string '34.120.54.55'
at CacheableLookup._resolve (file:///my_home/node_modules/cacheable-lookup/source/index.js:264:20)
at eventLoopTick (ext:core/01_core.js:175:7)
at async CacheableLookup.queryAndCache (file:///my_home/node_modules/cacheable-lookup/source/index.js:347:17)
at async CacheableLookup.query (file:///my_home/node_modules/cacheable-lookup/source/index.js:235:20)
at async CacheableLookup.lookupAsync (file:///my_home/node_modules/cacheable-lookup/source/index.js:178:18)
```
|
good first issue,node compat
|
low
|
Critical
|
2,787,143,326
|
vscode
|
sign in problem for copilot
|
Type: <b>Bug</b>
The sign-in button does not respond when clicked.
VS Code version: Code 1.96.3 (91fbdddc47bc9c09064bf7acf133d22631cbf083, 2025-01-09T18:14:09.060Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-12500H (16 x 3110)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.70GB (5.74GB free)|
|Process Argv|--crash-reporter-id 63da4b5a-e8c7-4350-a2e9-217cdaf5bcd7|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (17)</summary>
Extension|Author (truncated)|Version
---|---|---
java-run|cao|1.1.4
vscode-javac|geo|0.2.46
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
jupyter-keymap|ms-|1.1.2
java|red|1.38.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt551:31179978
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter -->
https://github.com/user-attachments/assets/fcde6bc5-2c57-4e12-b079-ed4b00dd30ed
|
info-needed
|
low
|
Critical
|
2,787,163,328
|
flutter
|
Render GPU back buffer into Flutter control
|
### Use case
I'd like to be able to render an existing buffer from the GPU into a Flutter control. The buffer should remain on the GPU with no GPU -> CPU or CPU -> GPU copying.
### Proposal
I'd like to be able to render an existing buffer from the GPU into a Flutter control. The buffer should remain on the GPU with no GPU -> CPU or CPU -> GPU copying.
|
waiting for customer response,in triage
|
low
|
Minor
|
2,787,193,050
|
pytorch
|
Matmul with int32 parameters on Intel GPU leads to errors
|
### 🐛 Describe the bug
torch.matmul with int32 parameters leads to errors, when running on XPU (Intel GPU) in the following program.
```python
import numpy as np
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.val = torch.nn.Parameter(torch.ones([1], dtype=torch.int32), requires_grad=False)
def forward(self, *args):
val = self.val
out = torch.matmul(val, args[0])
return (out)
m = Model()
inp = [np.ones([1,1], np.int32)]
m.to('cpu')
output1 = m(*[torch.from_numpy(v).to('cpu') for v in inp])
print(output1)
m.to('xpu')
output2 = m(*[torch.from_numpy(v).to('xpu') for v in inp])
print(output2)
```
### **Error Logs**
```bash
tensor([1], dtype=torch.int32)
Traceback (most recent call last):
File "/xxx/test.py", line 23, in <module>
output2 = m(*[torch.from_numpy(v).to('xpu') for v in inp])
File "/home/xxx/anaconda3/envs/intel-gpu-pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxx/anaconda3/envs/intel-gpu-pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/xxx/test.py", line 11, in forward
out = torch.matmul(val, args[0])
RuntimeError: could not create a primitive descriptor for a matmul primitive
```
### Versions
PyTorch version: 2.5.1+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtop
ology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_sin
gle ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241211
[pip3] pytorch-triton-xpu==3.1.0
[pip3] torch==2.5.1+xpu
[pip3] torchaudio==2.5.1+xpu
[pip3] torchvision==0.20.1+xpu
[conda] numpy 2.1.3 pypi_0 pypi
[conda] pytorch-triton-xpu 3.1.0 pypi_0 pypi
[conda] torch 2.5.1+xpu pypi_0 pypi
[conda] torchaudio 2.5.1+xpu pypi_0 pypi
[conda] torchvision 0.20.1+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
|
triaged,module: xpu
|
low
|
Critical
|
2,787,231,675
|
ollama
|
Support for llamaindex/vdr-2b-multi-v1: Multilingual Visual Document Retrieval Model
|
vdr-2b-multi-v1 is a cutting-edge multilingual embedding model designed for visual document retrieval across various languages and domains. The model encodes document page screenshots into dense single-vector representations, allowing efficient search and querying of visually rich multilingual documents without OCR or data extraction pipelines.
https://huggingface.co/llamaindex/vdr-2b-multi-v1
https://huggingface.co/blog/vdr-2b-multilingual
Highlights:
- Multilingual Training: Trained on Italian, Spanish, English, French, and German, forming a dataset of 500k high-quality samples.
- Low VRAM and Faster Inference: 3x faster inference with only 30% of the image tokens used by its base model.
- Cross-Lingual Retrieval: Search German documents using Italian queries with superior accuracy.
- Matryoshka Representation Learning (MRL): Enables dimensional reduction while maintaining embedding quality, optimizing both retrieval speed and storage.
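Matryoshka Representation Learning means a full embedding can be truncated to its leading dimensions and re-normalized while remaining usable for retrieval. A plain-Python sketch of that consumer-side operation (the 8-dim vector is made up for illustration; the real model emits much larger embeddings):

```python
import math

def truncate_embedding(vec, dims):
    """Keep the first `dims` components of an MRL-style embedding and L2-normalize."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [0.4, 0.3, 0.2, 0.1, 0.05, 0.05, 0.02, 0.01]
small = truncate_embedding(full, 4)
assert len(small) == 4
assert abs(sum(x * x for x in small) - 1.0) < 1e-9  # unit length after renorm
```

Cosine similarity on the truncated vectors then trades a little quality for less storage and faster search.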
Why Include This Model?
- Multilingual Applications: Especially beneficial for regions like Europe, where multilingual documents are prevalent.
- Performance and Efficiency: Outperforms previous benchmarks in terms of speed, memory efficiency, and retrieval accuracy.
- Open Source Contributions: Accompanied by the largest open-source multilingual dataset for visual document retrieval (vdr-multilingual-train).
|
model request
|
low
|
Major
|
2,787,236,868
|
pytorch
|
Unable to build with ATEN_THREADING=TBB option
|
While the doc [here](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html) says we could set the `ATEN_THREADING` build option to TBB, I encountered the following error:
```
<- omitted previous log for brevity ->
-- Looking for backtrace
-- Looking for backtrace - found
-- backtrace facility detected in default set of libraries
-- Found Backtrace: /usr/include
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Success
-- Using ATen parallel backend: TBB
CMake Error at caffe2/CMakeLists.txt:38 (message):
Unknown ATen parallel backend: TBB
```
It seems the CMake build does not support this option yet?
https://github.com/pytorch/pytorch/blob/95b41d2aa43c606d65e127d4825c08baf9fcacd9/caffe2/CMakeLists.txt#L38
cc @malfet @seemethere @svekars @brycebortree @sekyondaMeta @AlannaBurke
|
module: build,module: docs,triaged,module: tbb
|
low
|
Critical
|
2,787,255,507
|
flutter
|
TextEditingValue selection default value doesn't work as expected.
|
### Steps to reproduce
1) Create a `TextEditingController` on a `TextFormField` with the selection parameter being `selection: const TextSelection.collapsed(offset: -1)`
2) Enter a text in the text field.
NB: I'm using bloc for state management but that shouldn't have any effect on this ticket. You can reproduce with set state.
### Expected results
I expect no text selection to occur. To achieve this behaviour I currently have to use `offset: state.atSign.length`.
### Actual results
My text is replaced every time I enter new text, as shown in the video.
### Code sample
<details open><summary>Actual Results Code</summary>
```dart
// the default selection shown for clarity
controller.value =
TextEditingValue(text: state.atSign, selection: const TextSelection.collapsed(offset: -1));
return TextFormField(
controller: controller,
onChanged: (atsign) {
if (!atsign.startsWith('@')) {
atsign = '@$atsign';
}
context.read<OnboardingCubit>().setState(
atSign: atsign,
);
}
)
```
</details>
<details open><summary>Expected Results Code</summary>
```dart
controller.value =
TextEditingValue(text: state.atSign, selection: TextSelection.collapsed(offset: state.atSign.length));
return TextFormField(
controller: controller,
onChanged: (atsign) {
if (!atsign.startsWith('@')) {
atsign = '@$atsign';
}
context.read<OnboardingCubit>().setState(
atSign: atsign,
rootDomain: widget.options[atsign]?.rootDomain,
);
},
)
```
</details>
### Screenshots or Video
<details open>
<summary>Actual Results Video</summary>
[Upload media here](https://github.com/user-attachments/assets/f008cbd0-7cdd-4aec-a4af-f688d840fe2e)
</details>
<details open>
<summary>Expected Results Video</summary>
[Upload media here](https://github.com/user-attachments/assets/eb80c34b-7d03-481f-b44d-fe0bd50a2034)
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Running flutter doctor...
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.2, on macOS 14.6.1 23G93 darwin-arm64, locale en-GY)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.96.3)
[✓] Connected device (3 available)
[✓] Network resources
• No issues found!
```
</details>
|
waiting for customer response,in triage
|
low
|
Major
|
2,787,256,764
|
vscode
|
VSCode Insiders (1.97.0) Powershell shell (terminal) Integration Issue
|
Type: <b>Bug</b>
I'm using Miniconda to manage my Python virtual environments. Each time I open a terminal (PowerShell), junk ANSI text is written after the selected virtual environment name. This started a few days ago.
I have reinstalled VS Code Insiders and all the extensions (from bare metal), and the issue is the same each time, in all of my virtual envs.
Steps to reproduce:
1. Create an conda vistual environment
2. Open a Python project (folder)
3. Select the virtual environment
4. Open a powershell Terminal (CTRL+SHIFT+P) from the palette (Create new terminal)
5. Terminal is opened with the right Virtual Environment selected
6. Hit Enter (or any command)
7. Junk (it seems to come from some profile info) is displayed between the (Virtual Env) prefix and the prompt >
Example of junk data (anonymized):
(Sandbox) rs\x5c\x5c[myUSer]\x5c\x5cOneDrive - [myCompany]\x5c\x5cDocuments\x5c\x5cWindowsPowerShell\x5c\x5cModules\x3bC:\x5c\x5cProgram Files\x5c\x5cWindowsPowerShell\x5c\x5cModules\x3bC:\x5c\x5cWINDOWS\x5c\x5csystem32\x5c\x5cWindowsPowerShell\x5c\x5cv1.0\x5c\x5cModules","SSL_CERT_DIR":"C:\x5c\x5cUsers\x5c\x5c[myUSer]\x5c\x5cAppData\x5c\x5cLocal\x5c\x5cminiconda3\x5c\x5cenvs\x5c\x5cSandbox\x5c\x5cLibrary\x5c\x5cssl\x5c\x5ccerts","SNC_LIB_64":"C:\x5c\x5cProgram Files\x5c\x5cSAP\x5c\x5cFrontEnd\x5c\x5cSecureLogin\x5c\x5clib\x5c\x5csapcrypto.dll","CONDA_PREFIX_2":"C:\x5c\x5cUsers\x5c\x5c[myUSer]\x5c\x5cAppData\x5c\x5cLocal\x5c\x5cminiconda3\x5c\x5cenvs\x5c\x5cSandbox","ProgramFiles(x86)":"C:\x5c\x5cProgram Files (x86)","HOMEDRIVE":"C:","OS":"Windows_NT","PUBLIC":"C:\x5c\x5cUsers\x5c\x5cPublic","OneDrive":"C:\x5c\x5cUsers\x5c\x5c[myUSer]\x5c\x5cOneDrive - [myCompany]","VSCODE_GIT_IPC_HANDLE":"\x5c\x5c\x5c\x5c.\x5c\x5cpipe\x5c\x5cvscode-git-ca0238292a-sock","VSCODE_GIT_ASKPASS_EXTRA_ARGS":""};dba6be80-468f-423d-90bf-3bb14c89bd5bPS L:\TFS\PY_FOS\exportPRODEC>
=> Also, I can add that with the same virtual env, I don't have any problem with regular VS Code
=> No problem if I open a cmd terminal
Thanks for your help
VS Code version: Code - Insiders 1.97.0-insider (c594d55bae90276d174cea4ddf2901694d4ebb3e, 2025-01-14T05:04:07.371Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-1365U (12 x 2688)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.66GB (14.75GB free)|
|Process Argv|--crash-reporter-id e67fee66-9416-42f3-ab8b-fac7b6887731|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (55)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-sqlite|ale|0.14.1
arepl|alm|3.0.0
pythonsnippets|frh|1.0.2
copilot|Git|1.257.1312
copilot-chat|Git|0.24.2025011401
todo-tree|Gru|0.0.226
flux|inf|1.0.4
vsc-python-indent|Kev|1.19.0
tal|KNo|0.7.11
csvtomarkdown|mar|0.0.1
rainbow-csv|mec|3.14.0
git-graph|mhu|1.30.0
csdevkit|ms-|1.15.34
csharp|ms-|2.61.28
vscode-dotnet-runtime|ms-|2.2.4
data-workspace-vscode|ms-|0.5.0
mssql|ms-|1.27.0
sql-bindings-vscode|ms-|0.4.0
sql-database-projects-vscode|ms-|1.4.5
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.14.0
isort|ms-|2023.10.1
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
datawrangler|ms-|1.16.0
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
powershell|ms-|2024.5.2
remote-explorer|ms-|0.4.3
vscode-copilot-vision|ms-|0.2.2024111316
vscode-speech|ms-|0.12.1
indent-rainbow|ode|8.3.1
tmlanguage|ped|1.0.0
csv-to-table|php|1.4.1
vscode-thunder-client|ran|2.33.2
java|red|1.38.0
vscode-xml|red|0.27.2
vscode-yaml|red|1.15.0
LiveServer|rit|5.7.9
sonarlint-vscode|Son|4.14.1
pdf|tom|1.2.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
vscode-icons|vsc|12.10.0
markdown-all-in-one|yzh|3.6.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
2i9eh265:30646982
962ge761:30841072
pythonnoceb:30776497
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
dvdeprecation:31040973
dwnewjupytercf:31046870
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
c3hdf307:31184662
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter -->
|
bug
|
low
|
Critical
|
2,787,280,299
|
pytorch
|
Is it possible to remove NCCL submodule and use only nccl binaries from pypi instead ?
|
### 🐛 Describe the bug
Currently we do both: we have a submodule:
https://github.com/pytorch/pytorch/tree/main/third_party/nccl
And we use pypi nccl binaries:
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L62
And we have code to check that the submodule version is consistent with the pypi version, here:
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L434
We also build latest nccl from source here:
https://github.com/pytorch/pytorch/blob/main/.ci/docker/common/install_cuda.sh#L74
This prevents us from having different nccl binaries for different CUDA builds. For instance, the latest nccl as of Jan 14 is [2.24.3](https://pypi.org/project/nvidia-nccl-cu12/2.24.3/), but we are still using 2.21.5 since it is compatible with CUDA 11.8.
We would prefer to keep nccl 2.21.5 for CUDA 11.8 builds, but move the CUDA 12.4 and 12.6 builds to a newer nccl version.
Hence the question: what is the nccl submodule used for, and can we remove it and rely only on the binaries?
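One possible direction, sketched purely as an illustration (the version numbers come from the discussion above; the table name and helper are hypothetical, not an endorsed proposal): pin nccl per CUDA version in the binary build matrix rather than via a single submodule version.

```python
# Hypothetical per-CUDA-version NCCL pin table for the binary build matrix.
# 2.21.5 is kept for CUDA 11.8 for compatibility; newer CUDA builds could
# move to a newer wheel independently (2.24.3 shown as an assumed example).
NCCL_PIN = {
    "11.8": "2.21.5",
    "12.4": "2.24.3",
    "12.6": "2.24.3",
}

def nccl_requirement(cuda_version):
    """Build the pip requirement string for the nccl wheel matching a CUDA build."""
    major = cuda_version.split(".")[0]
    return f"nvidia-nccl-cu{major}=={NCCL_PIN[cuda_version]}"

assert nccl_requirement("11.8") == "nvidia-nccl-cu11==2.21.5"
assert nccl_requirement("12.6") == "nvidia-nccl-cu12==2.24.3"
```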
cc @malfet @seemethere @ptrblck @msaroufim @eqy @albanD @kwen2501
### Versions
2.7
|
module: build,module: cuda,triaged,module: nccl
|
low
|
Critical
|
2,787,298,410
|
pytorch
|
[ARM] multiple test failures in TestQuantizedConv on Aarch64
|
### 🐛 Describe the bug
After enabling `test_quantization` we consistently see 2 test failures on all AArch64 platforms:
**TestQuantizedConv.test_qconv2d_relu** and **TestQuantizedConv.test_qconv2d**
```
The failure output is
AssertionError:
Arrays are not almost equal to 0 decimals
X: tensor([[[[0.0000, 0.0000, 2.4028, ..., 0.0000, 0.0000, 3.6042],
... contd..
size=(1, 54, 10, 7), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=1.2014017519687867,
zero_point=0), W: tensor([[[[ 0.0000]],
.... contd...
quantization_scheme=torch.per_tensor_affine, scale=0.32774215094962955,
zero_point=0), b: None, strides: (1, 1),
pads: (0, 0), o_pads: None, dilations: (1, 1),
groups: 27, y_s: 4.200000005809841, y_zp: 0
Mismatched elements: 23 / 9450 (0.243%)
Max absolute difference: 255
Max relative difference: 255.
x: array([[[[0, 0, 0, ..., 0, 1, 1],
[0, 1, 0, ..., 0, 1, 0],
[0, 0, 0, ..., 1, 1, 0],...
y: array([[[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],...
To execute this test, run the following from the base repo dir:
python test/quantization/core/test_quantized_op.py TestQuantizedConv.test_qconv2d
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
The assertion is here
https://github.com/pytorch/pytorch/blob/main/test/quantization/core/test_quantized_op.py#L5182
These tests use hypothesis library and the error doesn't happen for all input combinations, but below is an example of some inputs to test_qconv2d which cause this issue.
```
W_scale=[1.3],
W_zero_point=[0],
X_scale=1.2,
X_zero_point=0,
Y_scale=1,
Y_zero_point=0,
batch_size=1,
dilation=1,
groups=27,
height=10,
input_channels_per_group=2,
kernel_h=1,
kernel_w=1,
output_channels_per_group=5,
pad_h=0,
pad_w=0,
stride_h=1,
stride_w=1,
width=7):
```
This makes it difficult to reproduce, since hypothesis fixes the seed only when CI=1. Also, the line above (`python test/quantization/core/test_quantized_op.py TestQuantizedConv.test_qconv2d`) doesn't work, as `test_quantized_op.py` doesn't have an `if __name__ == "__main__"` entrypoint.
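For reference, a minimal sketch of the standard unittest entrypoint that such a test file would need for the printed repro command to work (the class and test names here are hypothetical stand-ins, not the real quantized-conv tests):

```python
import unittest

class TestQConvSketch(unittest.TestCase):
    def test_trivial(self):
        # Stand-in for a real quantized-conv assertion.
        self.assertEqual(2 + 2, 4)

if __name__ == "__main__":
    # Adding an entrypoint like this lets the file be run directly,
    # e.g. `python test_file.py TestQConvSketch.test_trivial`.
    # exit=False avoids raising SystemExit when run in-process.
    unittest.main(exit=False)
```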
I have traced the source of the bug to ideep/oneDNN, but am raising this issue here as a reference and to keep track of the fix.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git2e42be0
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git2e42be0
[conda] No relevant packages
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @mruberry @ZainRizvi @malfet @snadampal @milpuz01
|
oncall: quantization,module: tests,module: arm
|
low
|
Critical
|
2,787,301,551
|
opencv
|
add support for multi-channel (>4) TIFs.
|
### System Information
[DaVinci-Resolve-Logo.tif.zip](https://github.com/user-attachments/files/18261560/DaVinci-Resolve-Logo.tif.zip)
```
[INFO] Importing image...
[ERROR:0@0.014] global loadsave.cpp:440 imread_ imread_('/Users/fractale/Downloads/DaVinci-Resolve-Logo.tif'): can't read header: OpenCV(4.9.0) /Users/fractale/.conan2/p/b/openc62c1fdd986bb3/b/src/modules/imgcodecs/src/grfmt_tiff.cpp:150: error: (-2:Unspecified error) in function 'int cv::TiffDecoder::normalizeChannelsNumber(int) const'
> Unsupported number of channels:
> 'channels >= 1 && channels <= 4'
> where
> 'channels' is 5
[ERROR] Could not open input image file: /Users/fractale/Downloads/DaVinci-Resolve-Logo.tif
```
OpenCV doesn't support multi-channel (>4) TIFs.
|
bug,category: imgproc
|
low
|
Critical
|
2,787,312,688
|
rust
|
Overload `visit_qpath` instead of using `visit_pat` in rustdoc "jump to def" implementation
|
It's a follow-up of [this discussion](https://github.com/rust-lang/rust/pull/134216#discussion_r1882612241) where @fmease suggested an improvement to the code.
Assigning myself to do it. :)
|
C-cleanup,T-rustdoc
|
low
|
Minor
|
2,787,399,110
|
PowerToys
|
FancyZones Error
|
### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
FancyZones Editor
### Steps to reproduce
Try to resize the FancyZones.


### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,787,401,399
|
go
|
cmd/compile: intrinsify cmp.Compare on common types such as strings
|
https://github.com/golang/go/issues/61725 optimized `strings.Compare`, which is great, but it did not optimize `cmp.Compare[string]`, which is otherwise equivalent.
This is leading to users learning that they should avoid `cmp.Compare[string]` for the sake of performance, even writing linters for it like https://github.com/tklauser/lintcomparestrings, which in my opinion is really unfortunate. For the same reason, I find changes like https://go-review.googlesource.com/c/go/+/642038 unfortunate and unnecessary.
The compiler should be clever enough to optimize the generic `cmp.Compare` function just as well as specialized functions such as `strings.Compare` or any others that might exist for common comparable types such as integers. Then the developers don't have to remember facts about which one of them is faster.
Personally, I also find it pretty nice to consistently use `cmp.Compare`. Needing to mix different compare functions in an expression for the sake of performance is a bit odd.
|
Performance,NeedsInvestigation,compiler/runtime
|
medium
|
Major
|
2,787,480,452
|
kubernetes
|
eviction_manager does not attempt to cleanup unused images before evicting pods (with imagefs)
|
### What happened?
TL;DR: One of our nodes being out of ephemeral-storage (from imagefs being beyond the hard eviction threshold) led to a pod eviction, but the eviction_manager did not try to reclaim unused images before attempting eviction. This happens shortly after the pod is setup / started by the kubelet.
The metrics for that node (image below) point to imageFs filling up (and not nodefs).
We also had `Threshold quantity: 4668496467, available: 4527052Ki` in the eviction event, which matches (roughly) `size of our imagefs * imagefs.available threshold`.
What seems surprising is that the kubelet apparently does not attempt to reclaim unused images.
If I'm following the code flow correctly here (all the code links are to release-1.29, since that's where I hit this) https://github.com/kubernetes/kubernetes/blob/86e25a07e279516eb13af436ca5706b1806ea604/pkg/kubelet/eviction/eviction_manager.go#L456-L459 and here https://github.com/kubernetes/kubernetes/blob/86e25a07e279516eb13af436ca5706b1806ea604/pkg/kubelet/eviction/helpers.go#L1201-L1210
we should always hit the following code when hitting signalImageFSAvailable https://github.com/kubernetes/kubernetes/blob/86e25a07e279516eb13af436ca5706b1806ea604/pkg/kubelet/images/image_gc_manager.go#L390-L394
However, this isn't always the case; see the logs below. In the first one, this does happen as it should, and no eviction occurred. In the second one, the image GC is not called and eviction happens:
Image GC, no eviction
```
janv. 04 06:11:53 qpk8s-node-033 kubelet[26615]: I0104 06:11:53.500246 26615 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="REDACTED-dev/cyclades-dev1-job-db-w74bj"
janv. 04 06:11:53 qpk8s-node-033 kubelet[26615]: I0104 06:11:53.867218 26615 kubelet.go:2465] "SyncLoop (PLEG): event for pod" pod="REDACTED-dev/REDACTED-dev1-job-db-w74bj" event={"ID":"9e7
7962a-9ef4-4003-a620-8fc9f99a65de","Type":"ContainerStarted","Data":"fed98b0d4a5008c9d4e3813afb0477c4bec6d81c9ec9ca1c5f359fd00bc08687"}
janv. 04 06:12:08 qpk8s-node-033 kubelet[26615]: I0104 06:12:08.959726 26615 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
janv. 04 06:12:08 qpk8s-node-033 kubelet[26615]: I0104 06:12:08.960139 26615 container_gc.go:88] "Attempting to delete unused containers"
janv. 04 06:12:08 qpk8s-node-033 kubelet[26615]: I0104 06:12:08.960699 26615 scope.go:117] "RemoveContainer" containerID="74b79d8dd06301f2c806ba5227797fb93fe38b96dc33caad4803f74d6440f1b8"
janv. 04 06:12:08 qpk8s-node-033 kubelet[26615]: I0104 06:12:08.967179 26615 scope.go:117] "RemoveContainer" containerID="231f370f8d36173c94c9560baee42bf1fe108573ff322a1c4f542233671078db"
janv. 04 06:12:08 qpk8s-node-033 kubelet[26615]: I0104 06:12:08.971757 26615 scope.go:117] "RemoveContainer" containerID="893a23f56984eb2f53dcec21d138ad84a93969dcabf6e72b5a270f8e45351e52"
janv. 04 06:12:09 qpk8s-node-033 kubelet[26615]: I0104 06:12:09.636342 26615 image_gc_manager.go:391] "Attempting to delete unused images"
janv. 04 06:12:09 qpk8s-node-033 kubelet[26615]: I0104 06:12:09.638685 26615 image_gc_manager.go:447] "Removing image to free bytes" imageID="sha256:cc42097299cf96c7b08e31669dddb2af764381fd
d167cfaf25f94f3989a22e6e" size=853261112 runtimeHandler=""
janv. 04 06:12:09 qpk8s-node-033 kubelet[26615]: I0104 06:12:09.853135 26615 image_gc_manager.go:447] "Removing image to free bytes" imageID="sha256:78abc00a071160e2902c30cea3350f7dc1250b67
3410231f989103d22bd4e9cb" size=717064073 runtimeHandler=""
<multiples lines of the above>
janv. 04 06:12:15 qpk8s-node-033 kubelet[26615]: I0104 06:12:15.928391 26615 eviction_manager.go:373] "Eviction manager: able to reduce resource pressure without evicting pods." resourceName="ephemeral-storage"
```
No image GC, eviction
```
janv. 06 12:43:37 qpk8s-node-033 kubelet[26615]: I0106 12:43:37.138045 26615 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2"
janv. 06 12:43:37 qpk8s-node-033 kubelet[26615]: I0106 12:43:37.682549 26615 kubelet.go:2465] "SyncLoop (PLEG): event for pod" pod="REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2" event={"ID":"bad3f101-5920-4a11-9c8a-c0dd40326df2","Type":"ContainerStarted","Data":"934b77ca1f12f514ff7d764f31c77607081e06d50a0bbe2d2937b19b5f523c99"}
janv. 06 12:44:17 qpk8s-node-033 kubelet[26615]: I0106 12:44:17.658612 26615 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
janv. 06 12:44:17 qpk8s-node-033 kubelet[26615]: I0106 12:44:17.658831 26615 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
janv. 06 12:44:17 qpk8s-node-033 kubelet[26615]: I0106 12:44:17.658913 26615 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2","kube-system/nginx-proxy-qpk8s-node-033","logging/filebeat-filebeat-xvfqd","kube-system/calico-node-rwk5j","prometheus/node-exporter-xfvp8","kured/kured-l2b8t","REDACTED-dev/REDACTED-dev12-grp0-pcorr-56f674d4f-hb6jh","REDACTED-dev/REDACTED-dev4-grp0-pgest-595d778d79-sbkww","kube-system/kube-proxy-kp7gw","kube-system/nodelocaldns-q6mg4"]
janv. 06 12:44:22 qpk8s-node-033 kubelet[26615]: I0106 12:44:22.456507 26615 kubelet_node_status.go:679] "Recording event message for node" node="qpk8s-node-033" event="NodeHasDiskPressure"
janv. 06 12:44:23 qpk8s-node-033 kubelet[26615]: I0106 12:44:23.769177 26615 kubelet.go:2465] "SyncLoop (PLEG): event for pod" pod="REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2" event={"ID":"bad3f101-5920-4a11-9c8a-c0dd40326df2","Type":"ContainerStarted","Data":"504dd0177d3f38094dbfbce7bc48e8673e4cc9f97f2b6b6ce2eeda0a20609f54"}
janv. 06 12:44:23 qpk8s-node-033 kubelet[26615]: I0106 12:44:23.769271 26615 kuberuntime_container.go:770] "Killing container with a grace period" pod="REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2" podUID="bad3f101-5920-4a11-9c8a-c0dd40326df2" containerName="REDACTED-dev9-pgest" containerID="containerd://504dd0177d3f38094dbfbce7bc48e8673e4cc9f97f2b6b6ce2eeda0a20609f54" gracePeriod=30
janv. 06 12:44:23 qpk8s-node-033 kubelet[26615]: I0106 12:44:23.782222 26615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2" podStartSLOduration=1.639754381 podStartE2EDuration="47.782147845s" podCreationTimestamp="2025-01-06 12:43:36 +0100 CET" firstStartedPulling="2025-01-06 12:43:37.409859673 +0100 CET m=+3870058.832937451" lastFinishedPulling="2025-01-06 12:44:23.552253138 +0100 CET m=+3870104.975330915" observedRunningTime="2025-01-06 12:44:23.78166704 +0100 CET m=+3870105.204744839" watchObservedRunningTime="2025-01-06 12:44:23.782147845 +0100 CET m=+3870105.205225634"
janv. 06 12:44:24 qpk8s-node-033 kubelet[26615]: I0106 12:44:24.044767 26615 logs.go:325] "Finished parsing log file" path="/var/log/pods/REDACTED-dev_REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2_bad3f101-5920-4a11-9c8a-c0dd40326df2/REDACTED-dev9-pgest/0.log"
janv. 06 12:44:27 qpk8s-node-033 kubelet[26615]: E0106 12:44:27.659220 26615 eviction_manager.go:614] "Eviction manager: pod failed to evict" err="timeout waiting to kill pod" pod="REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2"
janv. 06 12:44:27 qpk8s-node-033 kubelet[26615]: I0106 12:44:27.659261 26615 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["REDACTED-dev/REDACTED-dev9-grp1-pgest-84d9d6ff47-wflf2"]
```
Here is the disk metrics matching those logs (/var/lib/containerd -> imagefs, /var -> nodefs (includes /var/lib/kubelet) ):

### What did you expect to happen?
I expected the image GC to be called in both cases by the eviction manager.
### How can we reproduce it (as minimally and precisely as possible)?
I don't have a reproducer yet. I'll try to find the time to craft one, but I can't make any guarantees currently.
Possibly [ImageGCNoEviction](https://github.com/kubernetes/kubernetes/blob/86e25a07e279516eb13af436ca5706b1806ea604/test/e2e_node/eviction_test.go#L111) might help? I'm not completely sure how to use this, though 🤔
### Anything else we need to know?
While looking for this in the issues and code, I found the following PR, which might have fixed the problem starting from 1.32:
- #127874 (merged in 1.32)
This led me to formulate the theory that maybe the difference between the two cases was a different trigger:
- case 1 -> adding container images exceeded the threshold, triggering signalImageFsAvailable, which led to correctly GCing the unused images
- case 2 -> a container writing to the containerfs (i.e. not a volume mount) triggered signalContainerFsAvailable, which is not handled before the above PR.
I haven't delved enough into that code to know if that theory is plausible, though.
Possibly relevant as well : https://github.com/kubernetes/kubernetes/commit/26923b91e8cbd2d409e5d177ddd509429b76cb35 (split disk kep implementation)
Slack thread : https://kubernetes.slack.com/archives/C0BP8PW9G/p1736523264787059
/sig node
/area kubelet
/cc @AnishShah @kannon92
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.29.10
```
</details>
### Cloud provider
<details>
On-premise, on a vsphere infra (no cloud controller)
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.10 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.10"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.10 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.10
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.10"
$ uname -a
Linux <REDACTED> 4.18.0-553.27.1.el8_10.x86_64 #1 SMP Fri Oct 18 06:18:15 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
kubespray
</details>
### Container runtime (CRI) and version (if applicable)
<details>
containerd 1.7.22
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
kind/bug,area/kubelet,sig/node,triage/needs-information,needs-triage
|
low
|
Critical
|
2,787,503,623
|
kubernetes
|
DNS latency when a CoreDNS pod is deleted
|
### What happened?
Hello,
We noticed that when one of our CoreDNS pods is deleted, some client pods experience latency on their DNS queries.
This happens when the pod is completely deleted from Kubernetes, after the `terminating` phase. When it happens, all DNS requests from some pods (not all of them; it seems random) are "stuck" for a few seconds. The delay is the timeout value in the pod's `resolv.conf` file: 5 seconds by default, but if I set a timeout of 3 seconds in the pod spec's `dnsConfig.options`, it is 3 seconds at maximum.
You can see on this screenshot what it looks like on the application side (traces are generated using OpenTelemetry + [httptrace](https://pkg.go.dev/net/http/httptrace)): when the coredns pod is removed (not in the terminating phase; completely removed, so after the `lameduck` period, and we even tried 17 seconds for lameduck), all requests wait for 5 seconds. We can see span durations decreasing because new requests all wait until the system can send requests again:

We ran `tcpdump` (`tcpdump -w capture.pcap udp port 53`) on the pod namespace (using `nsenter`) and we can indeed see that during 5 seconds, no DNS requests are visible (look at the traces and the wireshark timestamps, they are matching):

We're using Karpenter on our Kubernetes clusters so CoreDNS pods are destroyed regularly. To mitigate the issue, we moved the CoreDNS pods to stable nodes but at every node upgrade, the problem occurs so it's not a good long-term solution (it is also more expensive for us to have dedicated nodes for CoreDNS).
### What did you expect to happen?
We didn't expect any latency during CoreDNS rollouts.
### How can we reproduce it (as minimally and precisely as possible)?
**On AWS EKS**
A simple `kubectl rollout restart -n kube-system deployment coredns` is enough to impact our applications.
**On Exoscale SKS**
I created a 1.31.4 cluster (and also reproduced with kube-proxy 1.32.0 on it) with 5 CoreDNS replicas, and then deployed an application generating DNS traffic on the cluster (it's the only app running on the cluster):
```go
package main
import (
"context"
"errors"
"fmt"
"net"
"os"
"strconv"
"time"
)
func resolve(ctx context.Context, domain string) ([]net.IP, error) {
addrs, err := net.DefaultResolver.LookupIPAddr(ctx, domain)
if err != nil {
return nil, err
}
result := make([]net.IP, len(addrs))
for i, ia := range addrs {
result[i] = ia.IP
}
return result, nil
}
func main() {
domain := os.Getenv("DOMAIN")
if domain == "" {
panic(errors.New("DOMAIN env var is empty"))
}
parallelism, err := strconv.Atoi(os.Getenv("PARALLELISM"))
if err != nil {
panic(err)
}
interval, err := strconv.Atoi(os.Getenv("INTERVAL"))
if err != nil {
panic(err)
}
for i := 0; i < parallelism; i++ {
ticker := time.NewTicker(time.Duration(interval) * time.Millisecond)
go func() {
for {
select {
case <-ticker.C:
ctx, cancel := context.WithTimeout(context.Background(), 7*time.Second)
start := time.Now().UnixMilli()
_, err := resolve(ctx, domain)
cancel()
end := time.Now().UnixMilli()
duration := end - start
if err != nil {
fmt.Printf("%d: resolved in %d milliseconds with error: %s\n", start, duration, err.Error())
} else {
fmt.Printf("%d: resolved in %d milliseconds\n", start, duration)
}
}
}
}()
}
time.Sleep(24000 * time.Second)
}
```
I then deploy this code using this deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: dns-test
spec:
replicas: 2
selector:
matchLabels:
app: dns-test
template:
metadata:
labels:
app: dns-test
spec:
containers:
- name: dns
image: mcorbin/dnstest:0.0.3
resources:
limits:
memory: "300Mi"
requests:
cpu: "0.5"
memory: "300Mi"
env:
- name: DOMAIN
value: "metrics-server.kube-system.svc.cluster.local."
- name: PARALLELISM
value: "4"
- name: INTERVAL
value: "50"
```
From time to time I can see slow DNS queries after rollout, similar to what I see on EKS:
```
1736868482815: resolved in 5003 milliseconds
1736868482815: resolved in 5003 milliseconds
1736868482815: resolved in 5003 milliseconds
1736868482815: resolved in 5003 milliseconds
```
### Anything else we need to know?
We already investigated a lot of things:
- increased lameduck option on CoreDNS to 17 seconds: no changes
- It's not a CoreDNS performance issue (metrics are good, no latency at all which was verified by enabling debug logs).
- It's not a kube-proxy reconciliation latency issue: kube-proxy logs/metrics are good, endpoints are correctly updated
- We're mostly AWS EKS users but it seems we're also able to reproduce the issue on Exoscale SKS offering.
I suspect a conntrack issue when conntrack entries are removed by kube-proxy. I indeed noticed that cleaning the conntrack entries manually for the CoreDNS IPs caused the same symptoms.
### Kubernetes version
<details>
We reproduced the issue on several Kubernetes versions/cloud providers:
On AWS EKS:
```
Client Version: v1.30.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.8-eks-2d5f260
```
On Exoscale SKS
```console
Client Version: v1.30.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.3
```
I also reproduced on Exoscale SKS with server `v1.31.3` and kube-proxy `v1.32.0` to get [this fix](https://github.com/kubernetes/kubernetes/pull/127318).
The AWS EKS Service Team also told us that they can reproduce the issue on their side on `v1.32.0` (not yet released to users).
</details>
### Cloud provider
<details>
AWS EKS, Exoscale SKS
</details>
### OS version
<details>
```console
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
SUPPORT_END="2025-06-30"
```
</details>
### Install tools
Both cases use kube-proxy with iptables mode.
### Container runtime (CRI) and version (if applicable)
_No response_
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_
|
kind/bug,needs-sig,needs-triage
|
low
|
Critical
|
2,787,506,259
|
pytorch
|
aot_inductor TIMM convit_base inference regression on dashboard
|
See https://hud.pytorch.org/benchmark/timm_models/inductor_aot_inductor?dashboard=torchinductor&startTime=Tue,%2031%20Dec%202024%2015:26:32%20GMT&stopTime=Tue,%2014%20Jan%202025%2015:26:32%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=main&lCommit=1dab79470dbecef79ba4c7d4308d8a181091e58e&rBranch=main&rCommit=01034e963c9102c6a4a666c7666afd12aee0bfb3
The model reports a pass but the speedup numbers have gone to 0, which might imply that something went wrong in the reporting process?
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @desertfire @chenyang78 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @huydhn
|
high priority,triaged,oncall: pt2,oncall: export,module: aotinductor,pt2-pass-rate-regression
|
low
|
Minor
|
2,787,516,823
|
node
|
The removal of TypeScript types may result in broken JavaScript code
|
### Version
v23.2.0
### Platform
```text
Linux 6.12.9 Fri Dec 20 10:11:49 UTC 2024 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Run the following TypeScript script with Node.js (with the `--experimental-strip-types` option, if necessary):
```ts
function mkId() {
return <T>
(x: T)=>x;
}
const id = mkId();
console.log(id(5));
```
### How often does it reproduce? Is there a required condition?
It always happens deterministically.
### What is the expected behavior? Why is that the expected behavior?
The script should print `5`.
### What do you see instead?
The script errors with:
```
console.log(id(5));
^
TypeError: id is not a function
```
### Additional information
I believe that Node strips TS types in such a way (replacing them with blanks) that the script I showed effectively gets turned into the following JS code:
```js
function mkId() {
return
(x)=>x;
}
const id = mkId();
console.log(id(5));
```
And unfortunately JavaScript interprets a `return` followed by a newline as `return;` (automatic semicolon insertion).
The same behavior happens with different (but similar) code, for instance:
```ts
function mkId() {
return <
T
>(x: T)=>x;
}
```
It is somewhat common to break generics onto a new line when the type parameters are complex (for instance, if they have long `extends` clauses), and this issue is particularly annoying because the IDE (which understands TS) doesn't flag it as an error.
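The underlying JavaScript behavior can be demonstrated without any type stripping at all (a minimal sketch, with hypothetical function names):

```javascript
// Automatic semicolon insertion: a `return` followed by a newline is
// parsed as `return;`, so the arrow function on the next line becomes
// an unreachable expression statement.
function brokenMkId() {
  return
    (x) => x; // never returned
}
function workingMkId() {
  return (x) => x;
}
console.log(brokenMkId()); // undefined
console.log(typeof workingMkId()); // "function"
```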
|
confirmed-bug,strip-types
|
low
|
Critical
|
2,787,593,946
|
pytorch
|
compile time regression 1/9
|
[TorchInductor OSS Compile Time Dashboard](https://www.internalfb.com/intern/unidash/dashboard/?tab_id=1587385408528217)
- torchbench inference: sam_fast_dynamo_benchmark 70->84
- HF inference: BartForConditionalGeneration 32->42
- TIMM inference (a lot of models regressed)
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka
|
high priority,triaged,oncall: pt2,module: startup-tracing-compile
|
medium
|
Major
|
2,787,606,748
|
flutter
|
`HotRunner._updateDevFS`: `_TypeError: Null check operator used on a null value`
|
This is a top 5 crasher for `3.27.1`
```
Thread 0 main thread_TypeError: Null check operator used on a null value
at HotRunner._updateDevFS(run_hot.dart:501)
at HotRunner._reloadSources(run_hot.dart:1002)
at <asynchronous gap>(async)
at HotRunner._hotReloadHelper(run_hot.dart:914)
at <asynchronous gap>(async)
at HotRunner.restart(run_hot.dart:788)
at <asynchronous gap>(async)
at TerminalHandler._commonTerminalInputHandler(resident_runner.dart:1733)
at <asynchronous gap>(async)
at TerminalHandler.processTerminalInput(resident_runner.dart:1786)
at <asynchronous gap>(async)
```
|
c: crash,tool,P2,team-tool,triaged-tool
|
low
|
Critical
|
2,787,613,532
|
PowerToys
|
allow users to pin their selection of PowerToys to the Quick access panel
|
### Description of the new feature / enhancement
Currently the Quick access panel that pops up from the system tray has a fixed list of PowerToys; it would be helpful if this could be customised so that users could choose which PowerToys are shown here.
### Scenario when this would be used?
Typically, whenever I open the Quick access panel, the PowerToy I want to use isn't shown and I have to remember how to open the full PowerToys panel to launch it; it would obviously be preferable to be able to choose the list shown here to include the ones I want to see.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,787,613,593
|
flutter
|
Feature Request: Built-in Flavor Support Using dart-define in Flutter
|
### Use case
**Problems:**
Currently, Flutter developers must rely on custom setups or third-party solutions to implement flavors. While dart-define provides a convenient way to pass runtime configuration variables, there is no official built-in support for managing flavors through it. Developers often have to configure multiple scripts or make manual adjustments to support flavors consistently across environments.
### Proposal
1. **Feature Request Description:** I propose integrating built-in support for managing flavors in Flutter using dart-define. This would streamline the process of configuring and communicating flavor-specific variables, eliminating the need for third-party packages or complex configurations.
2. **Solution**
Enhance the Flutter framework to natively support flavors using dart-define. This can include:
Built-in commands: Extend the flutter build and flutter run commands to accept a --flavor parameter that automatically maps to a predefined set of dart-define variables.
Configuration files: Allow defining flavor-specific configurations in a standardized file (e.g., flavors.yaml).
Documentation and examples: Provide official guidelines and examples on using flavors with dart-define.
3. **Implementation in My Project**
I implemented a flavor management solution using dart-define in my project, which works seamlessly without third-party packages. I detailed the process in my Medium article: [Command-Line Magic: Automating Flutter Builds with dart-define](https://medium.com/@sayem227/command-line-magic-no-third-party-packages-automating-flutter-builds-with-dart-define-and-apk-5415c1d25003)
Here's a summary of my setup:
Define environment variables using dart-define during the build process.
Access these variables in Dart code using const String.fromEnvironment.
Use a simple command-line script to automate builds for different flavors.
|
waiting for customer response,in triage
|
low
|
Minor
|
2,787,640,662
|
PowerToys
|
Registry Preview: allow pasting a registry key without saving it to a file
|
### Description of the new feature / enhancement
Instead of opening a .REG file, it would be useful to be able to paste the Registry key in directly to preview it. A Paste... button next to the Open... button that popped up a window into which you could paste one or more registry keys would simplify the workflow of adding a new key.
### Scenario when this would be used?
Many sites, including Microsoft Learn, provide registry keys to change OS or application behaviour as content that can be clipped - Learn even has a Copy button to copy these keys.

To use the key in Registry Preview you currently have to save it to a file (remembering to add `Windows Registry Editor Version 5.00` to the beginning of the file by hand) and then load it. It would be much quicker to have a Paste button and be able to preview the key without creating, saving and opening a file.
### Supporting information
_No response_
|
Idea-Enhancement,Cost-Small,Product-Registry Preview
|
low
|
Major
|
2,787,642,883
|
pytorch
|
`torch.profiler.record_function` doesn't register kernels from `backward` function
|
### 🐛 Describe the bug
`__profile_kernel_of_func` (the `record_function` label) shows zero timings for XPU (maybe the situation is the same for `CUDA`, but I have no way to check) unless `record_function` is used inside the `backward` function.
```python
import torch
from torch.profiler import profile, ProfilerActivity, record_function

cache_size = 256
device = "xpu"


class _attention(torch.autograd.Function):
    @staticmethod
    def forward(ctx, cache):
        ctx.save_for_backward(cache)
        return cache

    @staticmethod
    def backward(ctx, triton_do):
        cache = ctx.saved_tensors
        # with record_function("__profile_kernel_of_func"):  <- using this you can get the necessary timings
        cache[0].zero_()
        return cache


attention = _attention.apply
cache = torch.randn((128, 128), dtype=torch.float32, device=device, requires_grad=True)
triton_o = attention(cache)
triton_do = torch.randn_like(triton_o)
triton_fn = lambda: triton_o.backward(triton_do, retain_graph=True)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.XPU]) as prof:
    with record_function("__profile_kernel_of_func"):
        triton_fn()
        torch.xpu.synchronize()

print(prof.events())
```
Output:
```
# case 1 - `record_function` is not used in `backward` function and one can see that the timings are zero
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self XPU Self XPU % XPU total XPU time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
__profile_kernel_of_func 90.70% 193.820ms 90.70% 193.820ms 193.820ms 0.000us 0.00% 0.000us 0.000us 1
autograd::engine::evaluate_function: _attentionBackw... 0.01% 20.711us 3.38% 7.220ms 7.220ms 0.000us 0.00% 8.640us 8.640us 1
_attentionBackward 0.04% 86.246us 3.37% 7.199ms 7.199ms 0.000us 0.00% 8.640us 8.640us 1
aten::zero_ 0.03% 53.549us 3.33% 7.113ms 7.113ms 0.000us 0.00% 8.640us 8.640us 1
aten::fill_ 3.17% 6.772ms 3.30% 7.059ms 7.059ms 8.640us 50.00% 8.640us 8.640us 1
urEnqueueKernelLaunch 0.13% 287.035us 0.13% 287.035us 287.035us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.640us 50.00% 8.640us 8.640us 1
autograd::engine::evaluate_function: torch::autograd... 0.00% 5.280us 5.93% 12.663ms 12.663ms 0.000us 0.00% 8.640us 8.640us 1
torch::autograd::AccumulateGrad 0.02% 34.944us 5.92% 12.658ms 12.658ms 0.000us 0.00% 8.640us 8.640us 1
aten::new_empty_strided 0.00% 7.971us 0.01% 24.983us 24.983us 0.000us 0.00% 0.000us 0.000us 1
aten::empty_strided 0.01% 17.012us 0.01% 17.012us 17.012us 0.000us 0.00% 0.000us 0.000us 1
aten::copy_ 5.86% 12.514ms 5.90% 12.598ms 12.598ms 8.640us 50.00% 8.640us 8.640us 1
urEnqueueKernelLaunch 0.04% 83.934us 0.04% 83.934us 83.934us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.640us 50.00% 8.640us 8.640us 1
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 213.703ms
Self XPU time total: 17.280us
# case 2 - `record_function` is used in `backward` function
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self XPU Self XPU % XPU total XPU time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
__profile_kernel_of_func 90.60% 218.883ms 90.60% 218.883ms 218.883ms 0.000us 0.00% 0.000us 0.000us 1
autograd::engine::evaluate_function: _attentionBackw... 0.01% 20.057us 3.60% 8.690ms 8.690ms 0.000us 0.00% 8.320us 8.320us 1
_attentionBackward 0.03% 81.833us 3.59% 8.670ms 8.670ms 0.000us 0.00% 8.320us 8.320us 1
__profile_kernel_of_func 0.10% 230.601us 3.55% 8.588ms 8.588ms 0.000us 0.00% 8.320us 8.320us 1
aten::zero_ 0.02% 54.334us 3.46% 8.358ms 8.358ms 0.000us 0.00% 8.320us 8.320us 1
aten::fill_ 3.30% 7.984ms 3.44% 8.304ms 8.304ms 8.320us 49.06% 8.320us 8.320us 1
urEnqueueKernelLaunch 0.13% 319.102us 0.13% 319.102us 319.102us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.320us 49.06% 8.320us 8.320us 1
autograd::engine::evaluate_function: torch::autograd... 0.00% 6.131us 5.80% 14.019ms 14.019ms 0.000us 0.00% 8.640us 8.640us 1
torch::autograd::AccumulateGrad 0.02% 42.298us 5.80% 14.013ms 14.013ms 0.000us 0.00% 8.640us 8.640us 1
aten::new_empty_strided 0.00% 9.754us 0.01% 26.166us 26.166us 0.000us 0.00% 0.000us 0.000us 1
aten::empty_strided 0.01% 16.412us 0.01% 16.412us 16.412us 0.000us 0.00% 0.000us 0.000us 1
aten::copy_ 5.73% 13.855ms 5.77% 13.945ms 13.945ms 8.640us 50.94% 8.640us 8.640us 1
urEnqueueKernelLaunch 0.04% 90.206us 0.04% 90.206us 90.206us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.640us 50.94% 8.640us 8.640us 1
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 241.593ms
Self XPU time total: 16.960us
```
### Versions
Pytorch pin: `1e881ceecfe80532206ca4e0acb64391fab8b935`.
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
|
oncall: profiler
|
low
|
Critical
|
2,787,670,769
|
godot
|
2D Pathfinding doesn't make shortest paths on tilemaps
|
### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6636) - 11th Gen Intel(R) Core(TM) i5-11600K @ 3.90GHz (12 Threads)
### Issue description
When trying to implement pathfinding via NavigationAgent2D on a NavigationLayer of a TileMapLayer, the generated path is not the shortest most of the time.

Here on the screenshot you can see grass and water tiles, with the navigation layer properly drawn on the grass tiles, and the generated pathfinding route.
### Steps to reproduce
Use NavigationAgent2D and a TileMapLayer with active Navigation Layers, with such a layer drawn on the _walkable_ tiles. Generate a path using `navigation_agent.target_position = Vector2..`
### Minimal reproduction project (MRP)
[tilemap_pathfinding.zip](https://github.com/user-attachments/files/18413183/tilemap_pathfinding.zip)
|
topic:navigation
|
low
|
Minor
|
2,787,710,102
|
pytorch
|
Adding Infiniband to RDZV Backend for optimal torch run training
|
### 📚 The doc issue
The documentation could include more information on optimally using InfiniBand for running ML trainings with torchrun. It would be helpful to add the bash commands showing users how to set the RDZV host to the InfiniBand URL. *For Nvidia GPUs*
**Infiniband URL**
One can run `ifconfig` and parse the list to look for an interface whose name starts with `ib` (for example `ib0` with an inet address of 10.1.0.12).
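The interface lookup above can be sketched as a small shell helper (a sketch under the assumption that InfiniBand interfaces are named `ib0`, `ib1`, ... and that `ip -4 -o addr show` output is available; shown here against a captured sample so it runs anywhere):

```bash
#!/bin/sh
# Captured sample of `ip -4 -o addr show` output; the ib0 address 10.1.0.12
# matches the example above.
sample='2: eth0    inet 192.168.1.5/24 brd 192.168.1.255 scope global eth0
3: ib0    inet 10.1.0.12/16 brd 10.1.255.255 scope global ib0'

# Pick the IPv4 address of the first interface whose name starts with "ib".
master_addr=$(printf '%s\n' "$sample" | awk '$2 ~ /^ib/ {print $4}' | cut -d/ -f1 | head -n1)
echo "$master_addr"
```

On a live node you would pipe `ip -4 -o addr show` directly instead of the captured sample.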
**Mellanox Ports and Link Layer**
Run `ibstat` to get the list of Mellanox ports and the link layer they are using.
To verify whether the Mellanox ports are active, run `ibv_devinfo`.
**Ping the port**
If you do not have the RDMA over InfiniBand protocol set up, run `ping 10.x.0.y` to see if you can reach the InfiniBand connection. If you have RDMA installed, you can verify the write latency using `ib_write_lat 10.x.0.y` (to verify write bandwidth, run `ib_write_bw 10.x.0.y`).
**Understanding Mellanox configs**
Section 7.7.1 of the A100 manual (https://docs.nvidia.com/dgx/pdf/dgxa100-user-guide.pdf) covers the Mellanox configs; run `sudo mlxconfig -e query | egrep -e Device\|LINK_TYPE` to see whether you're using InfiniBand or Ethernet.
Run `sudo mlxconfig -y -d <device-path> set LINK_TYPE_P1=<config-number>` to change the link type to IB.
Finally, in your `~/.bashrc`, `export MASTER_ADDR="10.x.0.y"`, and set your RDZV_ENDPOINT to $MASTER_ADDR:$MASTER_PORT. Our master port values are usually 8001 or 5001 (similar to the ones a React dev would use, just a personal preference).
```bash
torchrun \
  --nnodes 3 \
  --nproc_per_node 8 \
  --node_rank $NODE_RANK \
  --max-restarts $NUM_ALLOWED_FAILURES \
  --rdzv-id $RDZV_ID \
  --rdzv-backend $RDZV_BACKEND \
  --rdzv-endpoint $RDZV_ENDPOINT \
  scripts/train/pretrain.py
```
Maybe these could be incorporated into the documentation? Feel free to reach out to me personally to talk more about optimized training, including setting up FlashAttention, BFloat16, WebDatasets, reading directly from RAID, FSDP, etc.
<img width="1052" alt="Image" src="https://github.com/user-attachments/assets/919664b1-ba40-4c6f-a7c9-280a8c3d275a" />
`https://pytorch.org/docs/stable/elastic/run.html`
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @brycebortree @sekyondaMeta @AlannaBurke
|
oncall: distributed,module: docs
|
low
|
Critical
|
2,787,827,504
|
vscode
|
The window terminated unexpectedly
|
Cannot start VS Code after the most recent apt upgrade. I always get "The window terminated unexpectedly (reason: 'crashed', code: '132')".
Already tried rebooting the machine; it did not help.
- VS Code Version:
```bash
$ code --version
1.96.3
91fbdddc47bc9c09064bf7acf133d22631cbf083
x64
```
- OS Version:
```bash
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
```
|
info-needed,freeze-slow-crash-leak
|
low
|
Critical
|
2,787,863,701
|
langchain
|
AttributeError while importing langchain_core.prompts: module 'requests' has no attribute 'auth'
|
### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```Python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain_core.messages import AIMessage,HumanMessage,SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[15], line 1
----> 1 from langchain_core.prompts import ChatPromptTemplate
2 from langchain_core.prompts.chat import MessagesPlaceholder
3 from langchain_core.messages import AIMessage,HumanMessage,SystemMessage
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\prompts\__init__.py:28
1 """**Prompt** is the input to the model.
2
3 Prompt is often constructed
(...)
25
26 """ # noqa: E501
---> 28 from langchain_core.prompts.base import (
29 BasePromptTemplate,
30 aformat_document,
31 format_document,
32 )
33 from langchain_core.prompts.chat import (
34 AIMessagePromptTemplate,
35 BaseChatPromptTemplate,
(...)
40 SystemMessagePromptTemplate,
41 )
42 from langchain_core.prompts.few_shot import (
43 FewShotChatMessagePromptTemplate,
44 FewShotPromptTemplate,
45 )
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\prompts\base.py:26
24 from langchain_core.exceptions import ErrorCode, create_message
25 from langchain_core.load import dumpd
---> 26 from langchain_core.output_parsers.base import BaseOutputParser
27 from langchain_core.prompt_values import (
28 ChatPromptValueConcrete,
29 PromptValue,
30 StringPromptValue,
31 )
32 from langchain_core.runnables import RunnableConfig, RunnableSerializable
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\output_parsers\__init__.py:16
1 """**OutputParser** classes parse the output of an LLM call.
2
3 **Class hierarchy:**
(...)
13 Serializable, Generation, PromptValue
14 """ # noqa: E501
---> 16 from langchain_core.output_parsers.base import (
17 BaseGenerationOutputParser,
18 BaseLLMOutputParser,
19 BaseOutputParser,
20 )
21 from langchain_core.output_parsers.json import JsonOutputParser, SimpleJsonOutputParser
22 from langchain_core.output_parsers.list import (
23 CommaSeparatedListOutputParser,
24 ListOutputParser,
25 MarkdownListOutputParser,
26 NumberedListOutputParser,
27 )
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\output_parsers\base.py:16
5 from typing import (
6 TYPE_CHECKING,
7 Any,
(...)
11 Union,
12 )
14 from typing_extensions import override
---> 16 from langchain_core.language_models import LanguageModelOutput
17 from langchain_core.messages import AnyMessage, BaseMessage
18 from langchain_core.outputs import ChatGeneration, Generation
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\language_models\__init__.py:50
1 """**Language Model** is a type of model that can generate text or complete
2 text prompts.
3
(...)
39
40 """ # noqa: E501
42 from langchain_core.language_models.base import (
43 BaseLanguageModel,
44 LangSmithParams,
(...)
48 get_tokenizer,
49 )
---> 50 from langchain_core.language_models.chat_models import BaseChatModel, SimpleChatModel
51 from langchain_core.language_models.fake import FakeListLLM, FakeStreamingListLLM
52 from langchain_core.language_models.fake_chat_models import (
53 FakeListChatModel,
54 FakeMessagesListChatModel,
55 GenericFakeChatModel,
56 ParrotFakeChatModel,
57 )
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\language_models\chat_models.py:33
31 from langchain_core._api import deprecated
32 from langchain_core.caches import BaseCache
---> 33 from langchain_core.callbacks import (
34 AsyncCallbackManager,
35 AsyncCallbackManagerForLLMRun,
36 BaseCallbackManager,
37 CallbackManager,
38 CallbackManagerForLLMRun,
39 Callbacks,
40 )
41 from langchain_core.globals import get_llm_cache
42 from langchain_core.language_models.base import (
43 BaseLanguageModel,
44 LangSmithParams,
45 LanguageModelInput,
46 )
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\callbacks\__init__.py:23
10 from langchain_core.callbacks.base import (
11 AsyncCallbackHandler,
12 BaseCallbackHandler,
(...)
20 ToolManagerMixin,
21 )
22 from langchain_core.callbacks.file import FileCallbackHandler
---> 23 from langchain_core.callbacks.manager import (
24 AsyncCallbackManager,
25 AsyncCallbackManagerForChainGroup,
26 AsyncCallbackManagerForChainRun,
27 AsyncCallbackManagerForLLMRun,
28 AsyncCallbackManagerForRetrieverRun,
29 AsyncCallbackManagerForToolRun,
30 AsyncParentRunManager,
31 AsyncRunManager,
32 BaseRunManager,
33 CallbackManager,
34 CallbackManagerForChainGroup,
35 CallbackManagerForChainRun,
36 CallbackManagerForLLMRun,
37 CallbackManagerForRetrieverRun,
38 CallbackManagerForToolRun,
39 ParentRunManager,
40 RunManager,
41 adispatch_custom_event,
42 dispatch_custom_event,
43 )
44 from langchain_core.callbacks.stdout import StdOutCallbackHandler
45 from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
File d:\BTP Work\virtual_environment\Lib\site-packages\langchain_core\callbacks\manager.py:23
12 from typing import (
13 TYPE_CHECKING,
14 Any,
(...)
19 cast,
20 )
21 from uuid import UUID
---> 23 from langsmith.run_helpers import get_tracing_context
24 from tenacity import RetryCallState
26 from langchain_core.callbacks.base import (
27 BaseCallbackHandler,
28 BaseCallbackManager,
(...)
34 ToolManagerMixin,
35 )
File d:\BTP Work\virtual_environment\Lib\site-packages\langsmith\run_helpers.py:45
15 from typing import (
16 TYPE_CHECKING,
17 Any,
(...)
40 runtime_checkable,
41 )
43 from typing_extensions import Annotated, ParamSpec, TypeGuard, get_args, get_origin
---> 45 from langsmith import client as ls_client
46 from langsmith import run_trees, schemas, utils
47 from langsmith._internal import _aiter as aitertools
File d:\BTP Work\virtual_environment\Lib\site-packages\langsmith\client.py:62
60 import requests
61 from requests import adapters as requests_adapters
---> 62 from requests_toolbelt import ( # type: ignore[import-untyped]
63 multipart as rqtb_multipart,
64 )
65 from typing_extensions import TypeGuard, overload
66 from urllib3.poolmanager import PoolKey # type: ignore[attr-defined, import-untyped]
File d:\BTP Work\virtual_environment\Lib\site-packages\requests_toolbelt\__init__.py:13
2 """
3 requests-toolbelt
4 =================
(...)
9 :license: Apache v2.0, see LICENSE for more details
10 """
12 from .adapters import SSLAdapter, SourceAddressAdapter
---> 13 from .auth.guess import GuessAuth
14 from .multipart import (
15 MultipartEncoder, MultipartEncoderMonitor, MultipartDecoder,
16 ImproperBodyPartContentException, NonMultipartContentTypeException
17 )
18 from .streaming_iterator import StreamingIterator
File d:\BTP Work\virtual_environment\Lib\site-packages\requests_toolbelt\auth\guess.py:6
3 from requests import auth
4 from requests import cookies
----> 6 from . import _digest_auth_compat as auth_compat, http_proxy_digest
9 class GuessAuth(auth.AuthBase):
10 """Guesses the auth type by the WWW-Authentication header."""
File d:\BTP Work\virtual_environment\Lib\site-packages\requests_toolbelt\auth\_digest_auth_compat.py:17
13 def __set__(self, obj, value):
14 setattr(obj._thread_local, self.prop, value)
---> 17 class _HTTPDigestAuth(requests.auth.HTTPDigestAuth):
18 init = _ThreadingDescriptor('init', True)
19 last_nonce = _ThreadingDescriptor('last_nonce', '')
AttributeError: module 'requests' has no attribute 'auth'
### Description
* I'm just trying to import the langchain_core.prompts library for basic prompting practice.
* Upon importing, it says the `requests` module has no attribute `auth`.
### System Info
`python -m langchain_core.sys_info` yields :
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_experimental: 0.3.4
> langchain_google_genai: 2.0.8
> langchain_ollama: 0.2.2
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.4
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> filetype: 1.2.0
> google-generativeai: 0.8.3
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> ollama: 0.4.5
> openai: 1.59.3
> orjson: 3.10.13
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
additionally , `pip freeze` yielded :
aiohappyeyeballs==2.4.4
aiohttp==3.11.11
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.8.0
asttokens==3.0.0
attrs==24.3.0
cachetools==5.5.0
certifi==2024.12.14
charset-normalizer==3.4.1
colorama==0.4.6
comm==0.2.2
dataclasses-json==0.6.7
debugpy==1.8.11
decorator==5.1.1
distro==1.9.0
executing==2.1.0
filelock==3.16.1
filetype==1.2.0
frozenlist==1.5.0
fsspec==2024.12.0
google-ai-generativelanguage==0.6.10
google-api-core==2.24.0
google-api-python-client==2.157.0
google-auth==2.37.0
google-auth-httplib2==0.2.0
google-generativeai==0.8.3
googleapis-common-protos==1.66.0
greenlet==3.1.1
grpcio==1.69.0
grpcio-status==1.69.0
h11==0.14.0
httpcore==1.0.7
httplib2==0.22.0
httpx==0.27.2
httpx-sse==0.4.0
huggingface-hub==0.27.1
idna==3.10
ipykernel==6.29.5
ipython==8.31.0
jedi==0.19.2
Jinja2==3.1.5
jiter==0.8.2
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.3
jupyter_core==5.7.2
langchain==0.3.14
langchain-community==0.3.14
langchain-core==0.3.29
langchain-experimental==0.3.4
langchain-google-genai==2.0.8
langchain-ollama==0.2.2
langchain-openai==0.3.0
langchain-text-splitters==0.3.4
langgraph==0.2.62
langgraph-checkpoint==2.0.9
langgraph-sdk==0.1.48
langsmith==0.2.10
MarkupSafe==3.0.2
marshmallow==3.24.1
matplotlib-inline==0.1.7
mpmath==1.3.0
msgpack==1.1.0
multidict==6.1.0
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.4.2
numpy==1.26.4
ollama==0.4.5
openai==1.59.3
orjson==3.10.13
packaging==24.2
pandas==2.2.3
parso==0.8.4
pillow==11.1.0
platformdirs==4.3.6
prompt_toolkit==3.0.48
propcache==0.2.1
proto-plus==1.25.0
protobuf==5.29.2
psutil==6.1.1
pure_eval==0.2.3
pyasn1==0.6.1
pyasn1_modules==0.4.1
pydantic==2.10.4
pydantic-settings==2.7.1
pydantic_core==2.27.2
Pygments==2.19.1
pyparsing==3.2.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.2
pywin32==308
PyYAML==6.0.2
pyzmq==26.2.0
regex==2024.11.6
requests==2.32.3
requests-toolbelt==1.0.0
rsa==4.9
safetensors==0.5.0
scikit-learn==1.6.0
scipy==1.15.0
sentence-transformers==3.3.1
six==1.17.0
sniffio==1.3.1
SQLAlchemy==2.0.36
stack-data==0.6.3
sympy==1.13.1
tenacity==9.0.0
threadpoolctl==3.5.0
tiktoken==0.8.0
tokenizers==0.21.0
torch==2.5.1
tornado==6.4.2
tqdm==4.67.1
traitlets==5.14.3
transformers==4.48.0
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
uritemplate==4.1.1
urllib3==2.3.0
wcwidth==0.2.13
yarl==1.18.3
|
🤖:bug,investigate,Ɑ: core
|
low
|
Critical
|
2,787,900,922
|
vscode
|
Notebook: NotebookEditorSelectionChangeEvent triggers twice
|
Originally reported in https://github.com/microsoft/vscode-discussions/discussions/1911
> I’m encountering an issue where the NotebookEditorSelectionChangeEvent triggers twice, making it challenging to determine the current selection
|
bug
|
low
|
Minor
|
2,787,933,126
|
godot
|
_make_custom_tooltip() without tooltip text doesn't take over when mouse_filter is Pass.
|
### Tested versions
4.4.dev7
### System information
Godot v4.4.dev7 - Pop!_OS 22.04 LTS on X11 - X11 display driver, Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Mesa Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i5-1235U (12 threads)
### Issue description
https://github.com/godotengine/godot/pull/97961 was supposed to make `_make_custom_tooltip()` work even without custom tooltip text. But I found that this doesn't work if Mouse Filter is set to Pass. It does work with the same mouse filter when tooltip_text isn't an empty string. So the implementation of the aforementioned PR fails to work in this specific scenario.
### Steps to reproduce
Implement `_make_custom_tooltip()` on a Control, say a Button, and have it not return null. Set mouse_filter to MOUSE_FILTER_PASS. The custom tooltip won't appear.
### Minimal reproduction project (MRP)
[tooltipbug.zip](https://github.com/user-attachments/files/18414260/tooltipbug.zip)
|
bug,topic:gui
|
low
|
Critical
|
2,787,933,848
|
react-native
|
[ios] Inverse scale transforms results in pixelated blurred poor resolution views
|
### Description
We have a production app with a pinch gesture handler that allows pinch zooming on a map. When the map is zoomed in (scaled up), the child POIs are scaled down to preserve the sizing of POIs. For example, if the parent View is scaled by 100, the child POI View is scaled by 0.01 to counteract the transform and preserve sizing. This works perfectly on Android; however, iOS results in a very pixelated render of the child POI View. It appears that on iOS, when a View has nested scale transforms applied such that they are inverse, the resulting render is pixelated. On Android the View appears high resolution as expected. (See screenshots below.)
See main code in `app/{tabs}/index.jsx` of repro.
Related:
https://github.com/facebook/react-native/issues/42837
### Steps to reproduce
Both Architectures: Paper & Fabric
IOS
1. Install dependencies `npm install` and `npx pod-install`
2. Run `npx react-native run-ios` or `npm start` and open on ios expo
3. Notice blurred blue View when it should be a solid red square with blue border (bug)
ANDROID
1. Install dependencies `npm install`
2. Run `npx react-native run-android`
3. Notice solid red square with blue border (expected)
### React Native Version
0.76.6
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6
CPU: (16) arm64 Apple M3 Max
Memory: 145.11 MB / 64.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.20.5
path: ~/.nvm/versions/node/v18.20.5/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v18.20.5/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /Users/marcbourguignon/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2021.2 AI-212.5712.43.2112.8609683
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.4.1
path: /Users/marcbourguignon/.sdkman/candidates/java/current/bin/javac
Ruby:
version: 2.7.5
path: /Users/marcbourguignon/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.3
wanted: ^15.1.3
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
No crash; rendering bug
```
### Reproducer
https://github.com/wen-kai/expo-sandbox/tree/scale-bug
### Screenshots and Videos
Actual (iOS):

Expected (Android):

|
Platform: iOS,Issue: Author Provided Repro,API: Transforms,Type: New Architecture
|
low
|
Critical
|
2,787,943,083
|
PowerToys
|
[RegistryPreview] Keyboard shortcut for standalone mode
|
### Description of the new feature / enhancement
It is possible to use Registry Preview in standalone mode, for example to preview data pasted from the clipboard or to open a file from within the window.
A shortcut for opening Registry Preview would be cool.
### Scenario when this would be used?
Fast opening the app.
### Supporting information
_No response_
|
Idea-Enhancement,Needs-Triage,Product-Registry Preview
|
low
|
Minor
|
2,787,963,735
|
PowerToys
|
[Registry Preview] Save as behavior on save button for temp file
|
### Description of the new feature / enhancement
Instead of showing a save error when clicking the save button after opening Registry Preview **without a file**

**the "Save" button should behaves like "Save as". (The same as MS Office or Notepad behaves.)**
### Scenario when this would be used?
Improve the user experience.
### Supporting information
_No response_
|
Idea-Enhancement,Area-User Interface,Cost-Small,Product-Registry Preview
|
low
|
Critical
|
2,787,964,534
|
flutter
|
[windows] Unhandled exception at 0x00007FFC4362F8A8 (flutter_windows.dll) in myapp.exe: An invalid parameter was passed to a function that considers invalid parameters fatal.
|
### Steps to reproduce
Hello, I am having a native Windows crash in a pretty large enterprise app. I have captured a minidump with heap using Visual Studio, which I am attaching. I am not sure what on the Dart side causes the crash, so to say; I cannot pinpoint in code what the app is doing that causes the crash. There are lots of things running, some in parallel.
Edit: I have provided a minimal reproducible example below.
This is the native dump zipped [download](https://intergraphro-my.sharepoint.com/:u:/g/personal/iulian_ingr_ro/EfPa6GTXvyBFk3nq0C3ptj4BxQz8HYh44-4G5ZuMl7La5A?e=5jz0tg)
### Expected results
I expect the Flutter engine not to crash.
### Actual results
The Flutter native engine crashes unexpectedly.
### Code sample
I cannot provide a minimum reproducible example because I don't know where it is crashing and the app codebase is very large. Furthermore, the crash is on the native side, where I, as a Flutter framework user, have no access nor any desire to have access.
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.27.2, on Microsoft Windows [Version 10.0.19045.5247], locale en-GB)
• Flutter version 3.27.2 on channel stable at C:\fluttersdk
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (24 hours ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\AndroidSDK\androidSDK
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = C:\AndroidSDK\androidSDK
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.12.3)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.12.35527.113
• Windows 10 SDK version 10.0.20348.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] IntelliJ IDEA Community Edition (version 2024.2)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2024.2.3
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
[√] VS Code (version 1.96.3)
• VS Code at C:\Users\csman\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version
10.0.19045.5247]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.112
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
waiting for customer response,platform-windows,a: desktop,team-windows
|
low
|
Critical
|
2,787,969,637
|
kubernetes
|
https://console.cloud.google.com/storage/browser/kubernetes-release/release/ does not have latest releases
|
### What happened?
I have noticed that only alpha versions of K8s 1.32.x are available on https://console.cloud.google.com/storage/browser/kubernetes-release/release/, and stable releases only up to K8s 1.31.0. The latest update to this bucket seems to have been months ago. Is this related to https://github.com/kubernetes/kubernetes/issues/127595 / https://github.com/kubernetes/kubernetes/issues/127350 ?
### What did you expect to happen?
Latest releases including up to K8s 1.31.4 and K8s 1.32.0 should be available at https://console.cloud.google.com/storage/browser/kubernetes-release/release/
### How can we reproduce it (as minimally and precisely as possible)?
visit https://console.cloud.google.com/storage/browser/kubernetes-release/release/ and see missing releases
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
kind/bug,needs-sig,needs-triage
|
low
|
Minor
|
2,787,974,035
|
PowerToys
|
[RegistryPreview] Crash clicking save on message when closing
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update, GitHub
### Running as admin
No
### Area(s) with issue?
Registry Preview
### Steps to reproduce
1. Open Registry Preview without file.
2. Create content for a file.
3. Close app window.
4. Click "save" button.
### ✔️ Expected Behavior
Error message or save as dialog.
### ❌ Actual Behavior
App crashes.
### Other Software
_No response_
|
Issue-Bug,Priority-3,Status-Reproducible,Product-Registry Preview
|
low
|
Critical
|
2,787,996,172
|
yt-dlp
|
[tubitv] An extractor error has occurred (caused by KeyError(`video_id`)) tubi
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
I have found references to extractor errors, but not for this site. Downloads were working yesterday (1/13/2025). I have tried multiple URLs, including one that I successfully downloaded yesterday.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--retries', 'infinite', '--socket-timeout', '10', '-o', 'E:\\vidl\\uncategorized\\%(title)s.%(ext)s', 'https://tubitv.com/movies/100025513/seven-days']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds [dade5e35c] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds)
[tubitv] Extracting URL: https://tubitv.com/movies/100025513/seven-days
[tubitv] 100025513: Downloading webpage
ERROR: 100025513: An extractor error has occurred. (caused by KeyError('100025513')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\tubitv.py", line 100, in _real_extract
KeyError: '100025513'
```
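The `KeyError('100025513')` in the traceback suggests the lookup in `tubitv.py` indexes parsed page data by video ID and the site's payload shape changed. A hypothetical illustration of that failure mode (the names `loader_data` and `byId` are assumptions for the sketch, not the real Tubi payload):

```python
# Hypothetical sketch of the failure: the extractor indexes parsed page JSON
# by the video ID, and the lookup raises KeyError once the site changes its
# payload shape. "loader_data" and "byId" are illustrative names only.
loader_data = {"byId": {}}  # assumed shape; the real payload changed upstream
video_id = "100025513"

try:
    content = loader_data["byId"][video_id]  # mirrors the failing lookup
except KeyError:
    # A tolerant extractor would fall back or raise a descriptive error here
    # instead of letting the bare KeyError escape.
    content = None

assert content is None
```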
|
geo-blocked,site-bug
|
low
|
Critical
|
2,788,028,918
|
pytorch
|
torch.compile() within TorchDispatchMode always causes an unknown guard failure.
|
### 🐛 Describe the bug
When I run torch.compile() under an "infra" TorchDispatchMode, it seems that a recompile always happens, but I don't know what guard is failing:
```
import torch
from torch.overrides import TorchFunctionMode
from torch.utils._python_dispatch import TorchDispatchMode
from torch._dynamo import config


class MyFunctionMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args, kwargs=None):
        return func(*args, **(kwargs or {}))


class MyDispatchMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args, kwargs=None):
        return func(*args, **(kwargs or {}))

    @classmethod
    def is_infra_mode(cls):
        return True


def f(x, y):
    return x @ y


x = torch.ones(10, device="cuda")
mode = MyFunctionMode()
f_compiled = torch.compile(f, backend="eager")

for i in range(2):
    if i == 0:
        config.error_on_recompile = False
    if i == 1:
        config.error_on_recompile = True
    with mode:
        f_compiled(x, x)

mode = MyDispatchMode()
for i in range(2):
    if i == 0:
        config.error_on_recompile = False
    if i == 1:
        config.error_on_recompile = True
    with mode:
        f_compiled(x, x)
```
Running the above script on top-of-tree pytorch gives the following error message:
```
I0114 18:25:17.922947 2151712 torch/_dynamo/utils.py:1521] [0/0] ChromiumEventLogger initialized with id eeb788f2-8d2b-4de5-adf7-22df55d8491d
I0114 18:25:17.924832 2151712 torch/_dynamo/symbolic_convert.py:2744] [0/0] Step 1: torchdynamo start tracing f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:17.925518 2151712 torch/fx/experimental/symbolic_shapes.py:3243] [0/0] create_env
I0114 18:25:17.946520 2151712 torch/_dynamo/symbolic_convert.py:3066] [0/0] Step 1: torchdynamo done tracing f (RETURN_VALUE)
I0114 18:25:17.950973 2151712 torch/_dynamo/output_graph.py:1460] [0/0] Step 2: calling compiler function eager
I0114 18:25:17.951271 2151712 torch/_dynamo/output_graph.py:1465] [0/0] Step 2: done compiler function eager
I0114 18:25:17.954654 2151712 torch/fx/experimental/symbolic_shapes.py:4623] [0/0] produce_guards
I0114 18:25:17.956163 2151712 torch/_dynamo/pgo.py:647] [0/0] put_code_state: no cache key, skipping
I0114 18:25:17.956523 2151712 torch/_dynamo/convert_frame.py:1078] [0/0] run_gc_after_compile: running gc
I0114 18:25:17.984054 2151712 torch/_dynamo/symbolic_convert.py:2744] [0/1] Step 1: torchdynamo start tracing f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:17.984482 2151712 torch/fx/experimental/symbolic_shapes.py:3243] [0/1] create_env
I0114 18:25:17.988030 2151712 torch/_dynamo/symbolic_convert.py:3066] [0/1] Step 1: torchdynamo done tracing f (RETURN_VALUE)
I0114 18:25:17.989872 2151712 torch/_dynamo/output_graph.py:1460] [0/1] Step 2: calling compiler function eager
I0114 18:25:17.990141 2151712 torch/_dynamo/output_graph.py:1465] [0/1] Step 2: done compiler function eager
I0114 18:25:17.992269 2151712 torch/fx/experimental/symbolic_shapes.py:4623] [0/1] produce_guards
I0114 18:25:17.993348 2151712 torch/_dynamo/pgo.py:647] [0/1] put_code_state: no cache key, skipping
I0114 18:25:17.993675 2151712 torch/_dynamo/convert_frame.py:1078] [0/1] run_gc_after_compile: running gc
Traceback (most recent call last):
File "/home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py", line 44, in <module>
f_compiled(x, x)
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 1422, in __call__
return self._torchdynamo_orig_callable(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 1203, in __call__
result = self._inner_convert(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 569, in __call__
return _compile(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 920, in _compile
recompile_reasons = get_and_maybe_log_recompilation_reason(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/guards.py", line 2780, in get_and_maybe_log_recompilation_reason
raise exc.RecompileError(message)
torch._dynamo.exc.RecompileError: Recompiling function f in /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
triggered by the following guard failure(s):
- 0/1:
- 0/0: ___check_torch_function_mode_stack()
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] ]
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] TorchDynamo compilation metrics:
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] Function, Runtimes (s)
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] _compile.compile_inner, 0.0418
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] OutputGraph.call_user_compiler, 0.0016
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] gc, 0.0016
```
You can see from this section that three compiles happen: the first compile under MyTorchFunctionMode, the first compile under MyTorchDispatchMode, and the second compile under MyTorchDispatchMode:
```
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] ]
```
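For intuition, a plain-Python sketch of what a guard like `___check_torch_function_mode_stack()` conceptually does (this is NOT Dynamo's real guard machinery, just an illustration): a cache entry compiled under one mode stack is rejected when invoked under a different stack, which forces the recompile seen above.

```python
# Conceptual sketch only -- not the actual Dynamo guard implementation.
# The guard compares the mode stack recorded at compile time against the
# stack active at call time; a mismatch invalidates the cached entry.
compile_time_stack = ["MyFunctionMode"]  # stack during the first compile
call_time_stack = ["MyDispatchMode"]     # stack inside `with MyDispatchMode()`

def check_mode_stack(expected, actual):
    # The guard passes only if the stacks match exactly.
    return expected == actual

# Same stack -> cache hit; different stack -> recompile.
assert check_mode_stack(compile_time_stack, ["MyFunctionMode"])
assert not check_mode_stack(compile_time_stack, call_time_stack)
```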
@mlazos since you worked on #131828, do you know if this is expected? I'm asking for reasons related to #140979: https://github.com/pytorch/pytorch/pull/140979/files#r1877221096
I realized just after linking to that comment that it contains a brief answer to my question, but I'm filing this issue nonetheless for documentation purposes.
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+gitcd1b9e4
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 25
On-line CPU(s) list: 0-24
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 25
Stepping: 1
BogoMIPS: 5491.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor fsrm flush_l1d
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 800 KiB (25 instances)
L1i cache: 800 KiB (25 instances)
L2 cache: 25 MiB (25 instances)
L3 cache: 800 MiB (25 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-24
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+gitcd1b9e4
[pip3] triton==3.2.0+git35c6c7c6
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+gitcd1b9e4 dev_0 <develop>
[conda] triton 3.2.0+git35c6c7c6 pypi_0 pypi
```
cc @Chillee @ezyang @zou3519 @albanD @samdow @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
|
triaged,module: __torch_dispatch__,oncall: pt2,module: dynamo
|
low
|
Critical
|
2,788,046,079
|
PowerToys
|
[Registry Preview] Enhanced preview windows for value data
|
### Description of the new feature / enhancement
Currently the value data preview exists only in the data grid cell, and #36631 implements a tooltip for a better preview (especially for multi-line data).
It would be great to have a button in the cell that shows a content dialog with a read-only text box. That would allow copying the text from the preview and would be more accessible.
### Scenario when this would be used?
When previewing registry value data and especially for REG_MULTI_SZ.
### Supporting information
The text box has to support line breaks only for `REG_MULTI_SZ`.
|
Idea-Enhancement,Area-User Interface,Needs-Triage,Product-Registry Preview
|
low
|
Minor
|
2,788,081,105
|
PowerToys
|
Mouse Without Borders mouse rubberbanding with 1000hz/polling rate for mouse and keyboards
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Mouse Without Borders does not seem to work well with a high polling rate for mice, and possibly for keyboards as well.
At 1000 Hz my mouse gets laggy in Mouse Without Borders, but it works perfectly at 500 Hz and 125 Hz.
I even ran a mouse polling rate test on my second PC with the mouse set to 1000 Hz, and the highest (peak) rate it registered was 700 Hz.
If nothing else works, an easy fix could be a "Limit mouse polling rate" button in Mouse Without Borders that limits the polling rate to 125 Hz for both mouse and keyboard, and restores the original rate when the feature is turned off.
I'm also not 100% sure, but I think a high keyboard polling rate might make typing a bit buggy at times when typing fast.
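The "limit polling rate" idea above amounts to coalescing raw input events, e.g. forwarding at most one update per 8 ms (~125 Hz). A minimal sketch of that throttling (hypothetical, not how Mouse Without Borders is actually implemented):

```python
# Hypothetical sketch of a "limit mouse polling rate" option: drop events
# that arrive sooner than the configured interval since the last forwarded one.
class Throttle:
    def __init__(self, interval_ms):
        self.interval = interval_ms
        self.last = None  # timestamp of the last forwarded event

    def allow(self, now_ms):
        if self.last is None or now_ms - self.last >= self.interval:
            self.last = now_ms
            return True  # forward this event to the remote machine
        return False     # coalesce/drop it

t = Throttle(8)  # 8 ms interval ~= 125 Hz
sent = sum(t.allow(ms) for ms in range(1000))  # simulate 1 s of 1000 Hz input
assert sent == 125  # events at 0, 8, 16, ..., 992 ms
```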
### ✔️ Expected Behavior
A gaming mouse with a fairly normal gaming polling rate should be accurate and work well on my second computer with Mouse Without Borders.
### ❌ Actual Behavior
Rubber-banding mouse movements on desktop 2 at a 1000 Hz mouse polling rate; 125 Hz and 500 Hz seem fine, with 125 Hz maybe slightly better than 500 Hz.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|