id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,624,041,021 | rust | Terse diagnostic for never type fallback lint warning involving try operator and placeholder in path | Compiling the following function warns in Rust 1.81 and later:
```rust
pub fn foo() -> std::io::Result<()> {
[1, 2, 3]
.into_iter()
.map(|_| -> std::io::Result<_> { Ok(()) })
.collect::<std::io::Result<_>>()?;
Ok(())
}
```
I expected it to compile without warnings, as it did with previous Rust versions. It currently produces this warning:
```
warning: this function depends on never type fallback being `()`
--> src/lib.rs:1:1
|
1 | pub fn foo() -> std::io::Result<()> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #123748 <https://github.com/rust-lang/rust/issues/123748>
= help: specify the types explicitly
note: in edition 2024, the requirement `!: FromIterator<()>` will fail
--> src/lib.rs:5:20
|
5 | .collect::<std::io::Result<_>>()?;
| ^^^^^^^^^^^^^^^^^^
= note: `#[warn(dependency_on_unit_never_type_fallback)]` on by default
```
[Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=b80e4ca71ff5a21f56fe1122d8bdf734)
What is particularly perplexing about this warning is that I don't understand how the never type is even involved, as nothing in the code appears to diverge.
The current workaround is to add `let () = ...` before the iteration, which compiles without warnings:
```rust
// no warning
pub fn foo() -> std::io::Result<()> {
let () = [1, 2, 3]
.into_iter()
.map(|_| -> std::io::Result<_> { Ok(()) })
.collect::<std::io::Result<_>>()?;
Ok(())
}
```
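For completeness, another workaround that avoids the warning (my suggestion, not from the original report) is to spell out `()` in the turbofish, leaving no inference variable for never-type fallback to fill:

```rust
// Sketch: replace the `_` placeholder with an explicit `()`.
pub fn foo() -> std::io::Result<()> {
    [1, 2, 3]
        .into_iter()
        .map(|_| -> std::io::Result<_> { Ok(()) })
        .collect::<std::io::Result<()>>()?;
    Ok(())
}
```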
While the `let ()` workaround compiles, it feels unnecessary and is hard to explain and justify. | A-lints,A-diagnostics,T-compiler,D-terse,L-dependency_on_unit_never_type_fallback | low | Critical |
2,624,101,663 | tauri | [feat] Option to generate android code as a fragment | ### Describe the problem
I have an Android app that is built entirely around a single activity with multiple fragments. When I want to extend it with the Tauri-generated Android app, I need it to be a fragment instead of an activity.
### Describe the solution you'd like
A config option or flag to generate the Tauri Android code as a fragment instead of an activity.
### Alternatives considered
_No response_
### Additional context
Alternatively, is there a way I can embed the main activity of the Tauri-generated Android code as a fragment in my native code?
| type: feature request | low | Minor |
2,624,103,177 | pytorch | `torch._numpy.ndarray.astype()` does not accept Numpy Dtypes correctly | ### 🐛 Describe the bug
Running
```python
import torch._numpy as xp
import numpy as np
x = xp.arange(4)
y = x.astype(xp.float32)
z = x.astype(np.float32)
```
raises
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sese6502/environments/finest/lib/python3.12/site-packages/torch/_numpy/_ndarray.py", line 307, in astype
torch_dtype = _dtypes.dtype(dtype).torch_dtype
^^^^^^^^^^^^^^^^^^^^
File "/home/sese6502/environments/finest/lib/python3.12/site-packages/torch/_numpy/_dtypes.py", line 271, in dtype
return DType(arg)
^^^^^^^^^^
File "/home/sese6502/environments/finest/lib/python3.12/site-packages/torch/_numpy/_dtypes.py", line 289, in __init__
sctype = arg.dtype._scalar_type
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'getset_descriptor' object has no attribute '_scalar_type'
```
in the last line.
EDIT: Similar things happen if I call `torch._numpy.empty((1,1), dtype=np.float32)`; the code fails at the same location.
However,
```
import cupy as xp
import numpy as np
x = xp.arange(4)
y = x.astype(np.float32)
z = x.astype(xp.float32)
```
works without issues. The same holds true for `import dask as xp`.
So I would expect the NumPy backend of torch to behave the same way here.
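A possible user-side workaround, on the assumption that `torch._numpy`'s `dtype()` entry point accepts plain string names (a different branch than the one that fails on NumPy dtype objects in the traceback above), is to normalize the NumPy dtype to its canonical string name first:

```python
import numpy as np

def normalize_dtype(dtype) -> str:
    """Normalize a NumPy scalar type or dtype object to its canonical
    string name, e.g. np.float32 -> "float32"."""
    return np.dtype(dtype).name

print(normalize_dtype(np.float32))
```

With this helper, `x.astype(normalize_dtype(np.float32))` would hypothetically sidestep the failing branch; I have not verified it against the reporter's environment.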
### Versions
My output of `collect_env.py` is
```
Collecting environment information...
PyTorch version: 2.4.1.post3
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: AlmaLinux release 8.9 (Midnight Oncilla) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-513.18.1.el8_9.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA L40S
GPU 1: NVIDIA L40S
GPU 2: NVIDIA L40S
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7343 16-Core Processor
Stepping: 1
CPU MHz: 3200.000
CPU max MHz: 3940.6250
CPU min MHz: 1500.0000
BogoMIPS: 6400.38
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr $
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] torch==2.4.1.post3
[conda] libtorch 2.4.1 cpu_generic_h169fe36_3 conda-forge
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 2.1.2 py312h58c1407_0 conda-forge
[conda] pytorch 2.4.1 cpu_generic_py312h2b7556c_3 conda-forge
```
cc @mruberry @rgommers | triaged,module: numpy | low | Critical |
2,624,144,481 | vscode | After update, app is completely unusable | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95
- OS Version: Ubuntu 22.04.4 LTS
Steps to Reproduce:
1. Open vscode
2. Screen is gray, buttons are unresponsive, even after reboot.
3. 
| bug,gpu | medium | Critical |
2,624,145,040 | next.js | Multiple Google fonts not accessible globally, when imported in the root layout page, in the same format as the init. Geist fonts. | ### Link to the code that reproduces this issue
https://github.com/winterdelta/font-investigate
### To Reproduce
Initialize a new repo, e.g. via `bun create next-app`. Import from `next/font/google` and add the fonts in the root layout component:
```
import { IBM_Plex_Mono, IBM_Plex_Sans } from "next/font/google";
// font instances referenced below (omitted in the original snippet):
const ibm_mono = IBM_Plex_Mono({ weight: "400", subsets: ["latin"] });
const ibm_sans = IBM_Plex_Sans({ weight: "400", subsets: ["latin"] });
<body className={`${ibm_mono.className} ${ibm_sans.className}`}>
  {children}
</body>
```
Here, whichever font comes first wins:
- mono, then sans: the mono font becomes the default throughout the app.
- sans, then mono: the sans font becomes the default throughout the app.
This happens even if a CSS declaration such as `font-family: var(--font-ibm-plex-mono);` says otherwise.
The second font is not available, e.g. when referenced via CSS variables, and there may be other inconsistent behaviour too.
### Current vs. Expected behavior
On init, two Geist fonts are installed via `create next-app` and declared as a template literal in the root layout. If the user replaces these local Geist fonts with Google fonts **in the same format**, only the first font is accessible; additional fonts are not accessible from nested components.
---
Behaviour for Google fonts should match the behaviour of the Geist local fonts. Adding multiple fonts as template literals into the root body tag should allow access through nested components.
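A possible workaround, based on the CSS-variable pattern in the `next/font` documentation (untested against this repro; weights, subsets, and variable names below are illustrative):

```tsx
// app/layout.tsx — sketch only, not verified against this repro
import { IBM_Plex_Mono, IBM_Plex_Sans } from "next/font/google";

const ibmMono = IBM_Plex_Mono({
  weight: "400",
  subsets: ["latin"],
  variable: "--font-ibm-plex-mono",
});
const ibmSans = IBM_Plex_Sans({
  weight: "400",
  subsets: ["latin"],
  variable: "--font-ibm-plex-sans",
});

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      {/* both variables are exposed to every nested component, which can
          then opt in via font-family: var(--font-ibm-plex-mono) etc. */}
      <body className={`${ibmMono.variable} ${ibmSans.variable}`}>{children}</body>
    </html>
  );
}
```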
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Fri Aug 16 19:20:34 PDT 2024; root:xnu-11215.40.42~4/RELEASE_ARM64_T8112
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 22.2.0
npm: 10.7.0
Yarn: 1.22.21
pnpm: 9.0.6
Relevant Packages:
next: 15.0.3-canary.1 // Latest available version is detected (15.0.3-canary.1).
eslint-config-next: 15.0.1
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Font (next/font)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | bug,Font (next/font) | low | Minor |
2,624,179,767 | pytorch | Draft-mode export: ep.run_decompositions() doesn't run with real tensor prop | Repro: patch in https://github.com/pytorch/pytorch/pull/139213 (needed for an error to show up), then run the following script:
```py
import torch
import torch._functorch.config
@torch.library.custom_op("export::foo", mutates_args={}) # E: Untyped decorator makes fun
def foo(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
return x * y
class Foo(torch.nn.Module):
def forward(self, x, y): # E: Function is missing a type annotation [no-untyped-def]
return foo(x, y)
model = Foo()
inputs = (torch.randn(1, 3), torch.randn(2, 1))
with torch._functorch.config.patch(fake_tensor_propagate_real_tensors=True):
ep = torch.export.export_for_training(model, inputs)
nodes = list(ep.module().graph.nodes)
ep.run_decompositions({})
```
What's going on is that:
- the included PR allows real_tensor_prop to infer a meta kernel for an operator during export
- the exported program does not include real tensors along with the FakeTensors (should it?)
- run_decompositions does some re-tracing without propagate_real_tensors because there are no real tensors
- run_decompositions errors due to no meta (the inferred meta kernel isn't persistent)
A fix could be that "if you're not decomposing the operator, then run_decompositions uses the existing FakeTensors to infer a meta kernel". But there's a more general question of if there are any more downstream graph passes that will have problems due to not having access to the real tensors.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,624,200,267 | pytorch | test/export/test_retraceability.py fails locally, likely flaky | test/export/test_retraceability.py fails locally for me, when running head to toe:
```
FAILED [0.2012s] test/export/test_retraceability.py::RetraceExportTestExport::test_slice_with_floordiv_retraceability - AssertionError: RuntimeError not ra...
FAILED [0.1438s] test/export/test_retraceability.py::RetraceExportNonStrictTestExport::test_slice_with_floordiv_retraceability_non_strict - AssertionError:...
```
When running the tests individually, they pass.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,624,213,683 | storybook | [Bug]: Support of default Angular's inputs | ### Describe the bug
I just updated to the latest Angular (18.2.9) and Storybook (8.3.6) and was surprised to find that it doesn't support signals.
I found a lot of discussions, but they all talk about input and output signals. I'm talking about a simple `signal(...)`.
I found [this](https://github.com/storybookjs/storybook/pull/26413) and [this](https://github.com/storybookjs/storybook/pull/26546).
And I even see that you supported `model()`.
But I don't understand why a regular `signal(...)` is not supported.
Storybook has always allowed you to work with component fields without requiring you to make them inputs.
This makes sense because public fields are what the HTML template uses, while inputs are a contract with the outside world.
### Reproduction link
https://stackblitz.com/edit/github-mbwd5e-mebzvr?file=src%2Fapp%2Fsignal-ex%2Fsignal-ex.stories.ts
### Reproduction steps
example: https://stackblitz.com/edit/github-mbwd5e-mebzvr?file=src%2Fapp%2Fsignal-ex%2Fsignal-ex.stories.ts
Let's say I have a component with a field.
This field can be updated asynchronously. Previously I had to use `BehaviorSubject` for this, and I could override the value from Storybook.
Now I want to use `signal()`.
Storybook does not work with `signal`, but it works with `model`.
``` typescript
@Component({
selector: 'signal-ex',
standalone: true,
imports: [CommonModule],
  template: `Field: {{ field }} <br> Signal: {{ fieldSignal() }} <br> Model: {{ fieldModel() }}`,
styleUrls: [],
})
export class SignalExComponent {
field = false; // <- we can't use fields with ChangeDetection.OnPush
field$ = new BehaviorSubject<boolean>(false);
fieldSignal = signal(false);
fieldModel = model(false);
toggle() {
this.field = true;
this.field$.next(true);
this.fieldSignal.set(true);
this.fieldModel.set(true);
}
}
```
story:
``` typescript
const meta: Meta<SignalExComponent> = {
component: SignalExComponent,
tags: ['autodocs'],
};
export default meta;
type Story = StoryObj<SignalExComponent>;
export const Primary: Story = {
args: {
field: true,
field$: new BehaviorSubject<boolean>(true),
fieldSignal: true as any, // <- doesn't work
fieldModel: true,
},
};
```
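One possible (untested) workaround: since Storybook assigns args as plain properties, passing a fresh signal instance instead of a raw value might replace the whole signal. A sketch, reusing the component above:

```typescript
import { signal } from '@angular/core';

export const SignalOverride: Story = {
  args: {
    // replace the signal instance itself rather than trying to assign a
    // boolean into it (hypothetical workaround, not verified)
    fieldSignal: signal(true) as any,
  },
};
```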
### System
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (20) x64 13th Gen Intel(R) Core(TM) i7-13700H
Binaries:
Node: 20.11.1 - C:\Program Files\nodejs\node.EXE
npm: 10.2.4 - C:\Program Files\nodejs\npm.CMD <----- active
Browsers:
Edge: Chromium (129.0.2792.79)
npmPackages:
@storybook/addon-a11y: ^8.3.5 => 8.3.6
@storybook/addon-actions: ^8.3.5 => 8.3.6
@storybook/addon-essentials: ^8.3.5 => 8.3.6
@storybook/addon-interactions: ^8.3.5 => 8.3.6
@storybook/addon-links: ^8.3.5 => 8.3.6
@storybook/addons: ^7.6.17 => 7.6.17
@storybook/angular: ^8.3.5 => 8.3.6
@storybook/core-server: 8.3.5 => 8.3.5
@storybook/testing-library: 0.2.2 => 0.2.2
chromatic: ^11.12.5 => 11.16.1
msw-storybook-addon: 2.0.3 => 2.0.3
storybook: 8.3.6 => 8.3.6
storybook-addon-pseudo-states: ^4.0.2 => 4.0.2
### Additional context
_No response_ | bug,help wanted,angular | low | Critical |
2,624,350,297 | vscode | still cannot open files or folders in linux 24 | Does this issue occur when all extensions are disabled?: No
- VS Code Version: 1.94.x and the new version 1.95
- OS Version: Linux Ubuntu KDE 24.04
Steps to Reproduce:
1. File -> Open http://shot.screen1.me/dv9qzSzKXyQY8mGqpqpzw4.png
2. The system crashes after opening a folder or file: http://shot.screen1.me/x8TQFwWmkH5xLhdEuUvMDg.png
I am using version 1.93.1 to continue using the IDE normally.
| bug,snap,confirmation-pending,native-file-dialog | low | Critical |
2,624,371,252 | angular | Automating Component Dependency Imports in Angular | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
Automating the import of necessary component dependencies would significantly enhance developer productivity and reduce repetitive code.
Currently, developers are required to manually create an array of essential dependencies that need to be imported for a component to function properly. This process can be tedious and error-prone, as it requires developers to remember which dependencies are necessary and add them to the imports array themselves.
### Proposed solution
Is it possible for Angular to automate this process by generating the imports array based on the component class (using ES6 imports) and the associated template?
For instance, consider the following component:
```
import { CommonModule } from '@angular/common';
import { Component, inject, signal, ChangeDetectionStrategy, viewChild } from '@angular/core';
import { FormControl, ReactiveFormsModule } from '@angular/forms';
import { MatButtonModule } from '@angular/material/button';
import { MatChipsModule } from '@angular/material/chips';
import { MatDialog } from '@angular/material/dialog';
import { MatFormFieldModule } from '@angular/material/form-field';
import { MatIconModule } from '@angular/material/icon';
import { MatInputModule } from '@angular/material/input';
import { MatMenuModule } from '@angular/material/menu';
import { MatPaginator, MatPaginatorModule } from '@angular/material/paginator';
import { MatProgressSpinnerModule } from '@angular/material/progress-spinner';
import { MatSnackBar } from '@angular/material/snack-bar';
import { MatTableDataSource, MatTableModule } from '@angular/material/table';
import { MatToolbarModule } from '@angular/material/toolbar';
import { MatTooltipModule } from '@angular/material/tooltip';
import { Params, RouterModule } from '@angular/router';
@Component({
selector: 'app-user-list',
imports: [
CommonModule,
MatTableModule,
MatPaginatorModule,
MatFormFieldModule,
MatInputModule,
MatToolbarModule,
MatProgressSpinnerModule,
MatButtonModule,
MatIconModule,
RouterModule,
MatTooltipModule,
MatChipsModule,
MatMenuModule,
ReactiveFormsModule,
],
changeDetection: ChangeDetectionStrategy.OnPush,
templateUrl: './user-list.component.html',
styleUrls: ['./user-list.component.scss'],
})
export class UserListComponent {}
```
This component could be defined without manually listing the imports array, reducing redundancy:
```
import { CommonModule } from '@angular/common';
import { Component, inject, signal, ChangeDetectionStrategy, viewChild } from '@angular/core';
import { FormControl, ReactiveFormsModule } from '@angular/forms';
import { MatButtonModule } from '@angular/material/button';
import { MatChipsModule } from '@angular/material/chips';
import { MatDialog } from '@angular/material/dialog';
import { MatFormFieldModule } from '@angular/material/form-field';
import { MatIconModule } from '@angular/material/icon';
import { MatInputModule } from '@angular/material/input';
import { MatMenuModule } from '@angular/material/menu';
import { MatPaginator, MatPaginatorModule } from '@angular/material/paginator';
import { MatProgressSpinnerModule } from '@angular/material/progress-spinner';
import { MatSnackBar } from '@angular/material/snack-bar';
import { MatTableDataSource, MatTableModule } from '@angular/material/table';
import { MatToolbarModule } from '@angular/material/toolbar';
import { MatTooltipModule } from '@angular/material/tooltip';
import { Params, RouterModule } from '@angular/router';
@Component({
selector: 'app-user-list',
changeDetection: ChangeDetectionStrategy.OnPush,
templateUrl: './user-list.component.html',
styleUrls: ['./user-list.component.scss'],
})
export class UserListComponent {}
```
Currently, a significant amount of code is duplicated, and developers must remember to add the necessary dependencies to the imports array manually. Automating this process would streamline development, minimize errors, and allow developers to focus on building features rather than managing boilerplate code.
### Alternatives considered
- Provide an `Angular CLI` command that scans component files and generates the required imports automatically.
- Create an IDE or editor extension that assists developers in identifying and adding required imports as they code. | area: core,cross-cutting: standalone | low | Critical |
2,624,394,493 | langchain | set_llm_cache doesn't work for AgentExecutors | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
set_llm_cache(SQLiteCache("test_llm_cache.sqlite"))
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
model = ChatOpenAI(model="gpt-4o")
@tool
def magic_function(input: int) -> int:
"""Applies a magic function to an input."""
return input + 2
tools = [magic_function]
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
### Error Message and Stack Trace (if applicable)
# Test with Agent
```python3
%%time
agent.invoke({"input":"what is the value of magic_function(3)?","intermediate_steps":[]})
```
_CPU times: user 47.2 ms, sys: 9.25 ms, total: 56.4 ms
Wall time: 1 s_
[ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log="\nInvoking: `magic_function` with `{'input': 3}`\n\n\n", message_log=[AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_LjQj5h58R846IcmFx8KEyZt8', 'function': {'arguments': '{"input":3}', 'name': 'magic_function'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 61, 'total_tokens': 75, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_90354628f2', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-8598a1e7-be43-4b08-88ec-f3200d8a4333-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_LjQj5h58R846IcmFx8KEyZt8', 'type': 'tool_call'}], usage_metadata={'input_tokens': 61, 'output_tokens': 14, 'total_tokens': 75})], tool_call_id='call_LjQj5h58R846IcmFx8KEyZt8')]
```python3
#second time
%%time
agent.invoke({"input":"what is the value of magic_function(3)?","intermediate_steps":[]})
```
_CPU times: user 6.18 ms, sys: 2.82 ms, total: 9.01 ms
Wall time: 8.82 ms_
[ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log="\nInvoking: `magic_function` with `{'input': 3}`\n\n\n", message_log=[AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_LjQj5h58R846IcmFx8KEyZt8', 'function': {'arguments': '{"input":3}', 'name': 'magic_function'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 61, 'total_tokens': 75, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_90354628f2', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-8598a1e7-be43-4b08-88ec-f3200d8a4333-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_LjQj5h58R846IcmFx8KEyZt8', 'type': 'tool_call'}], usage_metadata={'input_tokens': 61, 'output_tokens': 14, 'total_tokens': 75})], tool_call_id='call_LjQj5h58R846IcmFx8KEyZt8')]
# Try with AgentExecutor
```python3
%%time
agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
> Entering new AgentExecutor chain...
> Invoking: `magic_function` with `{'input': 3}`
> The value of `magic_function(3)` is 5.
> Finished chain.
_CPU times: user 54.2 ms, sys: 6.7 ms, total: 60.9 ms
Wall time: 1.65 s_
{'input': 'what is the value of magic_function(3)?',
'output': 'The value of `magic_function(3)` is 5.'}
```python3
%%time
agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
> Entering new AgentExecutor chain...
> Invoking: `magic_function` with `{'input': 3}`
> The value of `magic_function(3)` is 5.
> Finished chain.
_CPU times: user 37.6 ms, sys: 4.93 ms, total: 42.5 ms
Wall time: 1.54 s_
{'input': 'what is the value of magic_function(3)?',
'output': 'The value of `magic_function(3)` is 5.'}
### Description
set_llm_cache doesn't work for AgentExecutors
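To confirm whether the executor path ever writes to or reads from the cache, the SQLite file from the repro can be inspected directly. This is a diagnostic sketch; the table name `full_llm_cache` is an assumption based on `langchain_community`'s SQLAlchemy cache schema:

```python
import os
import sqlite3

def count_cached_prompts(db_path):
    """Return the number of cached prompts in a langchain SQLiteCache file,
    or None if the file does not exist. The table name 'full_llm_cache' is
    assumed from langchain_community's cache schema."""
    if not os.path.exists(db_path):
        return None
    con = sqlite3.connect(db_path)
    try:
        return len(con.execute("SELECT prompt FROM full_llm_cache").fetchall())
    finally:
        con.close()

print(count_cached_prompts("test_llm_cache.sqlite"))
```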
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:36:30 PDT 2024; root:xnu-11215.1.12~1/RELEASE_X86_64
> Python Version: 3.12.7 (main, Oct 1 2024, 02:05:46) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.8
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.130
> langchain_anthropic: 0.2.1
> langchain_aws: 0.2.1
> langchain_groq: 0.2.0
> langchain_mistralai: 0.2.0
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.1
> langchain_text_splitters: 0.3.0
> langchain_together: 0.2.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.8
> anthropic: 0.34.2
> async-timeout: Installed. No version info available.
> boto3: 1.35.32
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> groq: 0.11.0
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.1
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> tokenizers: 0.20.0
> typing-extensions: 4.12.2 | 🤖:bug,investigate | low | Critical |
2,624,448,570 | react-native | [0.76] Error: Cannot find module '@react-native-community/cli-server-api' | ### Description
`react-native bundle` and `react-native start` are currently not correctly registered. After some investigation, this is what's happening:
- `@react-native/community-cli-plugin` requires `@react-native-community/cli-server-api` to register commands like `bundle` and `start`
- However, the former declares it as a peer dependency and also making it optional: https://github.com/facebook/react-native/blob/3c02738ec4c36d8414493ef8f0016a809d849d33/packages/community-cli-plugin/package.json#L36-L41
- The responsibility to fulfill the requirement falls on whoever consumes `@react-native/community-cli-plugin`, which in this case is `react-native`. But `react-native` does not declare dependency on `@react-native-community/cli-server-api`. And since it's marked as optional, package managers don't complain.
- Failing to load `@react-native-community/cli-server-api` means that `bundle` and `start` never gets registered and won't show up in config.
One way to fix this is to make `@react-native-community/cli-server-api` required again and forward that dependency in `react-native`. This would make the consumer responsible for satisfying the dependency. It also means the template needs to be updated to include this dependency. I'm sure there are other alternatives that I have overlooked.
An alternative to that would be to make `@react-native/community-cli-plugin` or `react-native` directly depend on `@react-native-community/cli-server-api`.
A third option would be to move the package to the RN repo and make it part of `@react-native/community-cli-plugin`
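Until one of the above lands, a consumer-side stopgap (my suggestion; the version range is illustrative) is to declare the optional peer directly in the app's own `package.json` so the package manager installs it:

```json
{
  "devDependencies": {
    "@react-native-community/cli-server-api": "^15.0.0"
  }
}
```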
### Steps to reproduce
1. Clone/check out this branch: https://github.com/microsoft/rnx-kit/pull/3409
2. Remove workarounds in `.yarnrc.yml` (the `react-native` section under `resolutions`)
3. Run `yarn`
4. Apply the following fixes directly under `packages/test-app/node_modules/`:
- Cherry-pick https://github.com/facebook/react-native/pull/47304
- Cherry-pick https://github.com/facebook/react-native/pull/47308
5. Run `react-native config` inside `packages/test-app`
6. Verify that `bundle` and `start` are missing
### React Native Version
0.76.1
### Affected Platforms
Build - MacOS, Build - Windows, Build - Linux
### Output of `npx react-native info`
```text
System:
OS: macOS 14.7
CPU: (10) arm64 Apple M1 Max
Memory: 1.54 GB / 64.00 GB
Shell:
version: 3.7.1
path: /opt/homebrew/bin/fish
Binaries:
Node:
version: 20.18.0
path: /private/var/folders/j0/5zfnwvyd4sb15smylklx03m00000gn/T/xfs-27033e9b/node
Yarn:
version: 4.4.0
path: /private/var/folders/j0/5zfnwvyd4sb15smylklx03m00000gn/T/xfs-27033e9b/yarn
npm:
version: 10.8.2
path: ~/.local/bin/npm
Watchman:
version: 2024.09.30.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /Users/tido/.gem/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.4
- iOS 17.4
- macOS 14.4
- tvOS 17.4
- visionOS 1.1
- watchOS 10.4
Android SDK:
API Levels:
- "34"
Build Tools:
- 30.0.3
- 33.0.0
- 33.0.1
- 34.0.0
System Images:
- android-34 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12071903
Xcode:
version: 15.3/15E204a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 3.3.5
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: ^15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: ^0.76.0
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
Error: Cannot find module '@react-native-community/cli-server-api'
Require stack:
- /~/node_modules/.store/@react-native-community-cli-plugin-virtual-29d9c8df52/package/dist/commands/start/runServer.js
- /~/node_modules/.store/@react-native-community-cli-plugin-virtual-29d9c8df52/package/dist/commands/start/index.js
- /~/node_modules/.store/@react-native-community-cli-plugin-virtual-29d9c8df52/package/dist/index.js
- /~/node_modules/.store/react-native-virtual-5d869b466d/package/react-native.config.js
- /~/node_modules/.store/cosmiconfig-virtual-f7d5522c5c/package/dist/loaders.js
- /~/node_modules/.store/cosmiconfig-virtual-f7d5522c5c/package/dist/defaults.js
- /~/node_modules/.store/cosmiconfig-virtual-f7d5522c5c/package/dist/index.js
- /~/node_modules/.store/@react-native-community-cli-config-npm-15.0.0-1758a65588/package/build/readConfigFromDisk.js
- /~/node_modules/.store/@react-native-community-cli-config-npm-15.0.0-1758a65588/package/build/loadConfig.js
- /~/node_modules/.store/@react-native-community-cli-config-npm-15.0.0-1758a65588/package/build/index.js
- /~/node_modules/.store/@react-native-community-cli-npm-15.0.0-2240e43604/package/build/commands/index.js
- /~/node_modules/.store/@react-native-community-cli-npm-15.0.0-2240e43604/package/build/index.js
- /~/node_modules/.store/react-native-virtual-5d869b466d/package/cli.js
```
### Reproducer
https://github.com/microsoft/rnx-kit/pull/3409
### Screenshots and Videos
_No response_ | p: Microsoft,Partner,Never gets stale,0.76 | low | Critical |
2,624,484,398 | react | [compiler] Support annotating hook factories that produce stable hooks | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
nope
### Repro steps
The application I'm developing follows a specific pattern for accessing Redux store data based on parameters. Here's an example:
```jsx
const ProductTile = ({ productId }) => {
const useProductSelector = createUseSelectorWithParam(productId)
const title = useProductSelector(productStore.selectLabel)
const value = useProductSelector(productStore.selectValue)
const brandId = useProductSelector(productStore.selectBrandId)
const useBrandSelector = createUseSelectorWithParam(brandId)
const subTitle = useBrandSelector(brandStore.selectLabel)
return <Tile title={title} value={value} subTitle={subTitle} />
}
```
We currently use this "higher-order function" (HOF) approach in several thousand places across the codebase, with code like the following:
```jsx
export const createUseSelectorWithParam = (param) => {
const useSelectorWithParam = (selector) =>
useSelector((state) => selector(state, param))
return useSelectorWithParam
}
```
This approach reduces code complexity (fewer arrow functions) and enhances team productivity by decreasing informational noise. However, it currently lacks compatibility with the new React Compiler.
**Question**: Is there a way to inform the React Compiler that the result of `createUseSelectorWithParam` should be treated as a stable hook?
We're hesitant to replace thousands of instances across the codebase with arrow functions or to add parameters universally, as it would likely reduce readability (sometimes only one parameter is needed to access Redux store data, but other times two or even three are required). Additionally, with a higher number of parameters, making such extensive changes could lead to more bugs, as manually adjusting parameters in multiple places increases the chance of errors.
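Minus React, the partial application the HOF performs can be sketched in plain JS (the store shape and selector names below are hypothetical, for illustration only):

```js
// Plain-JS sketch of what createUseSelectorWithParam does at call sites:
// it partially applies `param`, so each call only passes the selector.
// (The `state` shape and selectors below are made up for illustration.)
const createSelectWithParam = (state, param) => (selector) => selector(state, param);

const state = { products: { p1: { label: "Widget", value: 42 } } };
const selectLabel = (s, id) => s.products[id].label;
const selectValue = (s, id) => s.products[id].value;

const selectProduct = createSelectWithParam(state, "p1");
selectProduct(selectLabel); // "Widget"
selectProduct(selectValue); // 42
```

The question above boils down to whether this returned closure can be marked as a stable hook so the compiler treats it like a direct `useSelector` call.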
### How often does this bug happen?
Every time
### What version of React are you using?
18.2
### What version of React Compiler are you using?
19.0.0-beta-6fc168f-20241025 | Type: Feature Request,Component: Optimizing Compiler | low | Critical |
2,624,487,293 | pytorch | FSDP1 SHARD_GRAD_OP broken after torch upgrade to 2.4.1 and flash_attn upgrade | ### 🐛 Describe the bug
The problem seems to occur when torch's setStorage runs. If I recall correctly, the code worked on torch 2.2. If I disable FSDP and the DDP Optimizer, everything works fine; it fails on 2.4.1. I also tried setting dynamic to False to fix some issues with Flash Attention complaining a lot, and got the same error.
### Error logs
```
[rank7]:W1029 15:41:39.251000 140228203374400 torch/_dynamo/variables/tensor.py:715] [0/0] Graph break from `Tensor.item()`, consider setting:
[rank7]:W1029 15:41:39.251000 140228203374400 torch/_dynamo/variables/tensor.py:715] [0/0] torch._dynamo.config.capture_scalar_outputs = True
[rank7]:W1029 15:41:39.251000 140228203374400 torch/_dynamo/variables/tensor.py:715] [0/0] or:
[rank7]:W1029 15:41:39.251000 140228203374400 torch/_dynamo/variables/tensor.py:715] [0/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
[rank7]:W1029 15:41:39.251000 140228203374400 torch/_dynamo/variables/tensor.py:715] [0/0] to include these operations in the captured graph.
[rank7]:W1029 15:41:39.251000 140228203374400 torch/_dynamo/variables/tensor.py:715] [0/0]
[rank7]: [rank7]: [rank7]: ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
[rank7]: [rank7]: [rank7]: │ /llmlib/scripts/train_mosaic_bert.py:287 in <module> │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 284 │ cli_cfg = om.from_cli(args_list) │
[rank7]: [rank7]: [rank7]: │ 285 │ cfg = om.merge(yaml_cfg, cli_cfg) │
[rank7]: [rank7]: [rank7]: │ 286 │ cfg = cast(DictConfig, cfg) # for type checking │
[rank7]: [rank7]: [rank7]: │ ❱ 287 │ main(cfg) │
[rank7]: [rank7]: [rank7]: │ 288 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /llmlib/scripts/train_mosaic_bert.py:274 in main │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 271 │ │
[rank7]: [rank7]: [rank7]: │ 272 │ if do_train: │
[rank7]: [rank7]: [rank7]: │ 273 │ │ print('Starting training...') │
[rank7]: [rank7]: [rank7]: │ ❱ 274 │ │ trainer.fit() │
[rank7]: [rank7]: [rank7]: │ 275 │ │
[rank7]: [rank7]: [rank7]: │ 276 │ if return_trainer: │
[rank7]: [rank7]: [rank7]: │ 277 │ │ return trainer │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/trainer.py:2467 in fit │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 2464 │ │ │ self.state.scaler = ClosureGradScaler() if self._use_clos │
[rank7]: [rank7]: [rank7]: │ 2465 │ │ │
[rank7]: [rank7]: [rank7]: │ 2466 │ │ self.first_batch_complete = False │
[rank7]: [rank7]: [rank7]: │ ❱ 2467 │ │ self._train_loop() │
[rank7]: [rank7]: [rank7]: │ 2468 │ │ │
[rank7]: [rank7]: [rank7]: │ 2469 │ │ # Zero gradients at the end of fit so same model/optimizer ca │
[rank7]: [rank7]: [rank7]: │ 2470 │ │ # with checkpoint loading. See https://github.com/pytorch/pyt │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/trainer.py:2687 in │
[rank7]: [rank7]: [rank7]: │ _train_loop │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 2684 │ │ │ │ │ self.logger.log_metrics({'time/token': self.state │
[rank7]: [rank7]: [rank7]: │ 2685 │ │ │ │ │ self.logger.log_metrics({'time/token_in_epoch': s │
[rank7]: [rank7]: [rank7]: │ 2686 │ │ │ │ │
[rank7]: [rank7]: [rank7]: │ ❱ 2687 │ │ │ │ total_loss_dict = self._train_batch(use_grad_scaling) │
[rank7]: [rank7]: [rank7]: │ 2688 │ │ │ │ │
[rank7]: [rank7]: [rank7]: │ 2689 │ │ │ │ if use_grad_scaling: │
[rank7]: [rank7]: [rank7]: │ 2690 │ │ │ │ │ self.state.scaler.update() │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/trainer.py:2907 in │
[rank7]: [rank7]: [rank7]: │ _train_batch │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 2904 │ │ │ │ │ │ │ │ **kwargs: self._train_microbatches(mi │
[rank7]: [rank7]: [rank7]: │ 2905 │ │ │ │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 2906 │ │ │ │ │ │ else: │
[rank7]: [rank7]: [rank7]: │ ❱ 2907 │ │ │ │ │ │ │ optimizer.step( │
[rank7]: [rank7]: [rank7]: │ 2908 │ │ │ │ │ │ │ │ closure=lambda loss_dict=total_loss_d │
[rank7]: [rank7]: [rank7]: │ 2909 │ │ │ │ │ │ │ │ **kwargs: self._train_microbatches(mi │
[rank7]: [rank7]: [rank7]: │ 2910 │ │ │ │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/optim/lr_scheduler.py:130 in wrapper │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 127 │ │ │ │ def wrapper(*args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 128 │ │ │ │ │ opt = opt_ref() │
[rank7]: [rank7]: [rank7]: │ 129 │ │ │ │ │ opt._opt_called = True # type: ignore[union-attr │
[rank7]: [rank7]: [rank7]: │ ❱ 130 │ │ │ │ │ return func.__get__(opt, opt.__class__)(*args, ** │
[rank7]: [rank7]: [rank7]: │ 131 │ │ │ │ │
[rank7]: [rank7]: [rank7]: │ 132 │ │ │ │ wrapper._wrapped_by_lr_sched = True # type: ignore[a │
[rank7]: [rank7]: [rank7]: │ 133 │ │ │ │ return wrapper │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/optim/optimizer.py:484 in wrapper │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 481 │ │ │ │ │ │ │ │ f"{func} must return None or a tuple │
[rank7]: [rank7]: [rank7]: │ 482 │ │ │ │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 483 │ │ │ │ │
[rank7]: [rank7]: [rank7]: │ ❱ 484 │ │ │ │ out = func(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 485 │ │ │ │ self._optimizer_step_code() │
[rank7]: [rank7]: [rank7]: │ 486 │ │ │ │ │
[rank7]: [rank7]: [rank7]: │ 487 │ │ │ │ # call optimizer step post hooks │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/utils/_contextlib.py:116 in │
[rank7]: [rank7]: [rank7]: │ decorate_context │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 113 │ @functools.wraps(func) │
[rank7]: [rank7]: [rank7]: │ 114 │ def decorate_context(*args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 115 │ │ with ctx_factory(): │
[rank7]: [rank7]: [rank7]: │ ❱ 116 │ │ │ return func(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 117 │ │
[rank7]: [rank7]: [rank7]: │ 118 │ return decorate_context │
[rank7]: [rank7]: [rank7]: │ 119 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/optim/decoupled_weight_decay.py:308 │
[rank7]: [rank7]: [rank7]: │ in step │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 305 │ │ loss = None │
[rank7]: [rank7]: [rank7]: │ 306 │ │ if closure is not None: │
[rank7]: [rank7]: [rank7]: │ 307 │ │ │ with torch.enable_grad(): │
[rank7]: [rank7]: [rank7]: │ ❱ 308 │ │ │ │ loss = closure() │
[rank7]: [rank7]: [rank7]: │ 309 │ │ │
[rank7]: [rank7]: [rank7]: │ 310 │ │ for group in self.param_groups: │
[rank7]: [rank7]: [rank7]: │ 311 │ │ │ params_with_grad = [] │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/trainer.py:2909 in <lambda> │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 2906 │ │ │ │ │ │ else: │
[rank7]: [rank7]: [rank7]: │ 2907 │ │ │ │ │ │ │ optimizer.step( │
[rank7]: [rank7]: [rank7]: │ 2908 │ │ │ │ │ │ │ │ closure=lambda loss_dict=total_loss_d │
[rank7]: [rank7]: [rank7]: │ ❱ 2909 │ │ │ │ │ │ │ │ **kwargs: self._train_microbatches(mi │
[rank7]: [rank7]: [rank7]: │ 2910 │ │ │ │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 2911 │ │ │ │ else: │
[rank7]: [rank7]: [rank7]: │ 2912 │ │ │ │ │ self._train_microbatches(microbatches, total_loss │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/trainer.py:3078 in │
[rank7]: [rank7]: [rank7]: │ _train_microbatches │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 3075 │ │ │ for microbatch_idx, self.state.batch in enumerate(microba │
[rank7]: [rank7]: [rank7]: │ 3076 │ │ │ │ self.state.batch = self.state.device.batch_to_device( │
[rank7]: [rank7]: [rank7]: │ 3077 │ │ │ │ is_final_microbatch = microbatch_idx + 1 == len(micro │
[rank7]: [rank7]: [rank7]: │ ❱ 3078 │ │ │ │ microbatch_loss_dict = self._train_microbatch(use_gra │
[rank7]: [rank7]: [rank7]: │ 3079 │ │ │ │ │
[rank7]: [rank7]: [rank7]: │ 3080 │ │ │ │ # Aggregate each loss in microbatch_loss_dict into to │
[rank7]: [rank7]: [rank7]: │ 3081 │ │ │ │ for k, microbatch_loss in microbatch_loss_dict.items( │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/trainer.py:3154 in │
[rank7]: [rank7]: [rank7]: │ _train_microbatch │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 3151 │ │ │ │ self.state.precision_config, │
[rank7]: [rank7]: [rank7]: │ 3152 │ │ │ │ self.state.deepspeed_enabled, │
[rank7]: [rank7]: [rank7]: │ 3153 │ │ │ ): │
[rank7]: [rank7]: [rank7]: │ ❱ 3154 │ │ │ │ self.state.outputs = self.state.model(self.state.batc │
[rank7]: [rank7]: [rank7]: │ 3155 │ │ │ │
[rank7]: [rank7]: [rank7]: │ 3156 │ │ │ self.engine.run_event(Event.AFTER_FORWARD) │
[rank7]: [rank7]: [rank7]: │ 3157 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1553 in │
[rank7]: [rank7]: [rank7]: │ _wrapped_call_impl │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1550 │ │ if self._compiled_call_impl is not None: │
[rank7]: [rank7]: [rank7]: │ 1551 │ │ │ return self._compiled_call_impl(*args, **kwargs) # type: │
[rank7]: [rank7]: [rank7]: │ 1552 │ │ else: │
[rank7]: [rank7]: [rank7]: │ ❱ 1553 │ │ │ return self._call_impl(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 1554 │ │
[rank7]: [rank7]: [rank7]: │ 1555 │ def _call_impl(self, *args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 1556 │ │ forward_call = (self._slow_forward if torch._C._get_tracing_s │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1562 in _call_impl │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1559 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
[rank7]: [rank7]: [rank7]: │ 1560 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
[rank7]: [rank7]: [rank7]: │ 1561 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
[rank7]: [rank7]: [rank7]: │ ❱ 1562 │ │ │ return forward_call(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 1563 │ │ │
[rank7]: [rank7]: [rank7]: │ 1564 │ │ try: │
[rank7]: [rank7]: [rank7]: │ 1565 │ │ │ result = None │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:433 in _fn │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 430 │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 431 │ │ │ │
[rank7]: [rank7]: [rank7]: │ 432 │ │ │ try: │
[rank7]: [rank7]: [rank7]: │ ❱ 433 │ │ │ │ return fn(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 434 │ │ │ finally: │
[rank7]: [rank7]: [rank7]: │ 435 │ │ │ │ # Restore the dynamic layer stack depth if necessary. │
[rank7]: [rank7]: [rank7]: │ 436 │ │ │ │ torch._C._functorch.pop_dynamic_layer_stack_and_undo_ │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/models/huggingface.py:488 in forward │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 485 │ │ if isinstance(batch, Mapping): │
[rank7]: [rank7]: [rank7]: │ 486 │ │ │ # Further input validation is left to the huggingface forw │
[rank7]: [rank7]: [rank7]: │ 487 │ │ │ batch = {k: v for k, v in batch.items() if k in self.model │
[rank7]: [rank7]: [rank7]: │ ❱ 488 │ │ │ output = self.model(**batch) # type: ignore (thirdparty) │
[rank7]: [rank7]: [rank7]: │ 489 │ │ else: │
[rank7]: [rank7]: [rank7]: │ 490 │ │ │ raise ValueError( │
[rank7]: [rank7]: [rank7]: │ 491 │ │ │ │ 'Unexpected batch type. Expected a dictionary with key │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1553 in │
[rank7]: [rank7]: [rank7]: │ _wrapped_call_impl │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1550 │ │ if self._compiled_call_impl is not None: │
[rank7]: [rank7]: [rank7]: │ 1551 │ │ │ return self._compiled_call_impl(*args, **kwargs) # type: │
[rank7]: [rank7]: [rank7]: │ 1552 │ │ else: │
[rank7]: [rank7]: [rank7]: │ ❱ 1553 │ │ │ return self._call_impl(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 1554 │ │
[rank7]: [rank7]: [rank7]: │ 1555 │ def _call_impl(self, *args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 1556 │ │ forward_call = (self._slow_forward if torch._C._get_tracing_s │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1603 in _call_impl │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1600 │ │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hook │
[rank7]: [rank7]: [rank7]: │ 1601 │ │ │ │ args = bw_hook.setup_input_hook(args) │
[rank7]: [rank7]: [rank7]: │ 1602 │ │ │ │
[rank7]: [rank7]: [rank7]: │ ❱ 1603 │ │ │ result = forward_call(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 1604 │ │ │ if _global_forward_hooks or self._forward_hooks: │
[rank7]: [rank7]: [rank7]: │ 1605 │ │ │ │ for hook_id, hook in ( │
[rank7]: [rank7]: [rank7]: │ 1606 │ │ │ │ │ *_global_forward_hooks.items(), │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/distributed/fsdp/fully_sharded_data_par │
[rank7]: [rank7]: [rank7]: │ allel.py:849 in forward │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 846 │ │ ): │
[rank7]: [rank7]: [rank7]: │ 847 │ │ │ args, kwargs = _root_pre_forward(self, self, args, kwargs │
[rank7]: [rank7]: [rank7]: │ 848 │ │ │ unused = None │
[rank7]: [rank7]: [rank7]: │ ❱ 849 │ │ │ args, kwargs = _pre_forward( │
[rank7]: [rank7]: [rank7]: │ 850 │ │ │ │ self, │
[rank7]: [rank7]: [rank7]: │ 851 │ │ │ │ handle, │
[rank7]: [rank7]: [rank7]: │ 852 │ │ │ │ _pre_forward_unshard, │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/distributed/fsdp/_runtime_utils.py:381 │
[rank7]: [rank7]: [rank7]: │ in _pre_forward │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 378 │ │ if handle: │
[rank7]: [rank7]: [rank7]: │ 379 │ │ │ handle._training_state = HandleTrainingState.FORWARD │
[rank7]: [rank7]: [rank7]: │ 380 │ │ if unshard_fn is not None: │
[rank7]: [rank7]: [rank7]: │ ❱ 381 │ │ │ unshard_fn(state, handle) │
[rank7]: [rank7]: [rank7]: │ 382 │ │ # Register post-backward hooks to reshard the parameters and │
[rank7]: [rank7]: [rank7]: │ 383 │ │ # their gradients. They must be re-registered every forward p │
[rank7]: [rank7]: [rank7]: │ 384 │ │ # the `grad_fn` is mutated. │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/distributed/fsdp/_runtime_utils.py:416 │
[rank7]: [rank7]: [rank7]: │ in _pre_forward_unshard │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 413 │ # If the handles have been prefetched, then there is no need to c │
[rank7]: [rank7]: [rank7]: │ 414 │ # `_unshard()` again │
[rank7]: [rank7]: [rank7]: │ 415 │ if not handle._prefetched: │
[rank7]: [rank7]: [rank7]: │ ❱ 416 │ │ _unshard(state, handle, state._unshard_stream, state._pre_uns │
[rank7]: [rank7]: [rank7]: │ 417 │ handle._needs_pre_forward_unshard = False │
[rank7]: [rank7]: [rank7]: │ 418 │ # Don't wait during trace │
[rank7]: [rank7]: [rank7]: │ 419 │ if not torch.distributed._functional_collectives.is_torchdynamo_c │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/distributed/fsdp/_runtime_utils.py:300 │
[rank7]: [rank7]: [rank7]: │ in _unshard │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 297 │ │ │ ): │
[rank7]: [rank7]: [rank7]: │ 298 │ │ │ │ event.synchronize() │
[rank7]: [rank7]: [rank7]: │ 299 │ with state._device_handle.stream(unshard_stream): │
[rank7]: [rank7]: [rank7]: │ ❱ 300 │ │ handle.unshard() │
[rank7]: [rank7]: [rank7]: │ 301 │ │ handle.post_unshard() │
[rank7]: [rank7]: [rank7]: │ 302 │
[rank7]: [rank7]: [rank7]: │ 303 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/_patch_pytorch.py:1009 in │
[rank7]: [rank7]: [rank7]: │ unshard_with_sync │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1006 │ from torch.distributed.fsdp._flat_param import FlatParamHandle │
[rank7]: [rank7]: [rank7]: │ 1007 │ original_unshard = FlatParamHandle.unshard │
[rank7]: [rank7]: [rank7]: │ 1008 │ │
[rank7]: [rank7]: [rank7]: │ ❱ 1009 │ @no_type_check │
[rank7]: [rank7]: [rank7]: │ 1010 │ def unshard_with_sync(self): │
[rank7]: [rank7]: [rank7]: │ 1011 │ │ """Run the unshard logic, but with a sync after a :meth:`_all │
[rank7]: [rank7]: [rank7]: │ 1012 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/composer/trainer/_patch_pytorch.py:1039 in │
[rank7]: [rank7]: [rank7]: │ torch_dynamo_resume_in_unshard_with_sync_at_1039 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1036 │ │ found_cuda_oom_tensor = torch.tensor([0], dtype=torch.uint8). │
[rank7]: [rank7]: [rank7]: │ 1037 │ │ │
[rank7]: [rank7]: [rank7]: │ 1038 │ │ dist.all_reduce(found_cuda_oom_tensor, reduce_operation='MAX' │
[rank7]: [rank7]: [rank7]: │ ❱ 1039 │ │ found_cuda_oom = found_cuda_oom_tensor.item() │
[rank7]: [rank7]: [rank7]: │ 1040 │ │ # Signal current rank is still in batch │
[rank7]: [rank7]: [rank7]: │ 1041 │ │ all_ranks_finished_tensor = torch.tensor([0], dtype=torch.uin │
[rank7]: [rank7]: [rank7]: │ 1042 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:600 in _fn │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 597 │ │ def _fn(*args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 598 │ │ │ prior = set_eval_frame(callback) │
[rank7]: [rank7]: [rank7]: │ 599 │ │ │ try: │
[rank7]: [rank7]: [rank7]: │ ❱ 600 │ │ │ │ return fn(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 601 │ │ │ finally: │
[rank7]: [rank7]: [rank7]: │ 602 │ │ │ │ set_eval_frame(prior) │
[rank7]: [rank7]: [rank7]: │ 603 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_functorch/aot_autograd.py:987 in │
[rank7]: [rank7]: [rank7]: │ forward │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 984 │ │ full_args = [] │
[rank7]: [rank7]: [rank7]: │ 985 │ │ full_args.extend(params_flat) │
[rank7]: [rank7]: [rank7]: │ 986 │ │ full_args.extend(runtime_args) │
[rank7]: [rank7]: [rank7]: │ ❱ 987 │ │ return compiled_fn(full_args) │
[rank7]: [rank7]: [rank7]: │ 988 │ │
[rank7]: [rank7]: [rank7]: │ 989 │ # Just for convenience │
[rank7]: [rank7]: [rank7]: │ 990 │ forward.zero_grad = mod.zero_grad │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappe │
[rank7]: [rank7]: [rank7]: │ rs.py:217 in runtime_wrapper │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 214 │ │ │ try: │
[rank7]: [rank7]: [rank7]: │ 215 │ │ │ │ if grad_enabled: │
[rank7]: [rank7]: [rank7]: │ 216 │ │ │ │ │ torch._C._set_grad_enabled(False) │
[rank7]: [rank7]: [rank7]: │ ❱ 217 │ │ │ │ all_outs = call_func_at_runtime_with_args( │
[rank7]: [rank7]: [rank7]: │ 218 │ │ │ │ │ compiled_fn, args, disable_amp=disable_amp, steal │
[rank7]: [rank7]: [rank7]: │ 219 │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 220 │ │ │ finally: │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_functorch/_aot_autograd/utils.py:120 │
[rank7]: [rank7]: [rank7]: │ in call_func_at_runtime_with_args │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 117 │ context = torch._C._DisableAutocast if disable_amp else nullcontex │
[rank7]: [rank7]: [rank7]: │ 118 │ with context(): │
[rank7]: [rank7]: [rank7]: │ 119 │ │ if hasattr(f, "_boxed_call"): │
[rank7]: [rank7]: [rank7]: │ ❱ 120 │ │ │ out = normalize_as_list(f(args)) │
[rank7]: [rank7]: [rank7]: │ 121 │ │ else: │
[rank7]: [rank7]: [rank7]: │ 122 │ │ │ # TODO: Please remove soon │
[rank7]: [rank7]: [rank7]: │ 123 │ │ │ # https://github.com/pytorch/pytorch/pull/83137#issuecomme │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappe │
[rank7]: [rank7]: [rank7]: │ rs.py:451 in wrapper │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 448 │ │ │ │ │ runtime_metadata.num_forward_returns, │
[rank7]: [rank7]: [rank7]: │ 449 │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 450 │ │ │ │ return out │
[rank7]: [rank7]: [rank7]: │ ❱ 451 │ │ │ return compiled_fn(runtime_args) │
[rank7]: [rank7]: [rank7]: │ 452 │ │ │
[rank7]: [rank7]: [rank7]: │ 453 │ │ return wrapper │
[rank7]: [rank7]: [rank7]: │ 454 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/codecache.py:1131 in __call__ │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 1128 │ │
[rank7]: [rank7]: [rank7]: │ 1129 │ def __call__(self, inputs: List[Any]) -> Any: │
[rank7]: [rank7]: [rank7]: │ 1130 │ │ assert self.current_callable is not None │
[rank7]: [rank7]: [rank7]: │ ❱ 1131 │ │ return self.current_callable(inputs) │
[rank7]: [rank7]: [rank7]: │ 1132 │
[rank7]: [rank7]: [rank7]: │ 1133 │
[rank7]: [rank7]: [rank7]: │ 1134 def cpp_compiler() -> str: │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/compile_fx.py:944 in run │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 941 │ │
[rank7]: [rank7]: [rank7]: │ 942 │ def run(new_inputs): │
[rank7]: [rank7]: [rank7]: │ 943 │ │ copy_misaligned_inputs(new_inputs, inputs_to_check) │
[rank7]: [rank7]: [rank7]: │ ❱ 944 │ │ return model(new_inputs) │
[rank7]: [rank7]: [rank7]: │ 945 │ │
[rank7]: [rank7]: [rank7]: │ 946 │ return run │
[rank7]: [rank7]: [rank7]: │ 947 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /tmp/torchinductor_root/p7/cp7mtlyomhhf7t4rpjcvw6y456kkfpckmd5xryemyrunauchz │
[rank7]: [rank7]: [rank7]: │ bcv.py:84 in call │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 81 │ │ del arg0_1 │
[rank7]: [rank7]: [rank7]: │ 82 │ │ # Source Nodes: [], Original ATen: [] │
[rank7]: [rank7]: [rank7]: │ 83 │ │ stream7 = get_raw_stream(7) │
[rank7]: [rank7]: [rank7]: │ ❱ 84 │ │ triton_poi_fused_0.run(buf0, arg1_1, 212553216, grid=grid(2125 │
[rank7]: [rank7]: [rank7]: │ 85 │ │ del arg1_1 │
[rank7]: [rank7]: [rank7]: │ 86 │ │ del buf0 │
[rank7]: [rank7]: [rank7]: │ 87 │ return () │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 820 in run │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 817 │ │ │ if len(self.launchers) == 0: │
[rank7]: [rank7]: [rank7]: │ 818 │ │ │ │ self.precompile() │
[rank7]: [rank7]: [rank7]: │ 819 │ │ │ if len(self.launchers) > 1: │
[rank7]: [rank7]: [rank7]: │ ❱ 820 │ │ │ │ self.autotune_to_one_config(*args, grid=grid, **kwarg │
[rank7]: [rank7]: [rank7]: │ 821 │ │ │
[rank7]: [rank7]: [rank7]: │ 822 │ │ if not getattr( │
[rank7]: [rank7]: [rank7]: │ 823 │ │ │ self.launchers[0].config, "found_by_coordesc", False │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 718 in autotune_to_one_config │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 715 │ def autotune_to_one_config(self, *args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 716 │ │ """Do the actual autotuning""" │
[rank7]: [rank7]: [rank7]: │ 717 │ │ start_time = time.time_ns() │
[rank7]: [rank7]: [rank7]: │ ❱ 718 │ │ timings = self.benchmark_all_configs(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 719 │ │ time_taken_ns = time.time_ns() - start_time │
[rank7]: [rank7]: [rank7]: │ 720 │ │ self.launchers = [builtins.min(timings, key=timings.get)] │
[rank7]: [rank7]: [rank7]: │ 721 │ │ if self.save_cache_hook: │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_dynamo/utils.py:231 in time_wrapper │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 228 │ │ │ try: │
[rank7]: [rank7]: [rank7]: │ 229 │ │ │ │ with torch.profiler.record_function(f"{key} (dynamo_t │
[rank7]: [rank7]: [rank7]: │ 230 │ │ │ │ │ t0 = time.time() │
[rank7]: [rank7]: [rank7]: │ ❱ 231 │ │ │ │ │ r = func(*args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 232 │ │ │ │ │ time_spent = time.time() - t0 │
[rank7]: [rank7]: [rank7]: │ 233 │ │ │ │ compilation_time_metrics[key].append(time_spent) │
[rank7]: [rank7]: [rank7]: │ 234 │ │ │ except Exception as e: │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 693 in benchmark_all_configs │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 690 │ │
[rank7]: [rank7]: [rank7]: │ 691 │ @dynamo_timed │
[rank7]: [rank7]: [rank7]: │ 692 │ def benchmark_all_configs(self, *args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ ❱ 693 │ │ timings = { │
[rank7]: [rank7]: [rank7]: │ 694 │ │ │ launcher: self.bench(launcher, *args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 695 │ │ │ for launcher in self.launchers │
[rank7]: [rank7]: [rank7]: │ 696 │ │ } │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 694 in <dictcomp> │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 691 │ @dynamo_timed │
[rank7]: [rank7]: [rank7]: │ 692 │ def benchmark_all_configs(self, *args, **kwargs): │
[rank7]: [rank7]: [rank7]: │ 693 │ │ timings = { │
[rank7]: [rank7]: [rank7]: │ ❱ 694 │ │ │ launcher: self.bench(launcher, *args, **kwargs) │
[rank7]: [rank7]: [rank7]: │ 695 │ │ │ for launcher in self.launchers │
[rank7]: [rank7]: [rank7]: │ 696 │ │ } │
[rank7]: [rank7]: [rank7]: │ 697 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 665 in bench │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 662 │ │ │ │ stream=stream, │
[rank7]: [rank7]: [rank7]: │ 663 │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 664 │ │ │
[rank7]: [rank7]: [rank7]: │ ❱ 665 │ │ return do_bench_gpu(kernel_call, rep=40, fast_flush=True) │
[rank7]: [rank7]: [rank7]: │ 666 │ │
[rank7]: [rank7]: [rank7]: │ 667 │ def clone_args(self, *args, **kwargs) -> Tuple[List[Any], Dict[st │
[rank7]: [rank7]: [rank7]: │ 668 │ │ from ..compile_fx import clone_preserve_strides │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/runtime_utils.py:113 │
[rank7]: [rank7]: [rank7]: │ in do_bench_gpu │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 110 │ │
[rank7]: [rank7]: [rank7]: │ 111 │ if quantile_field_name not in kwargs: │
[rank7]: [rank7]: [rank7]: │ 112 │ │ kwargs[quantile_field_name] = (0.5, 0.2, 0.8) │
[rank7]: [rank7]: [rank7]: │ ❱ 113 │ return triton_do_bench(*args, **kwargs)[0] │
[rank7]: [rank7]: [rank7]: │ 114 │
[rank7]: [rank7]: [rank7]: │ 115 │
[rank7]: [rank7]: [rank7]: │ 116 def do_bench_cpu(fn, warmup=5, times=20): │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/triton/testing.py:103 in do_bench │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 100 │ assert return_mode in ["min", "max", "mean", "median"] │
[rank7]: [rank7]: [rank7]: │ 101 │ import torch │
[rank7]: [rank7]: [rank7]: │ 102 │ │
[rank7]: [rank7]: [rank7]: │ ❱ 103 │ fn() │
[rank7]: [rank7]: [rank7]: │ 104 │ torch.cuda.synchronize() │
[rank7]: [rank7]: [rank7]: │ 105 │ │
[rank7]: [rank7]: [rank7]: │ 106 │ # We maintain a buffer of 256 MB that we clear │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 657 in kernel_call │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 654 │ │ │ │ │ {**dict(zip(self.arg_names, args)), **launcher.co │
[rank7]: [rank7]: [rank7]: │ 655 │ │ │ │ ) │
[rank7]: [rank7]: [rank7]: │ 656 │ │ │ │
[rank7]: [rank7]: [rank7]: │ ❱ 657 │ │ │ cloned_args, cloned_kwargs = self.clone_args(*args, **kwa │
[rank7]: [rank7]: [rank7]: │ 658 │ │ │ launcher( │
[rank7]: [rank7]: [rank7]: │ 659 │ │ │ │ *cloned_args, │
[rank7]: [rank7]: [rank7]: │ 660 │ │ │ │ **cloned_kwargs, │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/runtime/triton_heuristics.py: │
[rank7]: [rank7]: [rank7]: │ 677 in clone_args │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 674 │ │ for i, arg in enumerate(args): │
[rank7]: [rank7]: [rank7]: │ 675 │ │ │ if self.fn.arg_names[i] in self.mutated_arg_names: │
[rank7]: [rank7]: [rank7]: │ 676 │ │ │ │ assert isinstance(arg, torch.Tensor) │
[rank7]: [rank7]: [rank7]: │ ❱ 677 │ │ │ │ cloned_args.append(clone_preserve_strides(arg)) │
[rank7]: [rank7]: [rank7]: │ 678 │ │ │ else: │
[rank7]: [rank7]: [rank7]: │ 679 │ │ │ │ cloned_args.append(arg) │
[rank7]: [rank7]: [rank7]: │ 680 │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ /usr/lib/python3/dist-packages/torch/_inductor/compile_fx.py:889 in │
[rank7]: [rank7]: [rank7]: │ clone_preserve_strides │
[rank7]: [rank7]: [rank7]: │ │
[rank7]: [rank7]: [rank7]: │ 886 │ needed_size = ( │
[rank7]: [rank7]: [rank7]: │ 887 │ │ sum((shape - 1) * stride for shape, stride in zip(x.size(), x │
[rank7]: [rank7]: [rank7]: │ 888 │ ) │
[rank7]: [rank7]: [rank7]: │ ❱ 889 │ buffer = torch.as_strided(x, (needed_size,), (1,)).clone() │
[rank7]: [rank7]: [rank7]: │ 890 │ return torch.as_strided(buffer, x.size(), x.stride()) │
[rank7]: [rank7]: [rank7]: │ 891 │
[rank7]: [rank7]: [rank7]: │ 892 │
[rank7]: [rank7]: [rank7]: ╰──────────────────────────────────────────────────────────────────────────────╯
[rank7]: [rank7]: [rank7]: RuntimeError: setStorage: sizes [212553216], strides [1], storage offset 0, and
[rank7]: [rank7]: [rank7]: itemsize 2 requiring a storage size of 425106432 are out of bounds for storage
[rank7]: [rank7]: [rank7]: of size 0
[rank7]:[W1029 15:44:28.411663930 Functional.cpp:45] Warning: At the time of process termination, there are still 4 unwaited c10d_functional collective calls. Please review your program to ensure c10d_functional.wait_tensor() is invoked on all tensors returned from c10d_functional collective ops before they are used. (function ~WorkRegistry)
```
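For reference, the storage-size arithmetic behind the final `RuntimeError` can be sketched in pure Python (illustrative only; `required_storage_bytes` is a hypothetical helper, not a PyTorch internal):

```python
def required_storage_bytes(sizes, strides, itemsize, storage_offset=0):
    """Minimum backing-storage size (in bytes) a strided view needs.

    Mirrors the check behind "requiring a storage size of N are out of
    bounds for storage of size 0": the last addressable element sets the
    minimum, and a storage of size 0 can never satisfy it.
    """
    if any(s == 0 for s in sizes):
        return 0  # empty tensors address no storage at all
    last_elem = storage_offset + sum((s - 1) * st for s, st in zip(sizes, strides))
    return (last_elem + 1) * itemsize

# The numbers from the traceback: sizes [212553216], strides [1], itemsize 2
print(required_storage_bytes([212553216], [1], 2))  # -> 425106432
```

That matches the 425106432 bytes the error demands, so the failure is not the arithmetic itself but that the tensor's backing storage apparently had size 0 by the time `clone_preserve_strides` ran.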
### Minified repro
The architecture is a simple MosaicBERT architecture.
_No response_
### Versions
2.4.1 (pip version), run through Composer
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @ezyang @penguinwu | oncall: distributed,triaged,module: fsdp,oncall: pt2,pt2d-triage-nov2024 | low | Critical |
2,624,521,550 | langchain | DOC: DuckDuckGo community tool docs describe an `output_format` option that is not in the published referenced version | ### URL
https://python.langchain.com/api_reference/community/tools/langchain_community.tools.ddg_search.tool.DuckDuckGoSearchResults.html#langchain_community.tools.ddg_search.tool.DuckDuckGoSearchResults.output_format
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The [referenced documentation](https://python.langchain.com/api_reference/community/tools/langchain_community.tools.ddg_search.tool.DuckDuckGoSearchResults.html#langchain_community.tools.ddg_search.tool.DuckDuckGoSearchResults.output_format) shows an `output_format` option that is only available in the latest `master` code; however, the documentation for version `0.3.3` states that it's available in that version.
The code on the `0.3.3` tag doesn't include this option https://github.com/langchain-ai/langchain/blob/langchain-community%3D%3D0.3.3/libs/community/langchain_community/tools/ddg_search/tool.py#L77.
### Idea or request for content:
It seems like the documentation and published dependency are not in sync; publishing a new version and updating the documentation accordingly should fix the issue.
For a deeper fix, there might be some deployment synchronization that needs to take place to prevent the issue. | 🤖:docs | low | Minor |
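Until the docs and the published package are back in sync, a small runtime probe can confirm whether the installed release actually exposes a documented option (a hedged sketch; `supports_option` and `FakeTool` are made up for illustration, and the `model_fields` lookup assumes a pydantic-v2 model, which the langchain tool classes in 0.3.x are):

```python
def supports_option(cls, name: str) -> bool:
    """Report whether `cls` declares a field/attribute called `name`."""
    fields = getattr(cls, "model_fields", None)  # pydantic-v2 models
    if fields is not None:
        return name in fields
    return hasattr(cls, name)  # fallback for plain classes

class FakeTool:  # stand-in for DuckDuckGoSearchResults in this sketch
    model_fields = {"output_format": None, "max_results": None}

print(supports_option(FakeTool, "output_format"))  # -> True
```

Running the same check against the real class on `0.3.3` would return `False`, confirming the docs/package mismatch before it bites at runtime.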
2,624,531,791 | angular | Trigger computed signals on mutated value using custom equality | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
Hello to the awesome Angular team 😍
Since JS still supports mutating arrays with simple means such as push() etc. :)... and it's still a common practice in a lot of codebases...
I hoped that pushing a new item to an array inside a signal would trigger the computed signal.
Obviously I see now that since the reference of the array object stays the same,
I have to use .slice() or some other means to create a new array instance after every signal.update(...).
While that is a solution, I hoped that by using the custom equality option I could compare the actual array content or length before and after the change,
thus making the code cleaner, even though the framework would need to work a little harder to find out whether it's the same or not...
The problem is, I found out that if I only mutate the array and don't create a new instance (i.e. items.push(...); return items;),
the custom equal function gets the same object as both the "a" and "b" parameters...
Apparently the framework didn't save an old copy of the signal value before the change, so there is no way to know that it's changed.
I realize this could make things more complicated and might not make sense performance-wise,
but I decided to raise the issue for your consideration, since it could make sense in certain applications, and since Signals are still in dev preview...
I put up a stackblitz reproducing this issue.
Thanks a lot!!!
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-ta7t5b?file=src%2Fmain.ts
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.2.7
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 18.2.7
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.7
@angular-devkit/build-angular 18.2.7
@angular-devkit/core 18.2.7
@angular-devkit/schematics 18.2.7
@schematics/angular 18.2.7
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
### Anything else?
_No response_ | area: core,core: reactivity,cross-cutting: signals | low | Critical |
2,624,533,467 | ollama | Docs: Add Linux manual instructions that can run without root / sudo | ## Description
In the [Linux manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install), the commands are shown requiring `sudo` access. This is usually fine with personal machines, but often isn't for shared or managed machines. The request here is to add instructions on how to install without `root` access.
## Proposal
I'm happy to contribute the README PR. Here are my suggested changes:
---
## Manual install
### Download and extract the package:
```shell
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
```
### Non-root install
For those installing without root/sudo access, the install can also be done in user space with the appropriate additions to `PATH` and `LD_LIBRARY_PATH`:
```shell
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
# Substitute any place you have write access for ~/.local
mkdir -p ~/.local
tar -C ~/.local -xzf ollama-linux-amd64.tgz
# Place this in your ~/.bashrc to persist
export PATH=$HOME/.local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/.local/lib/ollama:$LD_LIBRARY_PATH
```
### Start Ollama:
```shell
ollama serve
```
In another terminal, verify that Ollama is running:
```shell
ollama -v
``` | feature request | low | Major |
2,624,534,573 | ui | Using force install or peer deps is unacceptable and will break a lot of production environments in the future | ### Describe the bug
Running a force install is a terrible idea. Even knowing that this installation works, I'm installing it in production, and you will produce a lot of issues in production environments in the future by being lazy. In fact, if your deps are outdated, you don't support Next 15.
https://github.com/shadcn-ui/ui/issues/5637
### Affected component/components
toast,radio-group,pagination,navigation-menu,drowdown-menu,dialog,command
### How to reproduce
https://github.com/shadcn-ui/ui/issues/5637
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/busy-breeze-go8s7s
### Logs
```bash
https://github.com/shadcn-ui/ui/issues/5637
```
### System Info
```bash
https://github.com/shadcn-ui/ui/issues/5637
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,624,534,576 | vscode | Default word wrap setting is no longer applied to open files when VS Code is relaunched |
Type: <b>Bug</b>
I am using the default word wrap settings, which wrap the lines of Markdown files when I open them during a session. However, if I leave a Markdown file open when I close Code, its lines are no longer wrapped when I relaunch the application. To wrap them, I either have to press `Alt + z` or close and reopen the file.
Steps to reproduce:
1. Open [Jack.md](https://github.com/user-attachments/files/17575457/Jack.md) in Code and observe that the long line is wrapped.
2. Close code and relaunch it. Ensure that the file is automatically reopened -- do not manually reopen it.
3. Observe that the long line is no longer wrapped.
This occurs with all extensions disabled. I tried to reproduce it in the latest Insiders build, but I couldn't work out how to get it to reopen files that were open in the last session.
To be clear, this is not a duplicate of https://github.com/microsoft/vscode/issues/103199 . That issue is a request for the preservation of custom word wrap settings per file, whereas I am pointing out that Code doesn't always honour the default settings.
VS Code version: Code 1.95.0 (912bb683695358a54ae0c670461738984cbb5b95, 2024-10-28T20:16:24.561Z)
OS version: Linux x64 5.15.0-124-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-1035G4 CPU @ 1.10GHz (8 x 1219)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 1, 1|
|Memory (System)|7.45GB (4.33GB free)|
|Process Argv|--crash-reporter-id a5f80cc2-1946-4a5a-b5a0-95b3d7c0f4dc|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|unity|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|unity|
|XDG_SESSION_TYPE|x11|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
da93g388:31013173
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
iacca2:31156134
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
```
</details>
<!-- generated by issue reporter --> | bug,editor-wrapping | low | Critical |
2,624,537,783 | godot | OptionButton dropdown not properly inheriting scale of parent CanvasLayer in 4.3 | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Apple - Apple M1 Pro Godot Engine v4.3.stable.official.77dcf97d8 API 4.1 Metal
### Issue description
This is a duplicate of issue #94247 only that I am able to reproduce the same bug in v4.3
### Steps to reproduce
Increase or reduce the scale of the parent Control node, the OptionButton dropdown won't inherit the scale.


### Minimal reproduction project (MRP)
[bug-report.zip](https://github.com/user-attachments/files/17575562/bug-report.zip) | bug,topic:gui | low | Critical |
2,624,557,379 | pytorch | Using xpu module in the cuda version Pytorch | ### 🚀 The feature, motivation and pitch
I am working on building a demo that uses an NV GPU as a comparison with an Intel XPU.
Additionally, I wonder if it's possible to distribute part of the computations in some tasks to the XPU while using the NV GPU.
Therefore, I would like to ask if there are plans to merge the intel /test/xpu branch with the cuda branch?
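Until a unified build exists, the usual workaround is runtime backend selection per task. A hedged sketch (the helper name is made up; it assumes nothing beyond the public `torch.cuda` / `torch.xpu` availability checks and falls back to CPU when a backend, or torch itself, is absent):

```python
def pick_device(preferred: str) -> str:
    """Return `preferred` if that backend is usable, else fall back to CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no torch at all: nothing to accelerate
    if preferred == "xpu":
        xpu = getattr(torch, "xpu", None)  # only present on XPU-enabled builds
        if xpu is not None and xpu.is_available():
            return "xpu"
    if preferred == "cuda" and torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

With a single-backend build, one of the two branches will always resolve to `"cpu"`, which is exactly the limitation this request is about.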
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere @ptrblck @msaroufim @gujinghui @EikanWang @fengyuan14 @guangyey | module: build,module: cuda,triaged,enhancement,module: xpu | medium | Major |
2,624,559,989 | godot | Anonymous lambdas can be randomly assigned a new context object if the original context object is missing at the time of invocation | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6590) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads)
### Issue description
When you make an anonymous lambda that contains a reference to a local variable or function in the current script where the lambda is defined, and then this lambda is later called when the current node is missing, Godot can (unpredictably) bind this lambda to a completely different node.
In the case that the new node does not have the same members, it will throw an error like in the screenshot below, which is super confusing. But an even more dangerous case I have encountered is if the new node *does* share the same members being referenced in the lambda (i.e. `position` or `name`). In this case, the lambda will silently proceed with the wrong target node as though nothing is wrong, leading to a silent bug that can be almost impossible to diagnose.
In the screenshot below, the `story.gd` script attached to the `story` node is where the error occurs. The lambda being constructed contains a call to a function `_perform_place_transition` which is local to `story.gd`. However, at runtime, the lambda construction fails. The error message and the `self` field in the debugger window both say that the error is occuring on a completely different node in the scene - the `tf_mirror` node under the right eye of a penguin.
Godot will randomly re-bind the lambda to one of a few other nodes, or it will not re-bind at all. There appears to be no pattern as to how it chooses how/whether to re-bind.

The problem line of code:
```Game.loader.queue_load_op(func(): await _perform_place_transition(place_name, spawn_point_name))```
The rebinding bug *still occurs* if the lambda does not contain an `await`:
```Game.loader.queue_load_op(func(): _perform_place_transition(place_name, spawn_point_name))```
The rebinding bug *does NOT occur* if an anonymous lambda is not used:
```Game.loader.queue_load_op(_perform_place_transition.bind(place_name, spawn_point_name))```
Just to be clear:
It is a user bug to allow an anonymous lambda to be invoked on a no-longer-valid object. The bug being reported here is the random re-binding being done by Godot.
### Steps to reproduce
I can't share my entire commercial project, and I have attempted but not succeeded in making a reduced or fresh repro project.
### Minimal reproduction project (MRP)
N/A | bug,topic:gdscript | low | Critical |
2,624,568,739 | langchain | BUG: langchain_anthropic tool use cannot run because chat_models.py in langchain_anthropic has a problem with "args": event.delta.partial_json and "stop_reason": event.delta.stop_reason | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from dotenv import load_dotenv
from langchain import hub
from langchain.agents import (
AgentExecutor,
create_react_agent,
)
from langchain_core.tools import Tool
# Load environment variables from .env file
load_dotenv()
# Define a very simple tool function that returns the current time
def get_current_time(*args, **kwargs):
"""Returns the current time in H:MM AM/PM format."""
import datetime # Import datetime module to get current time
now = datetime.datetime.now() # Get current time
return now.strftime("%I:%M %p") # Format time in H:MM AM/PM format
def get_user_name(*args, **kwargs):
return "laios"
# List of tools available to the agent
tools = [
Tool(
name="Time", # Name of the tool
func=get_current_time, # Function that the tool will execute
# Description of the tool
description="Useful for when you need to know the current time",
),
]
# Pull the prompt template from the hub
# ReAct = Reason and Action
# https://smith.langchain.com/hub/hwchase17/react
prompt = hub.pull("hwchase17/react")
# Initialize a ChatAnthropic model
from langchain_anthropic import ChatAnthropic
ANTHROPIC_API_KEY="..."
model="claude-3-5-sonnet-20241022"
llm = ChatAnthropic(model=model , api_key = ANTHROPIC_API_KEY)
# Create the ReAct agent using the create_react_agent function
agent = create_react_agent(
llm=llm,
tools=tools,
prompt=prompt,
stop_sequence=True,
)
# Create an agent executor from the agent and tools
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=True,
)
# Run the agent with a test query
response = agent_executor.invoke({"input": "What time is it?"})
# Print the response from the agent
print("response:", response)
"""
Cannot pass the simple tool-use demo of agent_executor in langchain_anthropic.
chat_models.py in langchain_anthropic has a problem with "args": event.delta.partial_json
and "stop_reason": event.delta.stop_reason, at lines 1223 and 1235 of that file.
"""
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\5_agents_and_tools\1_agent_and_tools_basics.py", line 68, in <module>
response = agent_executor.invoke({"input": "What time is it?"})
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\chains\base.py", line 170, in invoke
raise e
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\chains\base.py", line 160, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\agents\agent.py", line 1629, in _call
next_step_output = self._take_next_step(
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\agents\agent.py", line 1335, in _take_next_step
[
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\agents\agent.py", line 1335, in <listcomp>
[
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\agents\agent.py", line 1363, in _iter_next_step
output = self._action_agent.plan(
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain\agents\agent.py", line 464, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 3407, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 3394, in transform
yield from self._transform_stream_with_config(
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 2197, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 3357, in _transform
yield from final_pipeline
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 1413, in transform
for ichunk in input:
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 5561, in transform
yield from self.bound.transform(
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\runnables\base.py", line 1431, in transform
yield from self.stream(final, config, **kwargs)
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\language_models\chat_models.py", line 420, in stream
raise e
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_core\language_models\chat_models.py", line 400, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_anthropic\chat_models.py", line 716, in _stream
msg = _make_message_chunk_from_anthropic_event(
File "D:\.workspace\web3\0camp\hackston\ai\tutorial\langchain-crash-course\langchain-demo\lib\site-packages\langchain_anthropic\chat_models.py", line 1235, in _make_message_chunk_from_anthropic_event
"stop_reason": event.delta.stop_reason,
AttributeError: 'NoneType' object has no attribute 'stop_reason'
```
### Description
Cannot pass the simple tool-use demo of agent_executor in langchain_anthropic.
chat_models.py in langchain_anthropic has a problem with "args": event.delta.partial_json and "stop_reason": event.delta.stop_reason (lines 1223 and 1235 of that file).
When you call
response = agent_executor.invoke({"input": "What time is it?"})
execution eventually reaches _make_message_chunk_from_anthropic_event() in chat_models.py, where event.delta does not have a partial_json attribute, and event.delta is None.
Please check it.
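A defensive-access sketch of what the event handler could do instead (the `SimpleNamespace` objects stand in for the anthropic streaming events; `safe_stop_reason` is an illustration, not the actual fix in the package):

```python
from types import SimpleNamespace

def safe_stop_reason(event):
    # Guard both levels: `delta` may be absent, or present but None,
    # depending on the streaming event type.
    delta = getattr(event, "delta", None)
    return getattr(delta, "stop_reason", None)

# The crashing case from the traceback: event.delta is None.
print(safe_stop_reason(SimpleNamespace(delta=None)))  # -> None
```

The same pattern applies to the `partial_json` access at line 1223: read the attribute with a default instead of assuming every event carries it.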
### System Info
win10
pycharm
requirements.txt
langchain-openai>=0.2.2
langchain_ollama>=0.2.0
python-dotenv>=1.0.1
langchain>=0.3.0
langchain-community>=0.3.0
langchain-anthropic>=0.2.0
langchain-google-genai>=1.1.0
langchain-google-firestore>=0.3.1
firestore>=0.0.8
chromadb>=0.5.15
tiktoken>=0.8.0
sentence-transformers>=3.1.0
bs4>=0.0.2
firecrawl-py>=0.0.14
langchainhub>=0.1.21
wikipedia>=1.4.0
tavily-python>=0.3.4 | investigate | low | Critical |
2,624,650,114 | PowerToys | Issue with Power Toys Run | ### Microsoft PowerToys version
v0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Before the update, PowerToys Run was working with both Alt keys -- "left Alt key + space" and "right Alt key + space" -- but after the update it only works with the left Alt key. I would like you to solve this issue or bug. It was more convenient and efficient while using the PC with one hand.
PowerToys Run is a very useful feature and it makes it very easy to search for files and applications.
### ✔️ Expected Behavior
PowerToys Run should activate with both "left Alt key + space" and "right Alt key + space", as it did before the update.
### ❌ Actual Behavior
After the update, PowerToys Run only activates with "left Alt key + space"; the right Alt key no longer works.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,624,695,983 | rust | impl Trait in return position implicitly captures non-`'static` type parameter despite `+ 'static` | ### Code
```rust
trait MyTrait {}
struct MyStruct;
impl MyTrait for MyStruct {}
trait DeserTrait<'de> {}
struct DeserStruct;
impl<'de> DeserTrait<'de> for &'de DeserStruct {}
pub fn testfn<'de, D: DeserTrait<'de>>(_deserializer: D) -> impl MyTrait + 'static {
MyStruct
}
fn test() -> impl Send {
let deserializer = DeserStruct;
let graph = testfn(&deserializer);
graph
}
```
([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=90ce6dd9d1e329daf28feda317ba93ae))
### Current output
```
error[E0597]: `deserializer` does not live long enough
--> src/lib.rs:15:24
|
14 | let deserializer = DeserStruct;
| ------------ binding `deserializer` declared here
15 | let graph = testfn(&deserializer);
| -------^^^^^^^^^^^^^-
| | |
| | borrowed value does not live long enough
| argument requires that `deserializer` is borrowed for `'static`
16 | graph
17 | }
| - `deserializer` dropped here while still borrowed
```
### Desired output
I'm not sure, but adding `+ use<>` to the function's return type produces an error that explains the actual issue: the `impl Trait` implicitly captured the type parameter.
```
error: `impl Trait` must mention all type parameters in scope in `use<...>`
--> src/lib.rs:9:61
|
9 | pub fn testfn<'de, D: DeserTrait<'de>>(_deserializer: D) -> impl MyTrait + use<> + 'static {
| - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| type parameter is implicitly captured by this `impl Trait`
|
= note: currently, all type parameters are required to be mentioned in the precise captures list
```
### Rationale and extra context
It's impossible to debug (or google) otherwise, and I only figured it out because I remembered `+ use<>` from the stabilization blog post.
### Other cases
_No response_
### Rust Version
rustc 1.82.0
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,624,752,506 | pytorch | `torch.package` warning -- `TypedStorage` is deprecated | ### 🐛 Describe the bug
With torch `2.5.1+cu124`, just using `package_importer` and `package_exporter` emits the warnings below.
```
/home/cwtan/anaconda3/envs/test_env/lib/python3.11/site-packages/torch/package/package_importer.py:262: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
dtype = storage_type.dtype
```
and
```
/home/cwtan/anaconda3/envs/test_env/lib/python3.11/site-packages/torch/package/package_exporter.py:921: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage_type_str = obj.pickle_storage_type()
```
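Until the package code migrates off the deprecated storage API, a narrowly scoped warning filter is a common stopgap (a sketch only; it hides the message rather than fixing the underlying call, and unrelated warnings still get through):

```python
import warnings

def suppress_typedstorage_warning():
    # Match only the TypedStorage deprecation text, not all UserWarnings.
    warnings.filterwarnings("ignore", message="TypedStorage is deprecated")

# Demonstration that the filter is narrow:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    suppress_typedstorage_warning()
    warnings.warn("TypedStorage is deprecated. It will be removed...", UserWarning)
    warnings.warn("some unrelated warning", UserWarning)
# `caught` now holds only the unrelated warning.
```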
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A1000 6GB Laptop GPU
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.1
[pip3] torchmetrics==1.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchmetrics 1.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
``` | oncall: package/deploy | low | Critical |
2,624,761,698 | pytorch | CUDNN sdp attention causes loss explosion | ### 🐛 Describe the bug
We observed a NaN regression with 2.5.0, and traced it to CUDNN attention.
2.5.0:

After adding `torch.backends.cuda.enable_cudnn_sdp(False)`, it no longer explodes:

2.4.0 also survives for 300k steps.
Further details:
* we use aspect ratio bucketing, so we will be exercising a variety of shapes and strides
* we don’t use torch compile
* float16 precision
I understand that this bug report is non-actionable without a repro.
This issue will be partially mitigated by https://github.com/pytorch/pytorch/pull/138522, which makes CUDNN attention opt-in.
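For reference, the workaround can be applied programmatically at startup; a hedged sketch (assumes the `torch.backends.cuda` flag API from recent PyTorch, guarded so it is a no-op where PyTorch or the API is unavailable):

```python
def disable_cudnn_sdp() -> bool:
    """Best-effort: turn the cuDNN SDPA backend off. Returns True on success."""
    try:
        import torch  # guarded: no-op when PyTorch is absent
        torch.backends.cuda.enable_cudnn_sdp(False)  # the workaround above
        return not torch.backends.cuda.cudnn_sdp_enabled()
    except (ImportError, AttributeError):
        return False  # PyTorch missing, or too old to have this API

disable_cudnn_sdp()
```

The flash/efficient/math backends remain enabled, so `scaled_dot_product_attention` still dispatches normally.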
### Versions
```
PyTorch version: 2.5.0-rc10
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.5.0rc10
[pip3] torchaudio==2.5.0rc4
[pip3] torchdiffeq==0.2.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0rc6
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @csarofeen @ptrblck @xwang233 @eqy @drisspg @mikaylagawarecki @jbschlosser @bhosmer @cpuhrsch @erichan1 | high priority,module: cudnn,module: cuda,triaged,module: sdpa | medium | Critical |
2,624,765,296 | TypeScript | Strange 2488 error involving overload resolution and this parameter type | ### 🔎 Search Terms
yield ts2488 never iterator
### 🕗 Version & Regression Information
- This changed between versions 3.5.1 and 3.6.2
every-ts bisect gives: e8bf9584aa74aabfecb51a02edb13e3657508274 is the first bad commit
### ⏯ Playground Link
https://www.typescriptlang.org/play/?module=1&ts=5.7.0-dev.20241030#code/CYUwxgNghgTiAEAzArgOzAFwJYHtVNQB4AVAPgAoA3KCZEALnnIEp4BeU+Y5xgbyZ5d4AXwDcAKFCRYCFOmx4CJCtVoMuggJIYQMKACMIIQmgDWqHAHdUpCeJAAPAA44YGeNIDOn+AEF4vOLwwfBQ7ATkcpi4qABUTBgAFliejL6sgSFZ8ACeWCAQwPGIqOQAjMwSWcKV4kEh+uElkWjRePHkSSlpGfXZHnie7joAtk5NpRVV-bn5hfGjTtPBNRLC4kA
### 💻 Code
```ts
declare function fn<T>(value: () => T): { (): T };
declare function fn<T>(value: T): Iterable<unknown>;
export class A {
a = fn(function* (this: A) {
yield* fn(1); // error
});
b = fn(function* (this: A) {
const temp = fn(1); // ok
yield* temp;
});
}
```
### 🙁 Actual behavior
```text
src/error.ts:6:16 - error TS2488: Type 'never' must have a '[Symbol.iterator]()' method that returns an iterator.
6 yield* fn(1);
~~~~~
```
### 🙂 Expected behavior
No error, as in `A.b`
### Additional information about the issue
_No response_ | Needs Investigation | low | Critical |
2,624,856,285 | TypeScript | auto-imports (import suggestions) randomly stopped working | Type: <b>Bug</b>
A little while ago (like over a week now), I was working as I always do, and all of the auto-import suggestions for typescript just stopped working completely. I hadn't updated _anything_ manually (my os, vscode, extensions, node modules, etc). It is possible something auto-updated somewhere behind the scenes, but I haven't been able to track it down.
Essentially no auto-imports work at all in my project anymore. I can't import from relative pathed files, workspace files, or node modules. All typescript imports, once imported work fine and are recognized, but any of the "quick fix" suggestions just don't work at all, both as I type or hovering over a missing reference. The quick fix popup does show, but the only option is my tabnine extension.
What is strange is the imports have worked _once_ since they stopped. I did nothing out of the ordinary and they just randomly started working again. But then as quickly as they appeared, they disappeared.
I've tried all of the following:
- changing ts versions
- changing between vscode and project ts references
- changing my pnpm version
- turning on and off every one of my extensions
- changing memory and storage size limits on wsl
- trying to initialize the project in various ways (docker first, complete rebuilds, etc)
- nearly every possible setting related to ts and tsserver in the vscode settings
- nearly every possible intellisense setting I could find that might be related
- restarting vscode and ts server in various ways
- changing any settings related to docker, workspaces, or anything else that might affect my ts path lookups
The only other piece of information I have is this error in the ts logs, but I've tried tracking it down, even manually modifying the ts server code and logging things out for a couple hours, with no obvious things that stand out on why it is failing. This error appears when I try to hover over a known missing reference to have it show the imports, but then it just doesn't show them, presumably due to this error (some path parts redacted for privacy reasons).
```
Info 4708 [11:56:52.468] request:
{
"seq": 19,
"type": "request",
"command": "quickinfo",
"arguments": {
"file": "/<projectPath>/apps/main-app/src/client/components/MyComponent.tsx",
"line": 42,
"offset": 22
}
}
Info 4712 [11:56:52.468] request:
{
"seq": 20,
"type": "request",
"command": "getCodeFixes",
"arguments": {
"file": "/<projectPath>/apps/main-app/src/client/components/MyComponent.tsx",
"startLine": 42,
"startOffset": 18,
"endLine": 42,
"endOffset": 25,
"errorCodes": [
2304
]
}
}
Err 4713 [11:56:52.550] Exception on executing command {
"seq": 20,
"type": "request",
"command": "getCodeFixes",
"arguments": {
"file": "/<projectPath>/apps/main-app/src/client/components/MyComponent.tsx",
"startLine": 42,
"startOffset": 18,
"endLine": 42,
"endOffset": 25,
"errorCodes": [
2304
],
"startPosition": 1103,
"endPosition": 1110
}
}:
Debug Failure.
Error: Debug Failure.
at Object.getImmediateAliasedSymbol (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:74339:22)
at getDefaultExportInfoWorker (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:135913:29)
at getDefaultExportInfoWorker (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:135915:14)
at getDefaultLikeExportInfo (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:135891:16)
at /<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:152437:25
at /<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:135751:119
at forEachExternalModule (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:135812:7)
at forEachExternalModuleToImportFrom (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:135751:3)
at getExportInfos (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:152433:3)
at /<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:152366:24
at flatMap (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:2609:17)
at getFixesInfoForNonUMDImport (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:152360:10)
at getFixInfos (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:152206:12)
at Object.getCodeActions (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:151577:18)
at /<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:148588:46
at flatMap (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:2609:17)
at Object.getFixes (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:148588:10)
at /<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:146859:33
at flatMap (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:2609:17)
at Object.getCodeFixesAtPosition (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:146857:12)
at IpcIOSession.getCodeFixes (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:187265:50)
at getCodeFixes (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:185371:43)
at /<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:187569:69
at IpcIOSession.executeWithRequestId (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:187561:14)
at IpcIOSession.executeCommand (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:187569:29)
at IpcIOSession.onMessage (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:187611:51)
at process.<anonymous> (/<projectPath>/node_modules/.pnpm/typescript@5.4.5/node_modules/typescript/lib/tsserver.js:189220:14)
at process.emit (node:events:518:28)
at emit (node:internal/child_process:951:14)
at process.processTicksAndRejections (node:internal/process/task_queues:83:21)
```
From what I can conclude, this _seems_ like either a timing issue within vscode starting things up, or some internal vscode or extension thing somewhere broke behind the scenes. I'm on a wsl docker setup, so there is a lot of linking going on behind the scenes, but given I hadn't changed _anything_ when this started happening, I'm at a complete loss on what to even look for further at this point.
Oh, and unfortunately, I haven't been able to reproduce this on any other machine, so I'm a bit stuck on that front. I'm happy to try modifying any settings suggested, I just have no idea what else to look for at this point.
VS Code version: Code 1.90.0 (89de5a8d4d6205e5b11647eb6a74844ca23d2573, 2024-06-04T19:33:54.889Z)
OS version: Windows_NT x64 10.0.22635
Modes:
Remote OS version: Linux x64 5.15.133.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 x 2808)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled|
|Load (avg)|undefined|
|Memory (System)|31.89GB (6.29GB free)|
|Process Argv|--crash-reporter-id 10d747f9-1beb-40be-859f-a1379ddf2d04|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu|
|OS|Linux x64 5.15.133.1-microsoft-standard-WSL2|
|CPUs|Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 x 0)|
|Memory (System)|17.57GB (11.70GB free)|
|VM|0%|
</details><details><summary>Extensions (32)</summary>
Extension|Author (truncated)|Version
---|---|---
remote-containers|ms-|0.369.0
remote-ssh|ms-|0.110.1
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.2
vscode-remote-extensionpack|ms-|0.25.0
atom-keybindings|ms-|3.3.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.1
reopenclosedtab|uyi|1.1.0
material-theme|zhu|3.17.2
vscode-zipfs|arc|3.0.0
vscode-tailwindcss|bra|0.10.5
vscode-better-align|cho|1.4.2
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.9.0
vscode-css-modules|cli|0.5.1
vscode-eslint|dba|2.4.4
githistory|don|0.6.20
gitlens|eam|15.1.0
vscode-search-open-all-results|fab|2.0.2
vscode-pull-request-github|Git|0.88.1
vscode-duplicate|mrm|1.2.1
vscode-docker|ms-|1.29.1
atom-keybindings|ms-|3.3.0
vsliveshare|ms-|1.0.5918
color-highlight|nau|2.8.0
gremlins|nho|0.26.0
vscode-stylelint|sty|1.4.0
tabnine-vscode|Tab|3.108.0
reopenclosedtab|uyi|1.1.0
vscode-fold-level|vik|0.0.14
vscode-svg-previewer|vit|0.7.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
9b8hh234:30694863
pythongtdpath:30769146
welcomedialog:30910333
pythonidxpt:30866567
pythonnoceb:30805159
asynctok:30898717
pythontestfixt:30902429
pythonregdiag2:30936856
pythonmypyd1:30879173
pythoncet0:30885854
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
jchc7451:31067544
chatpanelc:31048052
dsvsc021:30996838
9c06g630:31013171
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
pythonprt:31056678
dwnewjupyter:31046869
26j00206:31048877
```
</details>
<!-- generated by issue reporter --> | Needs More Info | low | Critical |
2,624,860,646 | pytorch | [ONNX][RFC] Migrate torchlib from onnxscript | Now that the torchlib translation library is close to stable in onnxscript, it is ready to be migrated into PyTorch so that
1. It can evolve with aten operators without having to worry about backward compatibility for different PyTorch versions
2. We can use newer opsets by default, again without having to worry about BC. The proposal is to upgrade to opset 22 for torch 2.6 or torch 2.7. This makes it easier for us and users to leverage new dtypes and the corrected GroupNormalization.
## Considerations
1. We should still allow implementations in onnxscript to override those in PyTorch so we can continue to add new implementations after a torch version is released. This will be part of the stable framework APIs in onnxscript.
2. We can update the default opset version for each torch version.
3. The opinfo unit tests should be migrated as well. | module: onnx,triaged | low | Minor |
2,624,864,266 | three.js | Potential security issues in GitHub Actions workflows | # Potential security issues in GitHub Actions workflows
Hi! We are a research team from Radboud University in the Netherlands, currently working on security vulnerability analysis on GitHub Actions workflows. During our study, we found some potential issues in the workflow files of your repository and would like to bring them to your attention to help enhance security.
## Detailed Findings:
Please find the detected potential security issues below:
## Issues Analysis:
- **No job permissions specified**
This is a risk because you are not setting `permissions` key in your job to modify the default permissions granted to the `GITHUB_TOKEN` for this job, limiting access to only the minimum required level. We recommend adhering to the principle of least privilege by setting `permissions` key to declare only necessary permissions at both workflow and job levels. Check [details](https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#permissions).
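For concreteness, a minimal example of the recommended least-privilege declaration (the job name and steps are illustrative):

```yaml
# Workflow-level default: restrict the GITHUB_TOKEN grant for every job.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    # Job-level override: declare only what this specific job needs.
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
```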
## Feedback Request:
We greatly appreciate your attention to this matter. If you are willing to provide feedback, please consider completing a brief anonymous survey (google form): [Developer Perspectives on GitHub Actions workflow Security](https://forms.gle/McpgwRaqLRkKrGeW6), which will take around 3 minutes. Your feedback is invaluable in helping us gain insights on how to improve the security of the GitHub ecosystem.
**Thank you!** | Suggestion | low | Minor |
2,624,889,496 | go | cmd/go,x/telemetry/config: collect the set of OS versions on which the go command is run | When deciding to drop support for an OS version (e.g. #69839) it is useful to know how many people still use it. We can collect this information through telemetry. I propose that we do.
@findleyr | NeedsInvestigation,GoCommand,Telemetry-Proposal,Telemetry-Accepted | low | Major |
2,624,935,509 | opencv | MatchTemplate with mask throws for oddly specific template image sizes / method combination | ### System Information
OpenCV version 4.8.1
OS: Windows 11
python version: 3.11.6
### Detailed description
When using the masked version of matchTemplate, having ALL of the following conditions at the same time
- providing a mask
- using method cv.TM_CCOEFF or cv.TM_CCOEFF_NORM
- the template size is exactly (im_height - 3, im_width)
causes the following exception:
````
cv2.error: OpenCV(4.8.1) C:\Work\lib_sources\x64\opencv\modules\core\src\arithm.cpp:674: error: (-5:Bad argument) When the input arrays in add/subtract/multiply/divide functions have different types, the output array type must be explicitly specified in function 'cv::arithm_op'
````
This seems unrelated to the image/template/mask content (tested with random images, see below, but it also occurred in an actual template-matching scenario where other metric/image-size combinations work fine).
Tested with various image/template sizes, all available methods, and with/without a mask; see the code below. Also tested with np.uint8/np.float32 dtypes (not shown).
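For quick triage, the reported trigger can be restated as a predicate over the sweep parameters (a hedged summary of the observations in this report, not derived from OpenCV internals):

```python
def triggers_reported_bug(im_hw, templ_hw, method, has_mask):
    """True iff the parameter combination hits the reported exception."""
    ih, iw = im_hw
    th, tw = templ_hw
    return (
        has_mask
        and method in ("TM_CCOEFF", "TM_CCOEFF_NORMED")
        and (th, tw) == (ih - 3, iw)  # template exactly 3 rows shorter, same width
    )

# The (dh, dw) = (-3, 0) cases fail with a mask; everything else passes.
assert triggers_reported_bug((10, 10), (7, 10), "TM_CCOEFF", True)
assert not triggers_reported_bug((10, 10), (7, 10), "TM_CCOEFF", False)
assert not triggers_reported_bug((10, 10), (7, 10), "TM_SQDIFF", True)
assert not triggers_reported_bug((10, 10), (8, 10), "TM_CCOEFF", True)
```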
### Steps to reproduce
````
import itertools
from collections import Counter
from enum import Enum, member
from typing import NamedTuple
import cv2 as cv
import numpy as np
class TemplateMatchModes(Enum):
TM_SQDIFF = cv.TM_SQDIFF
TM_SQDIFF_NORMED = cv.TM_SQDIFF_NORMED
TM_CCORR = cv.TM_CCORR
TM_CCORR_NORMED = cv.TM_CCORR_NORMED
TM_CCOEFF = cv.TM_CCOEFF
TM_CCOEFF_NORMED = cv.TM_CCOEFF_NORMED
def _random_im(shape: tuple[int, ...]) -> np.ndarray:
return np.random.randint(low=0, high=255, size=shape, dtype=np.uint8)
def _random_mask_like(im: np.ndarray) -> np.ndarray:
return np.random.randint(low=0, high=2, size=im.shape, dtype=np.uint8)  # high is exclusive, so high=2 gives a 0/1 mask
class MaskGenerator(Enum):
ONES = member(np.ones_like)
RANDOM = member(_random_mask_like)
NONE = member(lambda x: None)
class FailureCause(NamedTuple):
delta_size: tuple[int, int]
mask_gen: str
method: str
if __name__ == "__main__":
failure_counter = Counter()
n_image_sizes = 0
for w, h in itertools.product(range(10, 100, 10), repeat=2):
n_image_sizes += 1
im_size = (h, w)
im = _random_im(im_size)
for dw, dh in itertools.product(range(-5, 1, 1), repeat=2):
template_size = (h + dh, w + dw)
template = _random_im(template_size)
for mask_generator in MaskGenerator:
mask = mask_generator.value(template)
for method in TemplateMatchModes:
try:
res = cv.matchTemplate(
image=im,
templ=template,
method=method.value,
mask=mask,
)
except Exception:
failure_cause = FailureCause(
delta_size=(dh, dw), mask_gen=mask_generator.name, method=method.name
)
failure_counter[failure_cause] += 1
print(f"{n_image_sizes=}")
for item in failure_counter.items():
print(item)
````
Results in the following output:
```
n_image_sizes=81
(FailureCause(delta_size=(-3, 0), mask_gen='ONES', method='TM_CCOEFF'), 81)
(FailureCause(delta_size=(-3, 0), mask_gen='ONES', method='TM_CCOEFF_NORMED'), 81)
(FailureCause(delta_size=(-3, 0), mask_gen='RANDOM', method='TM_CCOEFF'), 81)
(FailureCause(delta_size=(-3, 0), mask_gen='RANDOM', method='TM_CCOEFF_NORMED'), 81)
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: imgproc | low | Critical |
2,624,945,810 | go | iter:iter: TestPull2/3 failures | ```
#!watchflakes
default <- pkg == "iter:iter" && test == "TestPull2/3"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735173623505772561)):
=== RUN TestPull2/3
pull_test.go:108: have -1 extra goroutines, want 0
pull_test.go:108: have -1 extra goroutines, want 0
pull_test.go:110: have -1 extra goroutines, want 0
pull_test.go:115: have -1 extra goroutines, want 0
--- FAIL: TestPull2/3 (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,624,954,598 | rust | Document `compile_fail,E1234` syntax for marking compile_fail tests as failing with a particular error |
It's possible to mark tests as failing with a particular error. This largely seems to be used [within rustc itself](https://github.com/search?q=repo%3Arust-lang%2Frust%20compile_fail%2CE&type=code).
`````rust
/// ```compile_fail,E0425
/// let x = x;
/// ```
`````
Ought we document this in the rustdoc book somewhere? It currently [does not appear to be](https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html). | T-rustdoc,A-docs,A-doctests | low | Critical |
2,624,957,676 | vscode | vscode.dev asking for Settings Sync sign in again after enabling Cloud Changes | From @sandy081 originally
- Opened a new insiders.vscode.dev window enabled settings sync
- Opened a remote github repo in that window and turned on cloud changes feature
- Opened new insiders.vscode.dev window to connect to tunnels using MSFT account.
🐛 I was asked to sign in to sync settings
- In the first window, I connected to azure repo
🐛 I was asked to sign in to sync settings
I can repro this. The interesting thing is that it seems like we do return a valid Settings Sync token... so I wonder if this is maybe something similar to https://github.com/microsoft/vscode/issues/229456
| bug,settings-sync,vscode.dev,microsoft-authentication | low | Minor |
2,624,962,371 | next.js | Page with ISR (generateStaticParams() and dynamicParams=true) is taking too long for not generated content | ### Link to the code that reproduces this issue
https://github.com/vojtechmares/next-isr-bug
### To Reproduce
1. Deploy Next.js in Docker to Kubernetes
2. Use ISR route with generateStaticParams() to pregenerate paths
3. Use `dynamicParams=true`
4. Try navigating to a page that has not been pregenerated (504 on Ingress-NGINX)
- the Next.js cache (`.next/cache`) is a Kubernetes "emptyDir" ephemeral volume (not persistent across deployments or between instances)
My `Dockerfile`:
```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS base
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
RUN corepack enable pnpm
COPY package.json pnpm-lock.yaml* ./
RUN pnpm install
FROM base AS builder
WORKDIR /app
RUN corepack enable pnpm
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ARG CMS_API_URL="https://cms.mareshq.com/api"
ARG CMS_API_TOKEN
# COPY .env.production.sample .env.production
RUN pnpm run build
FROM base AS runtime
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/images ./images
COPY --from=builder --chown=nextjs:nodejs /app/content ./content
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME=0.0.0.0
CMD ["node", "./server.js"]
```
### Current vs. Expected behavior
I expect the page to be generated on demand and saved into the cache for later use; instead, the request takes too long, forcing ingress-nginx to return 504 Gateway Timeout.
This is an issue only on Kubernetes; local development works as expected.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 22.8.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.12.2
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 4.9.5
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
_No response_ | bug,Runtime | low | Critical |
2,624,966,700 | deno | Ability to require a file in the module graph without permissions | Some work on this here, but not exactly done yet: https://github.com/denoland/deno/pull/26558
Maybe that involves moving stuff like `Module._findPath` to Rust. | feat,permissions | low | Minor |
2,624,978,235 | pytorch | [RFC][Pipelining] RNG state communication, avoid seed checkpoint | ### Background
During model initialization, each PP stage initializes its own portion of the model. This is preferable over initializing the model all in one place and then transferring it to each stage, due to the high memory requirements of initializing an entire large model in one place.
Ignoring random states, each PP stage would start from the same RNG seed and each model chunk could have similar or even identical weights. This would likely lead to numerical issues during training. It also makes loss-curves of PP training less comparable with loss-curves of non-PP training, which lowers confidence in PP training and adds obstacles to debugging.
Currently, torchtitan implements a workaround where the whole model is initialized on CPU offline in one process, saved to disk, and then loaded by each pipeline stage. (Loading does not require loading the whole model into RAM, only the portion used locally.) This results in consistent model initialization, but at a runtime and UX cost of generating and using the seed checkpoint.
### Proposal
Instead, we can manage the RNG state automatically inside torch.pipelining, to avoid identical weights.
**Option 1. Use different seeds per stage**
We could simply re-seed the RNG on each PP stage, without worrying about actual initialization parity with non-PP training.
This option is potentially faster since it allows each stage to initialize in parallel. It should also guarantee 'safe' initializations, since each stage would have no chance of duplicate or similar init weights. However, this initialization would not 'match' the initialization values for non-PP training, leading to less comparability between loss curves.
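A minimal stdlib sketch of Option 1, with Python's `random` standing in for torch's RNG (the seed-derivation scheme is illustrative, not the proposed implementation):

```python
import random

def stage_seed(base_seed: int, stage_index: int) -> int:
    # Illustrative derivation: fold the stage index into the base seed so
    # each pipeline stage draws from its own distinct stream.
    return (base_seed * 1_000_003 + stage_index) % (2**31)

# Each "stage" initializes its chunk independently; no cross-rank communication.
streams = [
    [random.Random(stage_seed(1234, s)).random() for _ in range(4)]
    for s in range(4)
]
assert len({tuple(s) for s in streams}) == 4  # no two stages share init values
```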
**Option 2. Sequentially initialize**
After stage 0 initializes, it could summarize its own RNG states and send them to stage 1, which in turn loads the states and initializes, then saves new states and sends them to stage 2, etc. This results in serializing initialization, which would lead to an observable slowdown at startup time, but also matches perfectly (bitwise) with the initialization values that would be observed in a non-PP training.
This option requires more complexity:
- virtual stages mean more than one round of communication across PP ranks is needed
- V-schedules mean that neighboring stages sometimes exist on the same rank, so seed communication should be skipped in those cases
Note: seed save/restore can be implemented like this: https://gist.github.com/wconstab/6bdea055eff9a7904fdae595c6cdac6e
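The sequential handoff can be sketched with stdlib RNG state save/restore standing in for the torch states in the linked gist; the point is that stage-by-stage initialization with state handoff reproduces single-process initialization bitwise:

```python
import random

def init_single_process(seed, stage_sizes):
    # Reference: one process initializes every stage's weights in order.
    rng = random.Random(seed)
    return [[rng.random() for _ in range(n)] for n in stage_sizes]

def init_stage(state, n):
    # One pipeline stage: restore the incoming RNG state, initialize its
    # chunk, and return the new state to send to the next stage.
    rng = random.Random()
    rng.setstate(state)
    weights = [rng.random() for _ in range(n)]
    return weights, rng.getstate()

seed, stage_sizes = 1234, [4, 3, 5]
state = random.Random(seed).getstate()  # stage 0's starting state
stages = []
for n in stage_sizes:
    weights, state = init_stage(state, n)  # "send" the state to the next stage
    stages.append(weights)

assert stages == init_single_process(seed, stage_sizes)  # bitwise match
```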
It might be worth implementing both options and experimenting with the tradeoffs. I suspect most users would prefer option 1 if it's significantly faster and we can prove it leads to equally accurate training.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o @tianyu-l | oncall: distributed,triaged | low | Critical |
2,624,981,668 | angular | Ability to set environment/configuration values at runtime instead of buildtime | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
Angular should provide a common and supported mechanism for the setting of configuration values at runtime (or at least start up time).
Currently, there are multiple implementations / workarounds for this problem.
**Can we get well-supported mechanism maintained by the Angular Team?**
[Multiple releases using the same, but configurable, Angular Standalone Application build](https://timdeschryver.dev/blog/multiple-releases-using-the-same-but-configurable-angular-standalone-application-build#)
[Compile-time vs. Runtime configuration of your Angular App](https://juri.dev/blog/2018/01/ng-app-runtime-config/)
[Angular, Docker & Nginx: Build once, Deploy anywhere.](https://medium.com/@fabiozuin/build-once-deploy-anywhere-with-angular-17-bf477c49668f)
https://www.npmjs.com/package/@ngx-env/builder
https://www.npmjs.com/package/angular-runtime-config
https://www.npmjs.com/package/@igorissen/angular-env-builder
There was a request:
https://github.com/angular/angular/issues/52994
ability to use/change environment files after build
to allow the setting of environment-level values at runtime instead of build time. I believe that was mistakenly marked as a duplicate of:
https://github.com/angular/angular-cli/issues/4318
Request: Method to pass environment variables during build vs file.
### Proposed solution
Include one of the existing user-created implementations described above in Angular with suitable documentation.
Or, at least, pick one of the above and link to it from the Angular documentation.
### Alternatives considered
Use one of the existing user-created implementations.
| feature,area: core,core: bootstrap,feature: votes required | medium | Major |
2,624,998,518 | ollama | Reporting for not working models, uploaded by users | Hi, can you add in your library on the side, option for reporting not working models? For example, one of them is this model:
https://ollama.com/leeplenty/lumimaid-v0.2:12b
It just spouts nonsense. | feature request,ollama.com | low | Minor |
2,625,001,311 | pytorch | torch.export torchaudio kaldi module | ### 🚀 The feature, motivation and pitch
# Preface
Hello! I am trying to export the codes for audio preprocessing on `kaldi` module in torchaudio.
I have few feature requests:
- Are there any plans for `torch.map` to support CUDA tensors?
- Is there a plan to make `ta_kaldi.fbank` batch-compatible, or `torch.export`'able in general?
### Alternatives
_No response_
### Additional context
I was trying to use `torch.export` to extract the AOT graph of the computation, as following:
```python
import torch
import torchaudio.compliance.kaldi as ta_kaldi
def preprocess_forloop(
source: torch.Tensor,
fbank_mean: float = 15.41663, #-12
fbank_std: float = 6.55582, #4
) -> torch.Tensor:
if len(source.shape)!=2: # get sequence data, B x L
return source.transpose(1,2)
fbanks = []
for waveform in source:
waveform = waveform.unsqueeze(0) * 2 ** 15
fbank = ta_kaldi.fbank(waveform, num_mel_bins=128, sample_frequency=48000, frame_length=25, frame_shift=10)
fbanks.append(fbank)
fbank = torch.stack(fbanks, dim=0)
fbank = (fbank - fbank_mean) / (2 * fbank_std)
return fbank
class AudioPreprocessModel(torch.nn.Module):
def forward(self, source: torch.Tensor) -> torch.Tensor:
# source: [B, 720000]
# return: [B, 1498, 128]
return preprocess_forloop(source)
inp = torch.zeros(2, 720000)
bs = torch.export.Dim("bs")
ep = torch.export.export(AudioPreprocessModel(), (inp,), dynamic_shapes={"source": {0: bs}})
```
This code snippet is from BEATs, a popular audio foundation model. The preprocess_forloop function is adapted from [preprocess](https://github.com/microsoft/unilm/blob/master/beats/BEATs.py#L118-L131) in the BEATs repo.
The code uses `torchaudio.compliance.kaldi.fbank` to extract Mel spectrogram from the raw audio signal
The `fbank` function accepts `[n_channels, T]` tensor and generates `[T_2, mel_bins]` tensor.
# Issues
I tried to `torch.export` this preprocessing function, but encountered multiple obstacles.
🟡: able to get around with patching
🔴: weren’t able to solve
## 1. 🔴 for loop inside the function
By its very nature, `fbank` is not batch-compatible, meaning we have to run a for loop over the batch to extract spectrograms.
The presence of the for loop makes `torch.export` challenging, since TorchDynamo graph capturing attempts to unroll the loop, whose trip count varies with the batch size. Graph capturing and the subsequent `torch.export` fail on this code.
We can use [torch.map](https://pytorch.org/docs/stable/generated/exportdb/torch.map.html) to alleviate this issue, something like the following.
```python
from functorch.experimental.control_flow import map as torch_map
def preprocess_torchmap(
source: torch.Tensor,
fbank_mean: float = 15.41663, #-12
fbank_std: float = 6.55582, #4
) -> torch.Tensor:
def fbank_body(waveform: torch.Tensor) -> torch.Tensor:
waveform = waveform.unsqueeze(0) * 2 ** 15
# [1, 720000] -> [1498, 128]
fb = ta_kaldi.fbank(waveform, num_mel_bins=128, sample_frequency=48000, frame_length=25, frame_shift=10)
return fb
fbank = torch_map(fbank_body, source)
fbank = (fbank - fbank_mean) / (2 * fbank_std)
return fbank
```
However, `torch.map` doesn't support CUDA tensors, so I ultimately wasn't able to export the function.
## 2. 🟡 Intermediate tensor generation
The [`_get_epsilon`](https://github.com/pytorch/audio/blob/main/src/torchaudio/compliance/kaldi.py#L35-L36) function failed to export, as it attempted to make an intermediate tensor whose value is unknown ahead of time.
Resolved with the `torch._dynamo.assume_constant_result` decorator, via the following monkey patching:
```python
ta_kaldi._get_epsilon = torch._dynamo.assume_constant_result(ta_kaldi._get_epsilon)
```
## 3. 🟡 Symbolic int does not support python native functions
The [`_next_power_of_2`](https://github.com/pytorch/audio/blob/main/src/torchaudio/compliance/kaldi.py#L39-L41) function failed to export, because the symbolic representation of a Python `int` doesn't support the `.bit_length()` operation.
Resolved with the `math.log2` function, albeit with a less efficient computation.
```python
def _next_power_of_2(x: int) -> int:
r"""Returns the smallest power of 2 that is greater than x"""
return 1 if x == 0 else 2 ** math.ceil(math.log2(x))
ta_kaldi._next_power_of_2 = _next_power_of_2
```
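As a sanity check (my own verification, not from torchaudio), the `math.log2`-based replacement agrees with a `bit_length` formulation of "smallest power of 2 ≥ x" over a generous range of window sizes:

```python
import math

def next_pow2_bit_length(x: int) -> int:
    # bit_length formulation that torch.export cannot trace
    return 1 if x == 0 else 2 ** (x - 1).bit_length()

def next_pow2_log2(x: int) -> int:
    # export-friendly replacement from the patch above
    return 1 if x == 0 else 2 ** math.ceil(math.log2(x))

# the two agree for every realistic frame/window length
assert all(next_pow2_bit_length(n) == next_pow2_log2(n) for n in range(5000))
```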
## 4. 🟡 `torch.tensor` generation within torch map
Creating a tensor with `torch.tensor` inside the `torch.map` body causes a failure at the graph verification phase.
Attached a repro code [gist](https://gist.github.com/swmoon00/536038b1291a1b1970614757a903edf0) snippet.
I substituted the `torch.tensor` creation with `torch.ones` in the [`_get_log_energy`](https://github.com/pytorch/audio/blob/main/src/torchaudio/compliance/kaldi.py#L116-L122) function, which was the source of the failure.
```python
def _get_log_energy(strided_input: torch.Tensor, epsilon: torch.Tensor, energy_floor: float) -> torch.Tensor:
r"""Returns the log energy of size (m) for a strided_input (m,*)"""
device, dtype = strided_input.device, strided_input.dtype
log_energy = torch.max(strided_input.pow(2).sum(1), epsilon).log() # size (m)
if energy_floor == 0.0:
return log_energy
log_energy_flow_tensor = torch.ones(1, device=device, dtype=dtype)[0] * math.log(energy_floor)
return torch.max(log_energy, log_energy_flow_tensor)
# return torch.max(log_energy, torch.tensor(math.log(energy_floor), device=device, dtype=dtype))
ta_kaldi._get_log_energy = _get_log_energy
```
## 5. 🔴 `torch.fft.rfft` does not support `bfloat16`
This is related to complex number support in torch library functions, and is not straightforward to fix.
Created a separate [issue](https://github.com/pytorch/pytorch/issues/139313) on this.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,enhancement,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,625,002,257 | excalidraw | Arrow navigation through flow chart is not intuitive | While the next flowchart node is created where the user points with the arrow, navigating the flowchart tree does not respect this, which is not very intuitive.
Instead, consider navigating to the closest node in the direction of the pressed arrow (i.e. based on the distance between node centers).
Note that this also might not be constrained only to the flowchart itself, but could enable keyboard nav for any elements in the scene.
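A rough TypeScript sketch of the proposed selection rule (names are hypothetical, not Excalidraw's actual API):

```typescript
type Point = { x: number; y: number };

// Pick the candidate center nearest to `current` that lies in the
// direction of the pressed arrow key; null if none qualifies.
function nextInDirection(
  current: Point,
  candidates: Point[],
  dir: "left" | "right" | "up" | "down",
): Point | null {
  const inDir = (p: Point) =>
    dir === "left"  ? p.x < current.x :
    dir === "right" ? p.x > current.x :
    dir === "up"    ? p.y < current.y :
                      p.y > current.y;
  const dist = (p: Point) => Math.hypot(p.x - current.x, p.y - current.y);
  const eligible = candidates.filter(inDir);
  return eligible.length === 0
    ? null
    : eligible.reduce((a, b) => (dist(a) <= dist(b) ? a : b));
}
```

Pressing → from the current node would then land on the nearest node to its right, regardless of how the nodes were created.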
| UX/UI,keyboard | low | Minor |
2,625,015,313 | pytorch | Plan to support “discrete” dynamic dimension on torch.export | ### 🚀 The feature, motivation and pitch
Is there any plan for the `torch.export` team to support the set-based dynamism in addition to range-based dynamism?
Below is a small repro of the discrete `Dim` failure mode. `torch._check` is supposed to work with discrete sets, but it fails.
```python
import torch
class Model(torch.nn.Module):
def forward(self, x):
# torch._check(2 <= x.shape[0] <= 3) # this works
torch._check(x.shape[0] in {2, 3})
return x
torch.export.export(
Model(),
(torch.zeros(2),),
dynamic_shapes={
"x": {
0: torch.export.Dim("discrete", min=2, max=3)
}
}
)
```
### Alternatives
_No response_
### Additional context
The conventional use of a dynamic dimension with `torch.export` is a continuous range of values, like `Dim(min=3, max=12)`, which is suited for accepting a dynamic batch size, for example.
Our team is trying to support some level of "discrete" dynamism in a neural network, to provide an "image" mode and a "video" mode for a vision transformer.
To give some idea of the intent, here is pseudocode for the operation; the actual specifics differ in detail.
```python
import torch
class ImgVideoModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.emb_dim = 8
self.vid_len = 12
self.img_pos_embed = torch.nn.Parameter(torch.zeros(1, 1, self.emb_dim))
self.vis_pos_embed = torch.nn.Parameter(torch.zeros(1, self.vid_len, self.emb_dim))
self.projector = TransformerWithLargeParameters(...)
def forward(self, x):
# x: [B, 1, C] or [B, 12, C]
B, T, C = x.size()
x = torch.cond(T == 1, lambda t: t + self.img_pos_embed, lambda t: t + self.vis_pos_embed, (x,))
# This is the intended behavior of the model.
# if T == 1:
# x = x + self.img_pos_embed
# else:
# assert T == 12
# x = x + self.vis_pos_embed
x = self.projector(x)
return x
inp = torch.rand(2, 12, 8)
bs = torch.export.Dim("bs")
seq_len = torch.export.Dim("seq_len", min=1, max=12)
ep = torch.export.export(ImgVideoModel(), (inp,), dynamic_shapes={"x": {0: bs, 1: seq_len}})
print(ep)
```
`torch.export` immediately fails, because the second `lambda` function inside `torch.cond` is not compatible with lengths other than 12.
Since there is no way to enforce "set"-based constraints on the model, only a continuous range of shapes, our team has no option other than exporting two almost-identical models that share a large set of common transformer weights.
cc @ezyang @chauhang @penguinwu @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,export-triaged,oncall: export | low | Critical |
2,625,103,041 | kubernetes | Default Scheduler's node_name Plugin Doesn't Filter Out Any Nodes | ### What happened?
Hello,
We observed that the ```node_name``` scheduler plugin might not be functioning as expected.
In the current [implementation](https://github.com/kubernetes/kubernetes/blob/16f9fdc7057e1f69ff1a44e3dbbcf7b994c3cd29/pkg/scheduler/framework/plugins/nodename/node_name.go#L70C2-L77C2), this plugin's ```Filter``` function checks if a given node's ```name``` matches the scheduling pod's ```spec.nodeName```. So this plugin is expected to filter out, at the ```Filter``` stage, any node whose ```name``` does not match.
However, according to the Kubernetes [documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename) and our tests, any pod with a non-empty ```spec.nodeName``` will bypass the default scheduler.
This leads to the following situation: whenever the ```node_name``` plugin is invoked (at the ```Filter``` stage), the pod's ```spec.nodeName``` must be empty. The plugin then never filters out any nodes, which makes it non-functional.
We're a bit confused about this and want to clarify: is this the correct behavior of the ```node_name``` plugin, or are there scenarios where it would behave differently and be meaningful? Thanks for your answer!
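To make the logic concrete, here is a simplified Go sketch of the check (a paraphrase of the plugin's behavior, not the actual scheduler source):

```go
package main

import "fmt"

// fits mirrors the node_name plugin's Filter check: an empty
// spec.nodeName matches every node, otherwise the names must be equal.
func fits(podNodeName, nodeName string) bool {
	return podNodeName == "" || podNodeName == nodeName
}

func main() {
	// Pods that reach the default scheduler always have an empty
	// spec.nodeName, so the first branch passes for every node.
	fmt.Println(fits("", "node-a"), fits("", "node-b")) // true true
}
```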
### What did you expect to happen?
If our understanding is correct,
then we think this plugin should be removed, as it does not filter out any nodes.
### How can we reproduce it (as minimally and precisely as possible)?
1. By creating a pod with a non-empty ```spec.nodeName```, the scheduler will ignore this pod.
2. Only when creating a pod with an empty ```spec.nodeName```, the scheduler will schedule this pod.
### Anything else we need to know?
/sig scheduling
### Kubernetes version
1.31
| sig/scheduling,needs-kind,needs-triage | low | Major |
2,625,116,960 | angular | :host pseudo-element does not function with combinator selectors, such as :has(+ .my-class) | ### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
No
### Description
When writing SCSS using the `:host` selector, I've found that you cannot use combinator selectors within the `:has()` selector. For example, the following SCSS does not work:
```scss
:host:has(> child-element) {
  // doSomething
}
```
However, this would normally be valid scss for any other given element.
There is a workaround, which is as follows:

```scss
::ng-deep {
  my-host-component:has(> child-element) {
    // doSomething
  }
}
```
However, that requires using ng-deep, which is deprecated, and generally not a great practice.
Any other solutions would be appreciated!
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/trtkcr?file=src%2Fapp%2FmyComponent.ts
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.1.3
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 18.1.2
... animations, cdk, common, compiler, compiler-cli, core, forms
... material, platform-browser
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1801.3
@angular-devkit/core 18.1.3
@angular-devkit/schematics 18.1.3
@angular/build 18.1.3
@angular/cli 18.1.3
@schematics/angular 18.1.3
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.8
### Anything else?
I have found this issue in any version I've tested (Angular 14 and 18), but have not tested others. | area: compiler,core: stylesheets | low | Critical |
2,625,144,899 | neovim | virt_lines doesn't scroll horizontally | ### Problem
[lsp_lines.nvim](https://git.sr.ht/~whynothugo/lsp_lines.nvim) uses `nvim_buf_set_extmark` to create extmarks in a buffer. These extmarks use `virt_lines` with some spaces on the left so that they align properly with the position of a diagnostic on the line above.
When I scroll to the right in the buffer, the extmarks remain anchored in position relative to the _window_, not to the buffer. As a result, the extmarks are not properly positioned relative to the buffer, and the overflowing portion of the extmarks is unreadable.
Extmarks and their positions are declared for buffers, so their position should be relative to buffers, not windows.
Given that multiple windows can render the same buffer, I cannot take the scroll offset into account when rendering the extmarks, because the "correct" value varies per window.
### Steps to reproduce
Add this to your `init.lua`:
```lua
vim.keymap.set("n", "<Leader>x", function()
local ns = vim.api.nvim_create_namespace("test")
local line = {
"hello, this is just some test text. It should be anchored under first line",
}
local virt_lines = { { line } }
vim.api.nvim_buf_set_extmark(0, ns, 0, 0, {
virt_lines = virt_lines,
})
end)
```
Open some file and press `250ix`, then press <kbd>Esc</kbd> and <kbd>Leader</kbd><kbd>x</kbd>.
Scroll to the right.
The virtual text will maintain its absolute position relative to the window, not relative to the buffer.
### Expected behavior
When scrolling to the right, the virtual line should scroll too, keeping its position relative to the buffer.
### Nvim version (nvim -v)
NVIM v0.10.2
### Vim (not Nvim) behaves the same?
n/a
### Operating system/version
Alpine Linux
### Terminal name/version
foot
### $TERM environment variable
foot
### Installation
Alpine packages (edge) | enhancement,marks | low | Minor |
2,625,156,584 | neovim | Netrw does not open remote file | ### Problem
In specific cases, the Netrw plugin does not open a remote file. The result depends on the **file name** and the **order** of files on the command line.
### Steps to reproduce
This **works**:
`nvim -p 'scp://root@REDACTED:22//etc/network/interfaces' 'scp://root@REDACTED:22//etc/network/interfaces.new'`
This does **not**:
`nvim -p 'scp://root@REDACTED:22//etc/network/interfaces.new' 'scp://root@REDACTED:22//etc/network/interfaces'`
The only difference is **order of files**.
Further tests:
**Works**:
`nvim -p 'scp://root@REDACTED:22//tmp/bbb' 'scp://root@REDACTED:22//tmp/bbb.old'`
**Does not work**:
`nvim -p 'scp://root@REDACTED:22//tmp/bbb.old' 'scp://root@REDACTED:22//tmp/bbb'`
### Expected behavior
Neovim always opens both files.
### Nvim version (nvim -v)
NVIM v0.10.2
### Vim (not Nvim) behaves the same?
Yes, Vi IMproved 9.1
### Operating system/version
Archlinux
### Terminal name/version
st 0.9.2
### $TERM environment variable
xterm-256color
### Installation
Standard install from repo (extra) | bug-vim,netrw | low | Minor |
2,625,163,816 | vscode | Explore terminal quick fixes based on selection | Flow:
1. Open terminal
2. Run some command
3. Select the contents
What if a lightbulb with a sparkle showed up here, like in the editor, with Explain This and Add Selection to Chat? Having this would make it possible to ask Copilot about a command using only the keyboard:
1. ctrl+shift+up to select above command
2. ctrl+. to open quick fix menu
3. Enter to select first option | feature-request,terminal-quick-fix,terminal-chat | low | Minor |
2,625,196,845 | langchain | Agent Executor Submits Tool Outputs Twice (Or shows same behaviour) | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
gpt4o = ChatOpenAI(
model="gpt-4o",
streaming=True,
extra_body={
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "response",
"strict": True,
"schema": responseSchema
}
}
},
temperature=0.6,)
currentModel = gpt4o
showImageTool = ShowImageTool()
tools = [
showImageTool
]
agent = create_tool_calling_agent(
llm=toolCallingModel,
tools=tools,
prompt=prompt
)
agentExecutor = AgentExecutor(
agent=agent,
tools=tools,
return_intermediate_steps=True,
verbose=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have an AgentExecutor with only one tool. The information returned from the tool is not important, so I am only returning "tool_done" from the tool execution.
After the execution is done, the tool output gets submitted to the model and the next step's output starts to come in, but at some point the model starts producing new output from scratch (as if the tool output had been submitted again).
Verbose output is as follows:
```
Invoking: `show_image` with `{'imageId': 'test-id'}`
responded: {"content":["<some_content_array>"]}
image_shown: test-id{"content":["some_content_array", "after_image_shown"]}
{"content":["<some_new_content>"]}
```
The model has structured output, so output with this type breaks the structure. (The new content does not comes with new chat message, it just continues to stream like that)
I tried same structure and function in openAI playground, it works correctly in there, whether it calls the tool at the start, middle or end.
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Mon Aug 12 20:52:18 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T8122
> Python Version: 3.11.10 (main, Sep 7 2024, 01:03:31) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.133
> langchain_astradb: 0.5.0
> langchain_cli: 0.0.31
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langchain_unstructured: 0.1.5
> langserve: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> aiohttp: 3.10.9
> astrapy: 1.5.2
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> fastapi: 0.112.4
> gitpython: 3.1.43
> gritql: 0.1.5
> httpx: 0.27.2
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.31.0
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> tiktoken: 0.8.0
> tomlkit: 0.12.5
> typer[all]: Installed. No version info available.
> typing-extensions: 4.12.2
> unstructured-client: 0.25.9
> unstructured[all-docs]: Installed. No version info available.
> uvicorn: 0.23.2
``` | 🤖:bug,investigate | low | Critical |
2,625,198,008 | neovim | Assertion `row >=0` failed for oneline error message | ### Problem
Nvim crashes when `lines` is changed on `VimEnter` and an error message has to be displayed.
```cpp
#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
#1 0x00007f60947b4ebf in __pthread_kill_internal (threadid=<optimized out>, signo=6) at ./nptl/pthread_kill.c:78
#2 0x00007f6094760c82 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#3 0x00007f60947494f0 in __GI_abort () at ./stdlib/abort.c:79
#4 0x00007f6094749418 in __assert_fail_base (fmt=0x7f60948cdca0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x5629937ec59f "row >= 0",
file=file@entry=0x562993812920 ".../src/nvim/message.c", line=line@entry=2406, function=function@entry=0x5629937c7890 <__PRETTY_FUNCTION__.0> "msg_scroll_flush") at ./assert/assert.c:94
#5 0x00007f6094759592 in __assert_fail (assertion=assertion@entry=0x5629937ec59f "row >= 0", file=file@entry=0x562993812920 ".../src/nvim/message.c", line=line@entry=2406,
function=function@entry=0x5629937c7890 <__PRETTY_FUNCTION__.0> "msg_scroll_flush") at ./assert/assert.c:103
#6 0x000056299358c74b in msg_scroll_flush () at .../src/nvim/message.c:2406
#7 0x00005629936799b7 in ui_flush () at .../src/nvim/ui.c:542
#8 0x000056299358d4df in msg_check_for_delay (check_msg_scroll=false) at .../src/nvim/message.c:3725
#9 0x00005629934835e2 in screenclear () at .../src/nvim/drawscreen.c:222
#10 0x000056299348db0c in screen_resize (width=<optimized out>, height=<optimized out>) at .../src/nvim/drawscreen.c:338
#11 0x00005629935bb14a in did_set_lines_or_columns (args=<optimized out>) at .../src/nvim/option.c:2158
#12 0x00005629935bcfe9 in did_set_option (opt_idx=opt_idx@entry=kOptLines, varp=varp@entry=0x56299393e718 <p_lines>, old_value=..., new_value=..., opt_flags=opt_flags@entry=0, set_sid=set_sid@entry=0, direct=direct@entry=false,
value_replaced=value_replaced@entry=true, errbuf=0x7ffc32184220 "\220&ܶ)V", errbuflen=80) at .../src/nvim/option.c:3478
#13 0x00005629935bdf43 in set_option (opt_idx=kOptLines, varp=varp@entry=0x56299393e718 <p_lines>, value=..., opt_flags=opt_flags@entry=0, set_sid=set_sid@entry=0, direct=direct@entry=false, value_replaced=value_replaced@entry=true,
errbuf=0x7ffc32184220 "\220&ܶ)V", errbuflen=80) at .../src/nvim/option.c:3713
#14 0x00005629935bf694 in do_one_set_option (opt_flags=opt_flags@entry=0, argp=argp@entry=0x7ffc32184208, did_show=did_show@entry=0x7ffc32184217, errbuf=errbuf@entry=0x7ffc32184220 "\220&ܶ)V", errbuflen=errbuflen@entry=80,
errmsg=errmsg@entry=0x7ffc32184218) at .../src/nvim/option.c:1339
#15 0x00005629935bfe23 in do_set (arg=<optimized out>, opt_flags=0) at .../src/nvim/option.c:1389
#16 0x00005629935c0059 in ex_set (eap=<optimized out>) at .../src/nvim/option.c:721
#17 0x00005629934e48a6 in execute_cmd0 (retv=retv@entry=0x7ffc32184334, eap=eap@entry=0x7ffc32184340, errormsg=errormsg@entry=0x7ffc32184338, preview=preview@entry=false) at .../src/nvim/ex_docmd.c:1714
#18 0x00005629934e5fe4 in do_one_cmd (cmdlinep=cmdlinep@entry=0x7ffc32184588, flags=flags@entry=7, cstack=cstack@entry=0x7ffc32184610, fgetline=fgetline@entry=0x56299345510b <getnextac>, cookie=cookie@entry=0x7ffc32184be0)
at .../src/nvim/ex_docmd.c:2358
#19 0x00005629934e6b8f in do_cmdline (cmdline=<optimized out>, fgetline=0x56299345510b <getnextac>, cookie=0x7ffc32184be0, flags=7) at .../src/nvim/ex_docmd.c:667
#20 0x0000562993453d99 in apply_autocmds_group (event=EVENT_VIMENTER, fname=0x5629b6d6e270 "", fname_io=<optimized out>, force=<optimized out>, group=group@entry=-3, buf=0x5629b6d595b0, eap=0x0, data=0x0)
at .../src/nvim/autocmd.c:1842
#21 0x000056299345444d in apply_autocmds (event=<optimized out>, fname=<optimized out>, fname_io=<optimized out>, force=<optimized out>, buf=<optimized out>) at .../src/nvim/autocmd.c:1499
#22 0x000056299355dfeb in main (argc=7, argv=<optimized out>) at .../src/nvim/main.c:605
```
Possible duplicate of https://github.com/neovim/neovim/issues/22919 but the message does not need any newline character. Please close this issue in such case.
### Steps to reproduce
```sh
nvim --clean --cmd 'au VimEnter * set lines=10' -c 'echoerr 1'
```
Cannot reproduce it for Vim (`vim -u NONE --cmd 'au VimEnter * set lines=10' -c 'echoerr 1'`).
### Expected behavior
No crash
### Nvim version (nvim -v)
v0.11.0-dev-1067+gb4599acbf
### Vim (not Nvim) behaves the same?
no, 9.1.777
### Operating system/version
Debian Sid
### Terminal name/version
alacritty 0.15.0-dev (6dbd785b)
### $TERM environment variable
alacritty
### Installation
from repo | ui,has:backtrace,bug-crash,messages | low | Critical |
2,625,236,584 | pytorch | incorrect _unsafe_index meta | ### 🐛 Describe the bug
```
import torch
import torch._inductor.config
torch.set_default_device("cuda")
inp = torch.empty_strided([4, 512, 96, 96], (1, 4, 196608, 2048), device="cuda")
args = [
None,
None,
torch.empty_strided([192, 1], (1, 1), dtype=torch.int64, device="cuda").zero_(),
torch.empty_strided([192], (1,), dtype=torch.int64, device="cuda").zero_(),
]
with torch._subclasses.CrossRefFakeMode():
torch.ops.aten._unsafe_index(inp, args)
```
> RuntimeError: When comparing the output of aten._unsafe_index.Tensor on FakeTensor and concrete Tensors, found mismatched tensor metadata: Stride mismatch! Strides are (36864, 147456, 192, 1) and (18874368, 36864, 192, 1) (mismatched at 0)!
### Versions
master
cc @ezyang @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: fakeTensor,module: decompositions | low | Critical |
2,625,245,409 | pytorch | BFloat16 support for `torch.fft.rfft` | ### 🚀 The feature, motivation and pitch
I'd like to use the `torch.fft.rfft` function with a bfloat16 tensor, but the operator doesn't support a bfloat16 complex type.
Repro code below:
```python
import torch
x = torch.zeros(3, 4, dtype=torch.bfloat16)
torch.fft.rfft(x)
```
Is there any plan to support this in the near future? Thank you!
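Until then, the workaround I'm using (assuming the precision loss from the round trip is acceptable) is to upcast for the FFT:

```python
import torch

x = torch.randn(3, 4, dtype=torch.bfloat16)

# rfft rejects bfloat16, so compute in float32...
spec = torch.fft.rfft(x.to(torch.float32))
assert spec.dtype == torch.complex64 and spec.shape == (3, 3)

# ...and, since a bfloat16 complex dtype doesn't exist either, store the
# result as interleaved real/imag pairs if bfloat16 storage is required.
spec_bf16 = torch.view_as_real(spec).to(torch.bfloat16)
assert spec_bf16.shape == (3, 3, 2)
```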
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames | triaged,module: complex | low | Minor |
2,625,250,537 | deno | CJS export analysis should analyze re-exported ES module exports | Due to require esm, the CJS export anlaysis needs to get smarter and analyze ES module exports.
Version: Deno 2.0.4
| bug | low | Minor |
2,625,279,880 | next.js | Specific data/props case causes infinite loading | ### Link to the code that reproduces this issue
https://github.com/chriseling/nextjs-repro-infinite-loading-issue
### To Reproduce
- start the app (yarn dev)
- notice the page shows the loading.tsx and loads infinitely
### Current vs. Expected behavior
expect the client component to load with "hello world"
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
Available memory (MB): 40960
Available CPU cores: 16
Binaries:
Node: 22.0.0
npm: 10.5.1
Yarn: 1.22.22
pnpm: 9.12.2
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: 15.0.2
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed)
### Additional context
_No response_ | bug,Runtime | low | Minor |
2,625,345,360 | go | proposal: spec: remove notion of core types | This issue replaces the investigative issue [#63940](https://go.dev/issue/63940) with a concrete proposal.
To go straight to the proposal text, skip the **Background** and **Motivation** section.
## Background
The Go 1.18 release introduced generics and with that a number of new concepts, including type parameters and type constraints. A type constraint acts as the "type" of a type parameter by describing the type parameter's properties, similarly to how a struct type describes the properties of a variable of that struct type.
The language requires that any concrete type that instantiates a specific type parameter satisfies the type parameter's constraint. Because of this requirement one can be certain that a value whose type is a type parameter possesses all of that type parameter's properties, no matter what actual, concrete type is used to instantiate the type parameter.
In Go, type constraints are described through a mix of method and type requirements which together define a _type set_: a type set comprises all the types which satisfy all the requirements. Specifically, Go 1.18 uses a generalized form of interfaces for this purpose. An interface enumerates a set of methods and types, and the type set described by such an interface consists of all the types that implement those methods and that are included in the enumerated types.
For instance, the interface
```Go
type Constraint interface {
~[]byte | ~string
Hash() uint64
}
```
consists of all the (possibly named) `[]byte` and `string` types that also implement the `Hash` method.
Given these descriptions of type sets, which in turn describe the properties of type parameters, it is possible to write down the rules that govern operations on operands of type parameter type.
For instance, the [rules for index expressions](https://golang.org/ref/spec#Index_expressions) state that (among other things) for an operand of type parameter type P:
> - The index expression a[x] must be valid for values of all types in P's type set.
> - The element types of all types in P's type set must be identical. In this context, the element type of a string type is byte.
These rules enable the following code ([playground](https://go.dev/play/p/VYfCvpVCx9o)):
```Go
func at[P Constraint](x P, i int) byte {
return x[i]
}
```
The indexing operation `x[i]` is permitted because the type of `x` is `P`, and `P`'s type constraint (type set) contains `[]byte` and `string` types for which indexing with `i` is valid.
## Motivation
The rules for index expressions have specific clauses for when the type of an operand is a type parameter. Similarly, the rules for unary and binary operations also have such clauses. For instance, in the section on [Arithmetic operators](https://golang.org/ref/spec#Arithmetic_operators), the spec says:
> If the operand type is a type parameter, the operator must apply to each type in that type set.
This rule allows the operator `+` to add two operands of (identical) type parameter type, as long as `+` is valid for every type in the respective type parameter's constraint.
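As a concrete, runnable sketch of this rule (the `Number` constraint below is hypothetical):

```Go
package main

import "fmt"

// Number is a hypothetical constraint; + must apply to each type in its type set.
type Number interface{ ~int | ~float64 }

// sum compiles because + is valid for every type in Number's type set.
func sum[T Number](a, b T) T { return a + b }

func main() {
	fmt.Println(sum(3, 4))     // 7
	fmt.Println(sum(1.5, 2.5)) // 4
}
```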
This type set-based and individualized approach permits the most flexible application of operations on operands of type parameter type, and is in line with what the original generics proposal ([Type Parameters Proposal](https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md)) intended: an operation involving operands of generic type (i.e., whose type is a type parameter) should be valid if it is valid for every type in the respective type set(s).
Because of time constraints and the subtlety involved in devising appropriate rules for each language feature that may interact with generic operands, this approach was _not_ chosen for many language features. For instance, for [Send statements](https://golang.org/ref/spec#Send_statements), the spec requires that
> The channel expression's _core type_ must be a channel, the channel direction must permit send operations, and the type of the value to be sent must be assignable to the channel's element type.
This rule relies on the notion of a [_core type_](https://tip.golang.org/ref/spec#Core_types). Core types offer a short cut for the spec: if a type is not a type parameter, its core type is just its [underlying type](https://golang.org/ref/spec#Underlying_types). But for a type parameter, a core type exists if and only if all types in the type parameter's type set (that is, the type set described by the type parameter's constraint interface) have the same underlying type; that single type is the core type of the type parameter. For instance, `interface{ ~[]int }` has a core type (`[]int`), but the `Constraint` interface from above does not. (In reality, the definition of core types is subtly more complicated, see below.)
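For instance, a send through a type parameter is permitted precisely because the constraint involved has a core type; the names below are hypothetical:

```Go
package main

import "fmt"

// IntChan has a core type (chan int): every type in its type set has the
// same underlying type. The Constraint interface from above has none.
type IntChan interface{ ~chan int }

// The send statement is allowed because C's core type is a channel.
func send[C IntChan](c C, v int) { c <- v }

func main() {
	ch := make(chan int, 1)
	send(ch, 42)
	fmt.Println(<-ch) // 42
}
```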
The notion of a core type is a generalization of the notion of an underlying type. Because of that, most spec rules that relied on underlying types before generics now rely on core types, with a few important exceptions like the ones mentioned earlier. If the rules for index expressions relied on core types, the `at` example above would not be valid code. Because the rules for [Slice expressions](https://golang.org/ref/spec#Slice_expressions) do rely on core types, slicing an operand of type `P` constrained by `Constraint` is not permitted, even though it could be valid and might be useful.
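The asymmetry can be seen in this sketch (the `ByteSeq` constraint is hypothetical): indexing through the type parameter compiles, while the analogous slice expression is rejected today because `ByteSeq` has no core type.

```Go
package main

import "fmt"

// ByteSeq, like Constraint above, has no core type: []byte and string
// have different underlying types.
type ByteSeq interface{ ~[]byte | ~string }

// Index expressions are specified per type set, so this compiles today.
func at[P ByteSeq](x P, i int) byte { return x[i] }

// Slice expressions rely on core types, so the analogous function is
// rejected today:
// func tail[P ByteSeq](x P) P { return x[1:] }

func main() {
	fmt.Println(at("hello", 1)) // 101 (the byte value of 'e')
}
```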
When it comes to channel operations and certain built-in calls (`append`, `copy`), the simplistic definition of core types is insufficient. The actual rules have adjustments that allow for differing channel directions and for type sets containing both `[]byte` and `string` types. These adjustments make the definition of core types rather complicated, and are only present to work around unacceptable restrictions that the language would otherwise impose.
## Proposal
Summary: **Remove the notion of core types from the language specification in favor of dedicated prose in each section that previously relied on core types.**
For example, rather than using a rule based on core types in the section on slice expressions, the proposal is to use appropriate prose similar to what is used for index expressions, which does not rely on core types (and which is more flexible as a result).
The proposed approach is as follows:
- For each operation/language feature with rules based on core types, revert the relevant language spec section to essentially the Go 1.17 (pre-generics) prose, and add a type-parameter specific paragraph that describes how the rules apply to generic operands.
- Remove the section on core types from the language spec.
- Implement the necessary changes in the compiler.
The proposed changes to the spec can be reviewed in [CL 621919](https://go.dev/cl/621919) and are considered part of this proposal. (The exact prose is up for discussion and expected to be fine-tuned.)
> [!NOTE]
> [CL 621919](https://go.dev/cl/621919) still contains references to core types in the section on type inference and unification. We plan to rewrite those sections as needed and remove those references as well. Since these parts of the spec are highly technical and detailed, we are less concerned about their exact prose: these sections are unlikely to be consulted by non-experts in the first place. To get started, we may simply replicate the notion of core types "in line" until we understand better what changes to type inference preserve backward-compatibility.
## Discussion
Removing the notion of core types from the language specification has multiple benefits:
- There is one less core concept (no pun intended) that needs to be learned and understood.
- The specification of most language features can again be understood without worrying about generics.
- The spec becomes easier to read and understand.
- The individualized approach (specific rules for specific operations) opens the door to more flexible rules.
The changes are designed to be 100% backward-compatible.
## Implementation
The relevant implementation changes primarily affect the compiler's type checker.
The proposed changes to `for-range` statements currently include an implementation restriction for the range-over-func case; loosening or removing that restriction may require compiler back-end changes for an efficient implementation (currently not planned).
The relevant type checker changes have been prototyped and can be found in a stack of CLs ending in [CL 618376](https://go.dev/cl/618376).
## Impact
Because the changes are designed to be 100% backward-compatible, implementing this proposal is expected to be unnoticeable for existing Go code.
Some code that currently is not permitted will become valid with this change. For instance, slice expressions, composite literals, and for-range statements will accept generic operands and types with less restricted type sets.
Analysis tools may need to be updated. We believe that this can be done incrementally, on a (language) feature-by-feature basis.
## Tentative time line
This time line assumes that this proposal is uncontroversial and accepted fairly quickly.
- Early November 2024: Proposal published.
- End of 2024: Proposal accepted (hopefully).
- Early February 2025: Proposal implemented at start of development cycle for Go 1.25.
- August 2025: Proposal released in Go 1.25.
## Future directions
The use of core types in the spec implied a somewhat rigid framework within which rules for language features were considered.
For instance, proposal [#48522](https://go.dev/issue/48522) is about permitting access to a struct field that is present in all structs in a type set ([example](https://go.dev/play/p/g7ZxOq-uDBK)). This is currently not permitted and the proposal was closed in favor of [#63940](https://go.dev/issue/63940), the precursor issue for this proposal.
If we accept this proposal, we will follow up with a proposal for more flexible field access, along the lines of [#48522](https://go.dev/issue/48522).
Type inference and type unification also rely on core types. Removing this dependency may enable type inference in some cases (such as [#69153](https://go.dev/issue/69153)) where it currently fails.
| Proposal,Proposal-Hold | high | Critical |
2,625,352,403 | svelte | The documentation section on passing the values layout->page->component and component->page->layout. | ### Describe the problem
I would like a section in the documentation about passing values from a layout to a page to a component, and back — especially while getting used to the new Svelte 5 syntax.
### Describe the proposed solution
New section in documentation.
### Importance
would make my life easier | documentation | low | Minor |
2,625,416,708 | neovim | comment: injected filetype is used on linewise motions | ### Problem
See the repro below
### Steps to reproduce
```lua
vim.cmd('echo 1')
return vim.cmd('echo 1')
```
In the above lua file type `gg0fe` then `gcG`. The result is
```lua
"vim.cmd('echo 1')
"return vim.cmd('echo 1')
```
### Expected behavior
```lua
-- vim.cmd('echo 1')
-- return vim.cmd('echo 1')
```
like by `VGgc`.
### Nvim version (nvim -v)
v0.11.0-dev-1067+gb4599acbf
### Vim (not Nvim) behaves the same?
no
### Operating system/version
Debian Sid
### Terminal name/version
alacritty
### $TERM environment variable
alacritty
### Installation
from repo | bug,comment | low | Minor |
2,625,419,185 | godot | _gui_input is not propagated to parent for keyboard and joy inputs | ### Tested versions
- Tested in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 with Max-Q Design (NVIDIA; 31.0.15.5123) - Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 Threads)
### Issue description
_gui_input is only called on the focused control for keyboard and joypad inputs, and is not propagated to the parent even if accept_event is NOT called.
Note that this does not affect mouse events (when the mouse_filter of the child is set to Pass).
### Steps to reproduce
- Launch the MRP, the button on screen will be automatically focused
- Press space, notice in the output that _gui_input is only called on the child.
- Notice that if you move the mouse on the button or even click on it, _gui_input is called on both the child and the parent.
### Minimal reproduction project (MRP)
[bug-gui-input-propagation.zip](https://github.com/user-attachments/files/17579412/bug-gui-input-propagation.zip)
| enhancement,discussion,topic:input,topic:gui | low | Critical |
2,625,455,452 | flutter | ShaderCompilerException: Could not write file to build\flutter_assets\shaders/ink_sparkle.frag | First known to have appeared (at least with this stack trace) in 3.23.0
```
ShaderCompilerException: Shader compilation of "C:\Development\flutter\packages\flutter\lib\src\material\shaders\ink_sparkle.frag" to "build\flutter_assets\shaders/ink_sparkle.frag" failed with exit code 1.
impellerc stdout:
impellerc stderr:
Could not write file to build\flutter_assets\shaders/ink_sparkle.frag
at ShaderCompiler.compileShader(shader_compiler.dart:190)
at <asynchronous gap>(async)
at writeBundle.<anonymous closure>(bundle_builder.dart:221)
at <asynchronous gap>(async)
at Future.wait.<anonymous closure>(future.dart:534)
at <asynchronous gap>(async)
at writeBundle(bundle_builder.dart:187)
at <asynchronous gap>(async)
at WebDevFS.update(devfs_web.dart:1005)
at <asynchronous gap>(async)
at ResidentWebRunner._updateDevFS(resident_web_runner.dart:583)
at <asynchronous gap>(async)
at ResidentWebRunner.run.<anonymous closure>(resident_web_runner.dart:334)
at <asynchronous gap>(async)
at asyncGuard.<anonymous closure>(async_guard.dart:111)
at <asynchronous gap>(async)
```
source:
https://github.com/flutter/flutter/blob/ac7879e2aa6de40afec1fe2af9730a8d55de3e06/packages/flutter_tools/lib/src/build_system/tools/shader_compiler.dart#L189-L196 | c: crash,P2,team-tool,triaged-tool | low | Critical |
2,625,462,558 | go | os: TestPipeThreads failing sporadically on aix-ppc64 | ### Go version
master
### Output of `go env` in your module/workspace:
```shell
GOOS=aix
```
### What did you do?
```
# build Go
go test os -test.run=TestPipeThreads -test.count=1
```
### What did you see happen?
The test fails consistently on AIX systems with GOMAXPROCS >= 8. It fails sporadically with smaller values, as seen with the aix-ppc64 CI. This CI [log](https://build.golang.org/log/f58a9df223e4f6179ce1ec967c5ec8fa219cdefa) seems like the first of the recent failures. However, it is reproducible prior to the 1.23 release.
### What did you expect to see?
The test passes consistently. | NeedsInvestigation | low | Critical |
2,625,477,055 | vscode | Typo in debugger |
Type: <b>Bug</b>
There's a typo in the debugger, in the Breakpoints view. Specifically, "Toggle Activate Breakpoints" should probably be "Toggle active breakpoints".
VS Code version: Code 1.92.0 (Universal) (b1c0a14de1414fcdaa400695b4db1c0799bc3124, 2024-07-31T23:26:45.634Z)
OS version: Darwin arm64 24.0.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|4, 3, 3|
|Memory (System)|64.00GB (3.20GB free)|
|Process Argv|--crash-reporter-id eb3ac444-ca87-45c1-b2ff-c5ac56c7837e|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (35)</summary>
Extension|Author (truncated)|Version
---|---|---
c3-ai-dx-v8|c3a|4.0.6
vale-vscode|chr|0.20.0
excel-to-markdown-table|csh|1.3.0
vscode-markdownlint|Dav|0.56.0
EditorConfig|Edi|0.16.4
vscode-pull-request-github|Git|0.98.0
gc-excelviewer|Gra|4.2.62
rest-client|hum|0.25.1
markdown-shortcuts|mdi|0.12.0
rainbow-csv|mec|3.12.0
debugpy|ms-|2024.12.0
isort|ms-|2023.10.1
python|ms-|2024.14.1
vscode-pylance|ms-|2024.10.1
jupyter|ms-|2024.7.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.388.0
fabric8-analytics|red|0.9.5
java|red|1.35.1
vscode-xml|red|0.27.1
code-spell-checker|str|3.0.1
markdowntable|Tak|0.12.0
intellicode-api-usage-examples|Vis|0.2.8
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.0
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
markdown-pdf|yza|1.5.0
markdown-all-in-one|yzh|3.6.2
grammarly|znc|0.24.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
iacca2:31156134
notype1cf:31157160
5fd0e150:31155592
```
</details>
<!-- generated by issue reporter --> | under-discussion | low | Critical |
2,625,484,564 | pytorch | [FR] Support sub-group partition ProcessGroup | ### 🚀 The feature, motivation and pitch
This is something I'd like to achieve:
```py
def custom_partition_data_and_model_subgroup(input_group: ProcessGroup):
    node_list = get_node_list(input_group)
    outer_process_group, inner_process_group = partition_subgroup(node_list, subgroup_size=4)
    return outer_process_group, inner_process_group
```
Is this something that can be achieved with the current PyTorch distributed interface (whose documentation is lacking)? Or is it something that could be supported in the future?
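To make the intent concrete, here is a minimal, framework-free sketch of the rank partitioning such a helper implies (all names are hypothetical; the resulting rank lists would then be turned into groups with `torch.distributed.new_group`, and newer releases expose similar functionality via `torch.distributed.device_mesh.init_device_mesh`):

```py
def partition_ranks(world_size: int, subgroup_size: int):
    """Split ranks into inner (intra-node) and outer (cross-node) rank lists."""
    assert world_size % subgroup_size == 0
    # Inner groups: consecutive blocks of subgroup_size ranks.
    inner = [list(range(i, i + subgroup_size))
             for i in range(0, world_size, subgroup_size)]
    # Outer groups: ranks at the same position across inner groups.
    outer = [list(range(j, world_size, subgroup_size))
             for j in range(subgroup_size)]
    return outer, inner

outer, inner = partition_ranks(world_size=8, subgroup_size=4)
# inner == [[0, 1, 2, 3], [4, 5, 6, 7]]
# outer == [[0, 4], [1, 5], [2, 6], [3, 7]]
```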
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Major |
2,625,521,605 | vscode | Narrow sidebar chat layout issues | Continue https://github.com/microsoft/vscode/issues/231369
- Padding around welcome view text
- And wrapped edits title:

| bug,panel-chat | low | Minor |
2,625,560,284 | pytorch | torch.nn.ReflectionPad2d throws CUDA error on large tensors | ### 🐛 Describe the bug
Running `torch.nn.ReflectionPad2d` on a large float32 tensor with CUDA causes it to throw `RuntimeError: CUDA error: invalid configuration argument` (also reproducible with `CUDA_LAUNCH_BLOCKING=1`). This error does not occur with smaller tensors on CUDA, and it runs fine on CPU with both small and large tensors.
Colab repro: [colab](https://colab.research.google.com/drive/1dmgWj-3VaLwYx_nsRjUCMr5sKp2Uka17?usp=sharing)
Minimal repro:
```python
import torch
padding = (0, 1)
input_1 = torch.randn(1, 50000, 2, dtype=torch.float32)
input_2 = torch.randn(1, 65540, 2, dtype=torch.float32)
m = torch.nn.ReflectionPad2d(padding=padding)
output_cpu = m(input_2) # no error
output_gpu_1 = m.cuda()(input_1.cuda()) # no error
output_gpu_2 = m.cuda()(input_2.cuda()) # RuntimeError: CUDA error: invalid configuration argument
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 4118.642
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: cuda,triaged | low | Critical |
2,625,563,732 | pytorch | Enable opting out of CI experiments | We use this issue to configure experiment rollouts and allow people to opt into experiments early.
https://github.com/pytorch/test-infra/issues/5132
However, there's no mechanism to opt out if desired.
Ask:
Update target_determinator.py (which parses the above issue) to allow people to opt out of a given experiment by prefixing the experiment with a `-`.
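A minimal sketch of the parsing this would imply (the helper name is hypothetical, not the actual `target_determinator.py` code):

```python
def parse_experiments(entry: str):
    """Parse an '@User,exp1,-exp2' entry into (user, opted_in, opted_out)."""
    user, *experiments = entry.split(",")
    opted_in = {e for e in experiments if not e.startswith("-")}
    opted_out = {e[1:] for e in experiments if e.startswith("-")}
    return user.lstrip("@"), opted_in, opted_out

# parse_experiments("@SomeUser,lf,-awsa100")
# -> ("SomeUser", {"lf"}, {"awsa100"})
```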
For example, specifying `@SomeUser,lf,-awsa100` would opt SomeUser into the `lf` experiment but out of the `awsa100` experiment. | triaged | low | Minor |
2,625,568,026 | pytorch | Hash results are not same in `torch-2.6.0.dev20241029+cu121` | ### 🐛 Describe the bug
I exported the same model twice, but the hash results are not the same on `torch-2.6.0.dev20241029+cu121`. I expect `hash1` to equal `hash2`. The following function fails on `torch-2.6.0.dev20241029+cu121` but works on `torch-2.6.0.dev20241028+cu121`.
Repro:
```python
def test_reexport_is_equal(self):
    import copy

    from torch._inductor.codecache import FxGraphCachePickler
    from torchvision import models  # models.resnet18 comes from torchvision

    pyt_model = models.resnet18(pretrained=True).eval().to("cuda")
    example_inputs = (torch.randn((100, 3, 224, 224)).to("cuda"),)
    batch = torch.export.Dim("batch", min=1, max=200)
    exp_program1 = torch.export.export(
        pyt_model, args=example_inputs, dynamic_shapes={"x": {0: batch}}
    )
    gm1 = copy.deepcopy(exp_program1.module())
    hash1 = FxGraphCachePickler.get_hash(gm1)
    exp_program2 = torch.export.export(
        pyt_model, args=example_inputs, dynamic_shapes={"x": {0: batch}}
    )
    gm2 = copy.deepcopy(exp_program2.module())
    hash2 = FxGraphCachePickler.get_hash(gm2)
    self.assertEqual(hash1, hash2)
```
### Versions
PyTorch version: 2.6.0.dev20241029+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 535.183.06
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.3
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.3
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.3
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.3
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.3
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.3
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.10.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.16.1
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxconverter-common==1.14.0
[pip3] onnxmltools==1.12.0
[pip3] onnxruntime==1.18.1
[pip3] onnxruntime_extensions==0.12.0
[pip3] onnxruntime-gpu==1.18.0
[pip3] onnxsim==0.4.36
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241029+cu121
[pip3] torch_tensorrt==2.6.0.dev0+225c03cf3
[pip3] torchaudio==2.5.0.dev20241029+cu121
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.0.dev20241029+cu121
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241029+cu121 pypi_0 pypi
[conda] torch-tensorrt 2.6.0.dev0+225c03cf3 dev_0 <develop>
[conda] torchaudio 2.5.0.dev20241029+cu121 pypi_0 pypi
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241029+cu121 pypi_0 pypi
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | module: regression,oncall: pt2,export-triage-review,oncall: export | low | Critical |
2,625,598,550 | langchain | ImportError: Dependencies for InstructorEmbedding not found. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import HuggingFaceInstructEmbeddings
DEVICE = "cpu"
# Initialize embeddings using a pre-trained model to represent the text data.
embeddings_model = "sentence-transformers/multi-qa-distilbert-cos-v1"
embeddings = HuggingFaceInstructEmbeddings(
    model_name=embeddings_model,
    model_kwargs={"device": DEVICE},
)
```
### Error Message and Stack Trace (if applicable)
```
USER_AGENT environment variable not set, consider setting it to identify your requests.
Traceback (most recent call last):
File "/home/tda/miniconda3/envs/ragbot2/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 188, in __init__
from InstructorEmbedding import INSTRUCTOR
ModuleNotFoundError: No module named 'InstructorEmbedding'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/tda/GenAICourse/RAG/ragbot_langchain/app.py", line 38, in <module>
embeddings = HuggingFaceInstructEmbeddings(
File "/home/tda/miniconda3/envs/ragbot2/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 194, in __init__
raise ImportError("Dependencies for InstructorEmbedding not found.") from e
ImportError: Dependencies for InstructorEmbedding not found.
```
### Description
I'm trying to build embeddings for a RAG application and I get the error
ImportError: Dependencies for InstructorEmbedding not found.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #134~20.04.1-Ubuntu SMP Tue Oct 1 15:27:33 UTC 2024
> Python Version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
Package Information
-------------------
> langchain_core: 0.3.14
> langchain: 0.3.5
> langchain_community: 0.3.3
> langsmith: 0.1.138
> langchain_chroma: 0.1.4
> langchain_mistralai: 0.2.0
> langchain_openai: 0.2.4
> langchain_text_splitters: 0.3.1
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> chromadb: 0.5.16
> dataclasses-json: 0.6.7
> fastapi: 0.115.4
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.53.0
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.20.1
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,625,604,754 | ollama | ollama ROCm multiple gpus, segfaulted when trying to run model bigger than 1 GPU's memory capacity | ### What is the issue?
Segmentation fault when trying to run a model
### Command
ollama run llama3.1:70b
### Error
Error: llama runner process has terminated: signal: segmentation fault (core dumped)
### Dmesg
[ 2857.607412] ollama_llama_se[18031]: segfault at 18 ip 00007e8f7e127b66 sp 00007ffd70563640 error 4 in libamdhip64.so.6.1.60102[7e8f7de21000+371000] likely on CPU 13 (core 17, socket 0)
[ 2857.607438] Code: 0d 4c 89 ef e8 3b 80 00 00 e9 bf fd ff ff 83 87 98 01 00 00 01 e9 b3 fd ff ff 48 89 c5 e9 e6 4d d3 ff 66 90 53 48 89 fb 89 d7 <4c> 8b 43 18 4d 85 c0 74 41 4c 8b 4b 20 31 c9 31 c0 eb 12 0f 1f 80
robert@robert-mercury:~$
### Logs from Ollama
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 0: general.architecture str = llama
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 1: general.type str = model
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 70B Instruct
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 3: general.finetune str = Instruct
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 5: general.size_label str = 70B
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 6: general.license str = llama3.1
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 9: llama.block_count u32 = 80
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 10: llama.context_length u32 = 131072
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 11: llama.embedding_length u32 = 8192
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 13: llama.attention.head_count u32 = 64
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 17: general.file_type u32 = 2
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 28: general.quantization_version u32 = 2
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - type f32: 162 tensors
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - type q4_0: 561 tensors
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - type q6_K: 1 tensors
Oct 30 19:29:31 robert-mercury ollama[15528]: time=2024-10-30T19:29:31.057-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_vocab: special tokens cache size = 256
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_vocab: token to piece cache size = 0.7999 MB
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: format = GGUF V3 (latest)
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: arch = llama
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: vocab type = BPE
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_vocab = 128256
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_merges = 280147
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: vocab_only = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_ctx_train = 131072
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd = 8192
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_layer = 80
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_head = 64
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_head_kv = 8
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_rot = 128
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_swa = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_head_k = 128
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_head_v = 128
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_gqa = 8
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_k_gqa = 1024
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_v_gqa = 1024
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_norm_eps = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_logit_scale = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_ff = 28672
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_expert = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_expert_used = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: causal attn = 1
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: pooling type = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: rope type = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: rope scaling = linear
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: freq_base_train = 500000.0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: freq_scale_train = 1
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: rope_finetuned = unknown
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_d_conv = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_d_inner = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_d_state = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_dt_rank = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model type = 70B
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model ftype = Q4_0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model params = 70.55 B
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: general.name = Meta Llama 3.1 70B Instruct
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: LF token = 128 'Ä'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: max token length = 256
Oct 30 19:29:35 robert-mercury ollama[15528]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Oct 30 19:29:35 robert-mercury ollama[15528]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Oct 30 19:29:35 robert-mercury ollama[15528]: ggml_cuda_init: found 6 ROCm devices:
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 2: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 3: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 4: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 5: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: llm_load_tensors: ggml ctx size = 2.37 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: time=2024-10-30T19:29:37.525-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Oct 30 19:29:37 robert-mercury ollama[15528]: time=2024-10-30T19:29:37.911-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: offloading 80 repeating layers to GPU
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: offloading non-repeating layers to GPU
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: offloaded 81/81 layers to GPU
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm0 buffer size = 6426.88 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm1 buffer size = 6426.88 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm2 buffer size = 6426.88 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm3 buffer size = 5967.82 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm4 buffer size = 5967.82 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm5 buffer size = 6330.74 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: CPU buffer size = 563.62 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: n_ctx = 8192
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: n_batch = 512
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: n_ubatch = 512
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: flash_attn = 0
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: freq_base = 500000.0
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: freq_scale = 1
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm0 KV buffer size = 448.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm1 KV buffer size = 448.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm2 KV buffer size = 448.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm3 KV buffer size = 416.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm4 KV buffer size = 416.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm5 KV buffer size = 384.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm_Host output buffer size = 2.08 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm0 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm1 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm2 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm3 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm4 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm5 compute buffer size = 1216.02 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm_Host compute buffer size = 80.02 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: graph nodes = 2566
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: graph splits = 7
Oct 30 19:29:51 robert-mercury ollama[15528]: time=2024-10-30T19:29:51.816-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Oct 30 19:29:52 robert-mercury ollama[15528]: time=2024-10-30T19:29:52.067-04:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped)"
Oct 30 19:29:52 robert-mercury ollama[15528]: [GIN] 2024/10/30 - 19:29:52 | 500 | 21.309799963s | 127.0.0.1 | POST "/api/generate"
Oct 30 19:29:57 robert-mercury ollama[15528]: time=2024-10-30T19:29:57.068-04:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000938426 model=/usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
Oct 30 19:29:57 robert-mercury ollama[15528]: time=2024-10-30T19:29:57.317-04:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.250407502 model=/usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
Oct 30 19:29:57 robert-mercury ollama[15528]: time=2024-10-30T19:29:57.567-04:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.500792457 model=/usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.14 | bug,amd | low | Critical |
2,625,608,807 | rust | Name change cargo check fail |
### Code
```Rust
cargo check
```
### Meta
`rustc --version --verbose`:
```
rustc --version --verbose
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
### Error output
```
cargo check
Checking zingolib v0.2.0 (/home/nattyb/src/zingolabs/zingolibs/fix_names/zingolib)
thread 'rustc' panicked at compiler/rustc_middle/src/query/on_disk_cache.rs:518:5:
assertion `left == right` failed
left: 234403281
right: 1002111927320821928687967599834759150
stack backtrace:
0: 0x78c4a83ce6ea - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h304520fd6a30aa07
1: 0x78c4a8c19525 - core::fmt::write::hf5713710ce10ff22
2: 0x78c4a9a34d91 - std::io::Write::write_fmt::hda708db57927dacf
3: 0x78c4a83d0dbb - std::panicking::default_hook::{{closure}}::he1ad87607d0c11c5
4: 0x78c4a83d0a2e - std::panicking::default_hook::h81c8cd2e7c59ee33
5: 0x78c4a759a5d7 - std[5204e9590b4985ef]::panicking::update_hook::<alloc[fd15fd9026f491e1]::boxed::Box<rustc_driver_impl[c41f2638408ed175]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x78c4a83d16d7 - std::panicking::rust_panic_with_hook::had2118629c312a4a
7: 0x78c4a83d1397 - std::panicking::begin_panic_handler::{{closure}}::h7fa5985d111bafa2
8: 0x78c4a83ceb99 - std::sys::backtrace::__rust_end_short_backtrace::h704d151dbefa09c5
9: 0x78c4a83d1064 - rust_begin_unwind
10: 0x78c4a5a58413 - core::panicking::panic_fmt::h3eea515d05f7a35e
11: 0x78c4a6fb837e - core::panicking::assert_failed_inner::h11b1378688fb0090
12: 0x78c4a7aeb747 - core[d89802b8f5f07590]::panicking::assert_failed::<u128, u128>
13: 0x78c4a9b39d38 - <rustc_middle[c83967c7761a8780]::query::on_disk_cache::OnDiskCache>::new
14: 0x78c4a9b49cdb - rustc_incremental[229da3a1dfe91e94]::persist::load::load_query_result_cache
15: 0x78c4a9b4abc2 - <rustc_interface[706ab71263ce060a]::queries::Queries>::global_ctxt
16: 0x78c4a9985da6 - rustc_interface[706ab71263ce060a]::interface::run_compiler::<core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>, rustc_driver_impl[c41f2638408ed175]::run_compiler::{closure#0}>::{closure#1}
17: 0x78c4a9a3ed16 - std[5204e9590b4985ef]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[706ab71263ce060a]::util::run_in_thread_with_globals<rustc_interface[706ab71263ce060a]::interface::run_compiler<core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>, rustc_driver_impl[c41f2638408ed175]::run_compiler::{closure#0}>::{closure#1}, core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>>
18: 0x78c4a9a779b0 - <<std[5204e9590b4985ef]::thread::Builder>::spawn_unchecked_<rustc_interface[706ab71263ce060a]::util::run_in_thread_with_globals<rustc_interface[706ab71263ce060a]::interface::run_compiler<core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>, rustc_driver_impl[c41f2638408ed175]::run_compiler::{closure#0}>::{closure#1}, core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[d89802b8f5f07590]::result::Result<(), rustc_span[233999951ced9cd1]::ErrorGuaranteed>>::{closure#1} as core[d89802b8f5f07590]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
19: 0x78c4a9a77d2b - std::sys::pal::unix::thread::Thread::new::thread_start::hcdbd1049068002f4
20: 0x78c4aaf3a39d - <unknown>
21: 0x78c4aafbf49c - <unknown>
22: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: could not compile `zingolib` (lib)
1730331920 || shofth nattyb fix_names[ ]
/home/nattyb/src/zingolabs/zingolibs/fix_names RC: 101 $
git show -s
commit d46d1af84165cbd373b7d94f141b3a268c354731 (HEAD -> fix_names, zancos/fix_names)
Author: zancas <zancas@zingolabs.org>
Date: Wed Oct 30 17:39:50 2024 -0600
wip
```
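Not part of the original report, but the usual way to recover from a corrupted incremental-compilation cache like this (the assertion fires in `on_disk_cache.rs` while loading the previous query cache) is assumed to be wiping the incremental state and rebuilding:

```shell
# Assumed workaround, not a fix for the underlying ICE: delete the stale
# incremental cache so rustc regenerates it from scratch on the next build.
rm -rf target/incremental
cargo check
```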
| I-ICE,T-compiler,A-incr-comp,C-bug | low | Critical |
2,625,617,014 | pytorch | torch.logcumsumexp runtime error inconsistency on CPU and CUDA (also documentation missing constraint) | ### 🐛 Describe the bug
The documentation for `torch.logcumsumexp` does not explicitly mention that the input tensor cannot have an integer dtype, even though passing an integer tensor throws a "not implemented" error.
Moreover, the error is not consistent across CPU and CUDA: on CPU, any integer tensor triggers the error, but on CUDA, 0-dimensional tensors do not throw it [colab](https://colab.research.google.com/drive/1ics5oo9benX_YhBE80QFwjQrNPgSXuOn?usp=sharing)
Minimal repro:
```python
import torch
output_gpu = torch.logcumsumexp(torch.tensor(22, dtype=torch.int64).cuda(), 0) # no runtime error
output_gpu = torch.logcumsumexp(torch.tensor([22], dtype=torch.int64).cuda(), 0) # error
output_cpu = torch.logcumsumexp(torch.tensor(22, dtype=torch.int64), 0) # error
```
The behavior needs to be consistent (e.g. since no cumulative sum is computed for a scalar, the value could be returned as-is on CPU as well, or CUDA should throw the error even for scalars). It would also be good for the docs to specify that integer tensors are not allowed as inputs.
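Not from the report, but for context: the op is a cumulative log-sum-exp, which is only well defined over floating-point values, so casting first (e.g. `input.float()`) sidesteps the error. A pure-Python sketch of the math (an illustration, not PyTorch's implementation):

```python
import math

def logcumsumexp(xs):
    # out[i] = log(exp(x_0) + ... + exp(x_i)), computed with the usual
    # max-shift trick so large inputs do not overflow exp().
    out = []
    running = None  # log of the running sum so far
    for x in xs:
        x = float(x)  # integer inputs must be cast to float first
        if running is None:
            running = x
        else:
            m = max(running, x)
            running = m + math.log(math.exp(running - m) + math.exp(x - m))
        out.append(running)
    return out
```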
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 4118.642
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: cuda,triaged,module: python frontend | low | Critical |
2,625,658,010 | pytorch | torch.nansum does not work with complex numbers on CPU | ### 🐛 Describe the bug
`torch.nansum` does not work with complex tensors containing `nan` values on CPU (works on GPU) ([colab](https://colab.research.google.com/drive/1b_3zgqEQqdFjKOW-TisFy_ED47Xo522Y?usp=sharing))
Minimal repro:
```python
import torch
input_tensor = torch.tensor([1.2+0j,1.5+0j, 1+torch.nan * 1j])
print(torch.isnan(input_tensor)) # tensor([False, False, True])
print(torch.isnan(input_tensor.cuda())) # tensor([False, False, True], device='cuda:0')
print(torch.nansum(input_tensor.cuda())) # tensor(2.7000+0.j, device='cuda:0')
output_cpu = torch.nansum(input_tensor) # RuntimeError: nansum does not support complex inputs
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 4118.642
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames | triaged,module: complex,module: NaNs and Infs | low | Critical |
2,625,662,007 | rust | rust-gdb: Accessing tuple inside enum triggers invalid gdb expression syntax error. | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
fn main() {
let a = Some((0u32, 1u32));
dbg!(a);
}
```
When debugging in gdb, I expect to be able to write an expression to access a value inside a tuple, inside an enum. I want to do that so that I can e.g. dereference a pointer stored in such a tuple. It seems that `a.0` accesses the `Some` branch of my option, so I would expect `a.0.0` to access the first tuple elem.
Instead, `a.0.0` triggers a gdb expression syntax error. Note that it *does* work to append `.0` to a gdb variable storing `a.0` to access the tuple, which further suggests that `a.0.0` should work.
```
Breakpoint 2.1, gdb_tuple::main () at src/main.rs:3
3 dbg!(a);
(gdb) p a
$1 = core::option::Option<(u32, u32)>::Some((
0,
1
))
(gdb) p a.0
$2 = (
0,
1
)
(gdb) p a.0.0
field name expected
(gdb) p $2.0
$3 = 0
```
`a.0` to access the `Some` admittedly seems like a bit of black magic. Helper functions like `unwrap` are consistently not available in the binaries that I've tried to debug thus far, so I'm not sure what else I should be doing.
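Until `a.0.0` parses, the two-step form via a gdb convenience variable (the same trick as the `$2.0` line in the transcript above) works as a stopgap:

```
(gdb) set $t = a.0
(gdb) p $t.0
```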
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (439284741 2024-10-21)
binary: rustc
commit-hash: 4392847410ddd67f6734dd9845f9742ff9e85c83
commit-date: 2024-10-21
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
``` | C-bug,WG-debugging | low | Critical |
2,625,662,762 | pytorch | [ROCm] sdpa group query attention bf16 numeric error | ### 🐛 Describe the bug
Hi AMD Team,
On MI300X, PyTorch nightly's grouped-query attention runs into numeric errors. I have confirmed on an H100 that this script does not have numeric errors.
Could you look into this and potentially add a numeric unit test for it?
cc: @hliuca
## ROCm Error
```python
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 8388584 / 8388608 (100.0%)
Greatest absolute difference: 0.99609375 at index (0, 0, 0, 15) (up to 1e-05 allowed)
Greatest relative difference: inf at index (0, 0, 1, 0) (up to 0.016 allowed)
```
## Reprod Script
```python
import torch
from torch.nn.functional import scaled_dot_product_attention
batch = 4
seq_len_q = 1024
seq_len_kv = 1024
D = 128
query = torch.randn(batch, 32, seq_len_q, D, device='cuda', dtype=torch.bfloat16)
key = torch.randn(batch, 8, seq_len_kv, D, device='cuda', dtype=torch.bfloat16)
value = torch.randn(batch, 8, seq_len_kv, D, device='cuda', dtype=torch.bfloat16)
output_gqa = scaled_dot_product_attention(query, key, value, is_causal=True, enable_gqa=True)
key = key.repeat_interleave(4,1)
value = value.repeat_interleave(4,1)
output_repeat = scaled_dot_product_attention(query, key, value, is_causal=True)
torch.testing.assert_close(output_gqa, output_repeat)
```
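For reference, the equivalence the script above relies on — `enable_gqa=True` versus explicitly repeating KV heads with `repeat_interleave` — can be checked against a small framework-independent NumPy reference. This is only a sketch with hypothetical small shapes, and the causal mask is omitted for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # q, k, v: (heads, seq, dim); plain scaled dot-product attention, no causal mask.
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax((q @ k.transpose(0, 2, 1)) * scale) @ v

rng = np.random.default_rng(0)
h_q, h_kv, s, d = 8, 2, 16, 4
group = h_q // h_kv
q = rng.standard_normal((h_q, s, d))
k = rng.standard_normal((h_kv, s, d))
v = rng.standard_normal((h_kv, s, d))

# GQA: query head i attends to KV head i // group (repeat_interleave layout).
out_gqa = np.concatenate(
    [attention(q[i:i + 1], k[i // group:i // group + 1], v[i // group:i // group + 1])
     for i in range(h_q)]
)
# Same thing with KV heads materialized up front, as in the repro script.
out_repeat = attention(q, np.repeat(k, group, axis=0), np.repeat(v, group, axis=0))
assert np.allclose(out_gqa, out_repeat)
```

Both paths are mathematically identical, so any backend where they diverge by more than rounding error is miscomputing one of them.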
### Versions
## ROCm Versions
```
~$ pip list | grep torch
pytorch-triton-rocm 3.1.0+cf34004b8a
torch 2.6.0.dev20241030+rocm6.2
```
## H100 Versions
```
~$ pip list | grep torch
pytorch-triton 3.1.0+cf34004b8a
torch 2.6.0.dev20241030+cu124
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: rocm,triaged | low | Critical |
2,625,685,485 | vscode | Test and iterate on TS expandable hover apis | To try it:
- Build this TS branch and set it as your workspace TS version: https://github.com/microsoft/TypeScript/pull/59940
- In VS Code `"typescript.experimental.expandableHover": true`
Let's test out TS's current implementation of expandable hovers. The main question is whether the UI is good enough as-is, or if it turns out we need something different, which may require API changes. | feature-request,editor-hover,on-testplan | low | Major |
2,625,698,148 | flutter | Semantics inside RawAutocomplete are displayed behind other UI elements | ### Steps to reproduce
1. Create an AutoComplete widget with a few items
2. Place that widget as a sibling in a column of other widgets
3. Enter Talkback on iOS
4. Run the app
5. Select the RawAutoComplete field
6. Start typing
7. Click on an option to select it
### Expected results
The dropdown option displayed in the interface should be selected
### Actual results
The UI content under the dropdown is selected
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
// Auto-enable accessibility for our Blind and Low Vision customers (see
// https://docs.flutter.dev/development/accessibility-and-localization/accessibility#screen-readers).
RendererBinding.instance.ensureSemantics();
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return const MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
Widget build(BuildContext context) {
return Scaffold(
drawer: SafeArea(
child: Container(
width: 350,
decoration: const BoxDecoration(color: Colors.white),
child: CustomScrollView(
slivers: [
SliverList.list(
children: List.filled(
5,
const ExpansionTile(title: Text("TESTTILE")),
))
],
),
),
),
body: CustomScrollView(
slivers: [
SliverList.list(
children: (List.filled(
5,
const ExpansionTile(title: Text("TESTTILE")) as Widget,
)) +
[const AutocompleteDemo()] +
List.filled(
5,
const ExpansionTile(title: Text("TESTTILE")) as Widget,
),
)
],
),
// This trailing comma makes auto-formatting nicer for build methods.
);
}
}
class AutocompleteDemo extends StatelessWidget {
const AutocompleteDemo({super.key});
@override
Widget build(BuildContext context) {
return Autocomplete<String>(
optionsBuilder: (TextEditingValue textEditingValue) {
return const <String>["One", "Two", "Three", "Four", "Five"];
},
fieldViewBuilder:
(context, textEditingController, focusNode, onFieldSubmitted) =>
TextField(
controller: textEditingController,
focusNode: focusNode,
onSubmitted: (_) => onFieldSubmitted(),
decoration: const InputDecoration(
border: OutlineInputBorder(),
hintText: 'Type something',
),
),
optionsViewBuilder: (context, onSelected, options) => Material(
elevation: 4.0,
child: ListView(
children: options
.map((String option) => ListTile(
title: Text(option),
onTap: () {
onSelected(option);
},
))
.toList(),
),
),
onSelected: (String selection) {
print('You just selected $selection');
},
);
}
}
class RawAutocompleteDemo extends StatelessWidget {
const RawAutocompleteDemo({super.key});
@override
Widget build(BuildContext context) {
return RawAutocomplete<String>(
optionsBuilder: (TextEditingValue textEditingValue) {
return const <String>["One", "Two", "Three", "Four", "Five"];
},
fieldViewBuilder:
(context, textEditingController, focusNode, onFieldSubmitted) =>
TextField(
controller: textEditingController,
focusNode: focusNode,
onSubmitted: (_) => onFieldSubmitted(),
decoration: const InputDecoration(
border: OutlineInputBorder(),
hintText: 'Type something',
),
),
optionsViewBuilder: (context, onSelected, options) => Material(
elevation: 4.0,
child: ListView(
children: options
.map((String option) => ListTile(
title: Text(option),
onTap: () {
onSelected(option);
},
))
.toList(),
),
),
onSelected: (String selection) {
print('You just selected $selection');
},
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
In this video, the only time I swipe forwards is to scroll the view to the bottom.
At all other moments, I am tapping the screen directly to try to select the menu items.
https://github.com/user-attachments/assets/643c893e-dec1-49e1-9d9a-90cceb81ea17
The next images are the same screen, one with the semantics debugger, from the test code above:


The next images are from our production app with the same idea. You can see here that the dropdown semantics are still there, they are just being placed below in the tree:


</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on Edward’s iPhone in debug mode...
Automatically signing iOS for device deployment using specified development team in Xcode project: MZ7437V6KU
Xcode build done. 16.3s
You may be prompted to give access to control Xcode. Flutter uses Xcode to run your app. If access is not allowed, you can change this through your Settings > Privacy & Security > Automation.
Connecting to VM Service at ws://127.0.0.1:51287/DPnMhMGurwc=/ws
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
Note - I have overridden Android Studio so I can use the correct version of Java.
```console
➜ flutter doctor
Doctor summary (to see all details, run flutter
doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0
24A335 darwin-arm64, locale en-NZ)
[!] Android toolchain - develop for Android devices
(Android SDK version 34.0.0)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install
"cmdline-tools;latest"`
See
https://developer.android.com/studio/command-li
ne for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to
accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup
for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[!] Android Studio (version unknown)
✗ Unable to determine Android Studio version.
✗ android-studio-dir = /
✗ Android Studio not found at /Contents
[✓] VS Code (version 1.95.0)
[✓] Connected device (4 available)
[✓] Network resources
! Doctor found issues in 2 categories.
```
</details>
| framework,a: accessibility,has reproducible steps,P2,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | low | Critical |
2,625,741,686 | svelte | docs improvements | ### Describe the problem
implement suggestions from #ambassadors
### Describe the proposed solution
- untrack
- how to use interfaces with $props
- how to type a component (ReturnType<typeof Component>)
- links between tutorial and docs
### Importance
nice to have | documentation | low | Minor |
2,625,761,482 | godot | Shadow Caster Mask does not affect LightmapGI baking for lights with Static bake mode | - *Related to https://github.com/godotengine/godot/issues/56611.*
### Tested versions
- Reproducible in: 4.4.dev https://github.com/godotengine/godot/commit/7187c251da3fabbfe1dbc87d2c33692bb6d9823b
### System information
Godot v4.4.dev (7187c251d) - Fedora Linux 41 (KDE Plasma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 560.35.03) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
The [Light3D Shadow Caster Mask property](https://github.com/godotengine/godot/pull/85338) does not affect lights with Static bake mode when baking lightmaps.
Like https://github.com/godotengine/godot/issues/56611, this may be considered to be intended (as disabling shadow casting for certain objects may be done for performance reasons, which is a moot point for an offline baking process). However, this can be an issue if the shadow casting is disabled for aesthetic reasons or for accurate [shadowmask baking](https://github.com/godotengine/godot/pull/85653) (for DirectionalLight3D).
### Real-time light

### Baked light with Static bake mode
*Notice how some objects that previously didn't cast shadows now cast shadows.*

### Steps to reproduce
- Create meshes, some on visual layer 1, some on visual layer 2.
- Create lights with shadows enabled. Set their shadow caster mask to affect only layer 1.
- Set lights to **Static** bake mode. Before baking, notice that certain objects don't cast shadows (which is expected).
- Bake lightmaps. Notice how all objects cast shadows now.
### Minimal reproduction project (MRP)
[test_shadow_caster_mask_lightmap.zip](https://github.com/user-attachments/files/17580783/test_shadow_caster_mask_lightmap.zip)
| bug,discussion,topic:rendering,topic:3d | low | Major |
2,625,818,320 | godot | LightmapGI leaks on box PrimitiveMeshes due to poor UV2 generation quality | ### Tested versions
- Reproducible in: 4.4.dev https://github.com/godotengine/godot/commit/7187c251da3fabbfe1dbc87d2c33692bb6d9823b
### System information
Godot v4.4.dev (7187c251d) - Fedora Linux 41 (KDE Plasma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 560.35.03) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
LightmapGI leaks on box PrimitiveMeshes due to poor UV2 generation quality. This does not occur when using scenes imported from glTF files (created using Blender), regardless of whether multiple cubes were assembled together in Godot or the scene design was done in Blender entirely as a single object. This occurs even at Ultra quality with the denoiser disabled, and bicubic sampling disabled in the Project Settings.
The issue is less noticeable at higher texel scales, since this is resolution-dependent.
The geometry on all scenes is 100% identical, down to the triangulation order for the *Multiple Imported Scenes* example. The *Single Imported Scene* has a different triangulation order for its boxes, but that shouldn't affect UV2 generation quality noticeably.
I haven't tested whether this occurs with other PrimitiveMeshes, but BoxMesh is likely the most prominent example.

cc @BastiaanOlij
### Scene assembled using box PrimitiveMeshes
*Notice the light leaking on boxes' edges, or overly dark edges.*

### Scene assembled using boxes exported from Blender
*Notice the lack of light leaking or overly dark edges.*

### Steps to reproduce
- Create a MeshInstance3D with a BoxMesh and enable **Add UV2** in the BoxMesh resource's properties.
- Duplicate the MeshInstance3D a few times to create a basic layout.
- Create a Cube in Blender and export it to glTF. Import it in Godot, with its global illumination mode set to **Static Lightmaps** in the Import dock.
- Duplicate the imported cube a few times to create the same basic layout as above.
- Create a LightmapGI node and bake lightmaps.
### Minimal reproduction project (MRP)
[test_lightmap_uv2.zip](https://github.com/user-attachments/files/17581023/test_lightmap_uv2.zip)
| bug,topic:rendering,topic:3d | medium | Major |
2,625,824,915 | PowerToys | When PowerToys is running, switching the input method while renaming a desktop folder triggers an error | ### Microsoft PowerToys version
0.79.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
1. Make sure PowerToys is running.
2. Create a new folder on the desktop and try to name it, or rename an existing folder.
3. Press Ctrl + Space to switch Microsoft Pinyin between Chinese and English; the bug appears at this point.
### ✔️ Expected Behavior
1. Make sure PowerToys is running.
2. Create a new folder on the desktop and try to name it, or rename an existing folder.
3. Press Ctrl + Space to switch Microsoft Pinyin between Chinese and English; the switch happens normally.
### ❌ Actual Behavior
1. Make sure PowerToys is running.
2. Create a new folder on the desktop and try to name it, or rename an existing folder.
3. Press Ctrl + Space to switch Microsoft Pinyin between Chinese and English; an error dialog pops up and input is interrupted.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,625,835,076 | go | net: ListenMulticastUDP fails if Ethernet is unplugged | ### Go version
go version 1.22.8 linux/amd64
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
My VirtualBox VM has two Ethernet interfaces - enp0s3 and enp0s8. I have noticed a few issues with how `net.ListenMulticastUDP` works, all seemingly due to how it does `IP_ADD_MEMBERSHIP`.
```go
package main
import (
"fmt"
"net"
"syscall"
)
func main() {
intf, err := net.InterfaceByName("enp0s3") // or enp0s8, depending on the test
if err != nil {
panic(err)
}
addr := &net.UDPAddr{IP: net.IPv4(239, 255, 255, 250).To4(), Port: 1900}
c, err := net.ListenMulticastUDP("udp", intf, addr)
if err != nil {
panic(err)
}
defer c.Close()
buf := make([]byte, 1000)
for {
n, _, err := c.ReadFrom(buf)
if err != nil {
panic(err)
}
fmt.Println(string(buf[:n]))
}
}
```
### What did you see happen?
If I run the above code after telling VirtualBox to disconnect the cable from enp0s3, I get:
> panic: listen udp 239.255.255.250:1900: setsockopt: no such device
I have tracked the error down to this specific call.
https://github.com/golang/go/blob/6d39245514c675cdea5c7fd7f778e97bf0728dd5/src/net/sockoptip_posix.go#L19
Meanwhile, if I change the sample program to bind to enp0s8 instead, and I tell VirtualBox to disconnect enp0s8 (and re-connect enp0s3), what actually happens is it ends up bound to enp0s3, as confirmed by `netstat`.
> $ netstat -gn | grep 239.255.255.250
> enp0s3          1          239.255.255.250
I believe what is happening is that `setIPv4MreqToInterface` ends up leaving `mreq.Interface` as 0.0.0.0 because the interface in question has no IP addresses when the cable is unplugged. Since enp0s3 is my default interface:
1. If I intend to bind to enp0s3 and its cable is unplugged, the kernel is unable to do whatever it needs to (due to some missing route table entry?) and gives the aforementioned error.
2. If I intend to bind to enp0s8 and its cable is unplugged, the kernel defaults to enp0s3 instead.
It is worth noting we have also observed this issue in "the real world" (i.e., not in a VM) - `net.ListenMulticastUDP` will hard fail if the kernel has no route table entry at that moment, which can happen if the Ethernet cable happens to be unplugged.
My experiments show that using `SetsockoptIPMreqn` instead of `SetsockoptIPMreq` to do `IP_ADD_MEMBERSHIP` (and passing the interface index instead of its IP address) seems to make it work as expected in both cases.
```go
syscall.SetsockoptIPMreqn(int(fd), syscall.IPPROTO_IP, syscall.IP_ADD_MEMBERSHIP, &syscall.IPMreqn{
Multiaddr: [4]byte{addr.IP[0], addr.IP[1], addr.IP[2], addr.IP[3]},
Ifindex: int32(intf.Index),
})
```
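For illustration, the 12-byte `struct ip_mreqn` that `SetsockoptIPMreqn` passes can also be packed by hand; a hedged Python sketch follows (the actual join is left commented out since it needs a live Linux socket):

```python
import socket
import struct

def ip_mreqn(group: str, ifindex: int) -> bytes:
    # struct ip_mreqn { struct in_addr imr_multiaddr; struct in_addr imr_address; int imr_ifindex; }
    # Leaving imr_address as 0.0.0.0 is fine: the kernel selects the interface by index.
    return struct.pack("4s4si", socket.inet_aton(group), socket.inet_aton("0.0.0.0"), ifindex)

mreqn = ip_mreqn("239.255.255.250", 1)  # index 1 is typically loopback on Linux
assert len(mreqn) == 12  # the kernel tells ip_mreq (8 bytes) and ip_mreqn (12 bytes) apart by size

# The actual join (Linux only, not executed here):
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreqn)
```

Because the interface is identified by index rather than by an IP address, this form works even while the interface has no address assigned — exactly the unplugged-cable case above.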
I also noticed that x/net uses `MCAST_JOIN_GROUP` instead of `IP_ADD_MEMBERSHIP`.
https://github.com/golang/net/blob/f35fec92ec9213ee211cf45f451a5970386f7978/ipv4/sys_linux.go#L34
However, that sockopt is entirely undocumented and seems to be an internal detail of the kernel.
### What did you expect to see?
I expected `net.ListenMulticastUDP` to work as advertised regardless of the state of the Ethernet cable.
In particular, on Linux it should use `SetsockoptIPMreqn` so that it always does the right thing. On non-Linux, it should fail if a `net.Interface` was provided and the interface in question has no IP address. | NeedsInvestigation | low | Critical |
2,625,841,712 | ant-design | TreeNode should support a customizable native `title` tooltip | ### What problem does this feature solve?
Support customizing the native `title` tooltip. Currently the `title` is simply the content the TreeNode displays, so the hover tooltip cannot be changed to show different content. I tried customizing it via `titleRender`, but for DirectoryTree the result is unsatisfactory, because the `title` attribute is added to `ant-tree-node-content-wrapper`, which causes different titles to appear within the same row.
https://stackblitz.com/edit/react-jpqbhi?file=demo.tsx
https://github.com/user-attachments/assets/ef293502-31bb-45ac-8e8b-25f44c1951c8
### What does the proposed API look like?
Similar to Select, where the `title` and `label` properties of Options are separate.
| Inactive,unconfirmed | low | Minor |
2,625,848,313 | ant-design | Table should support virtual horizontal scrolling | ### What problem does this feature solve?
With a large number of columns there is noticeable stuttering. For example, with more than 30 columns containing SVG icons, virtual scrolling performance drops sharply; in testing it cannot hold 60 FPS. Please improve this.
### What does the proposed API look like?
virtual={true} scroll={{x: 1920, y: 500}}
| 💡 Feature Request | low | Minor |
2,625,850,666 | vscode | Cut/paste broken for CodeMirror editor in extension webview after updating VSCode | Type: <b>Bug</b>
- Create an extension using a webview that mounts a CodeMirror editor.
- Cut/paste doesn't work, though they do in version 1.94.
- Paste events are not emitted for CodeMirror's contenteditable div.
Strangely enough, duplicating the div or making another contenteditable div works. CodeMirror seems mostly event driven, so I tried to receive paste events by deleting existing event handlers, but didn't get anywhere. I'm hoping someone knows what changed inside VSCode to treat this situation differently.
Here's a simple **[example extension](https://github.com/canislupaster/codemirror-vscode-extension-sample)** based off the Cat Coding webview sample. Thank you for reading.
VS Code version: Code - Insiders 1.96.0-insider (ea6e8e22fcc0f6c4511fc7639a9f4a0d53209b5d, 2024-10-30T23:45:51.779Z)
OS version: Darwin x64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 x 2300)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|10, 8, 6|
|Memory (System)|32.00GB (3.90GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details>Extensions: none (except one being developed)<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
vscrpc:30624061
962ge761:30841072
pythongtdpath:30726887
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
945dj816:31013170
dvdeprecation:31040973
dwnewjupyter:31046869
2f103344:31071589
legacy_priority:31057981
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-t:31151552
cf971741:31144450
e80f6927:31120813
12bdf347:31141542
iacca1:31150324
notype1:31143044
dwcopilot:31158714
g7688163:31155431
```
</details>
<!-- generated by issue reporter --> | bug,webview | low | Critical |
2,625,852,982 | rust | Coroutine/Future `Send` bounds are too restrictive when taking references to `!Sync` values | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
fn main() {
let foo: &dyn Send = &async {
let cell = &std::cell::Cell::new(1234);
async {}.await;
cell.set(5678);
};
}
```
I expected to see this happen: Should compile successfully, like it does without the reference to Cell.
Instead, this happened: Compilation fails. Removing the intermediate reference to the Cell allows this to compile.
---
This is a fundamental limitation between `Send` and coroutine yield points. A Cell owned by a coroutine will not observe any data races as references to that Cell cannot (soundly) escape the coroutine across yield points. Yield points could use an alternative autotrait that represents "thread pinning". | T-lang,T-compiler,C-bug,A-coroutines,WG-async,E-needs-investigation | low | Critical |
2,625,874,470 | vscode | Latest vscode update download == "code_1.95.0-1730153583_amd64.deb" yields error. And is wrong architecture for my Chromebook == arm64 | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes!
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
About VSCODE:
Version: 1.94.2
Commit: 384ff7382de624fb94dbaf6da11977bba1ecd427
Date: 2024-10-09T16:08:44.566Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Linux arm64 6.6.46-04024-g9e7e147b4900
- VS Code Version: Version: 1.94.2
- OS Version: Linux arm64 6.6.46-04024-g9e7e147b4900
Steps to Reproduce:
1. Click on VScode Gear icon
2. Click Download Update ==yields==> code_1.95.0-1730153583_amd64.deb in Download folder
3. In Download folder, Dbl-Click on file: "code_1.95.0-1730153583_amd64.deb"
4. Up comes dialog:
Details
Application: code
Version: 1.95.0-1730153583
Description: Code editing. Redefined.
Visual Studio Code is a new choice of tool that combines the simplicity of a code editor with what developers need for the core edit-build-debug cycle. See https://code.visualstudio.com/docs/setup/linux for installation instructions and FAQ.
5. click Install button
6. error says:
The following packages have unmet dependencies:
code: Depends: libasound2 (>= 1.0.17) but it is not installable
Depends: libatk-bridge2.0-0 (>= 2.5.3) but it is not installable
Depends: libatk1.0-0 (>= 2.2.0) but it is not installable
Depends: libatspi2.0-0 (>= 2.9.90) but it is not installable
Depends: libc6 (>= 2.14) but it is not installable
Depends: libc6 (>= 2.16) but it is not installable
Depends: libc6 (>= 2.17) but it is not installable
Depends: libc6 (>= 2.2.5) but it is not installable
Depends: libc6 (>= 2.25) but it is not installable
Depends: libc6 (>= 2.28) but it is not installable
Depends: libcairo2 (>= 1.6.0) but it is not installable
Depends: libcurl3-gnutls but it is not installable or
libcurl3-nss but it is not installable or
libcurl4 but it is not installable or
libcurl3 but it is not installable
Depends: libdbus-1-3 (>= 1.9.14) but it is not installable
Depends: libdrm2 (>= 2.4.75) but it is not installable
Depends: libexpat1 (>= 2.1~beta3) but it is not installable
Depends: libgbm1 (>= 17.1.0~rc2) but it is not installable
Depends: libglib2.0-0 (>= 2.37.3) but it is not installable
Depends: libgtk-3-0 (>= 3.9.10) but it is not installable
Depends: libgtk-3-0 (>= 3.9.10) but it is not installable or
libgtk-4-1 but it is not installable
Depends: libnspr4 (>= 2:4.9-2~) but it is not installable
Depends: libnss3 (>= 2:3.30) but it is not installable
Depends: libnss3 (>= 3.26) but it is not installable
Depends: libpango-1.0-0 (>= 1.14.0) but it is not installable
Depends: libx11-6 but it is not installable
Depends: libx11-6 (>= 2:1.4.99.1) but it is not installable
Depends: libxcb1 (>= 1.9.2) but it is not installable
Depends: libxcomposite1 (>= 1:0.4.4-1) but it is not installable
Depends: libxdamage1 (>= 1:1.1) but it is not installable
Depends: libxext6 but it is not installable
Depends: libxfixes3 but it is not installable
Depends: libxkbcommon0 (>= 0.5.0) but it is not installable
Depends: libxkbfile1 (>= 1:1.1.0) but it is not installable
Depends: libxrandr2 but it is not installable
Recommends: libvulkan1 but it is not installable
7. THE END | bug,install-update,linux | low | Critical |
2,625,917,010 | godot | Cannot use `Add Extra Call Argument` and `Unbind Signal Arguments` at the same time in editor | ### Tested versions
- Reproducible in v4.4.dev3.official [f4af8201b]
### System information
Windows 11, Vulkan API 1.3.280 - Forward+ - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 3060 Ti
### Issue description
When connecting a method to a signal in the editor, we cannot use `Add Extra Call Argument` and `Unbind Signal Arguments` at the same time. As soon as we set a number greater than 0 in `Unbind Signal Arguments`, the `Add Extra Call Argument` area turns gray and any extra arguments are discarded after confirmation.
They should be usable at the same time, as this code demonstrates:
```
func _ready() -> void:
var callable = test.bind(true, true, true).unbind(1)
custom_signal.connect(callable)
custom_signal.emit(1, 2, 3)
```


<img width="329" alt="屏幕截图 2024-10-31 114702" src="https://github.com/user-attachments/assets/bd65ecf8-88d2-4a03-bf48-dc4c26e4eb9d">
### Steps to reproduce
1. create a 2D node scene and attach a script with the code below
2. connect the custom_signal and add some extra arguments and set the unbind signal argument field
```
extends Node2D
signal custom_signal(a, b, c)
func test(a, b, c, d, e):
printt(a, b, c, d, e)
#func _ready() -> void:
#var callable = test.bind(true, true, true).unbind(1)
#custom_signal.connect(callable)
#custom_signal.emit(1, 2, 3)
```
### Minimal reproduction project (MRP)
NA | enhancement,discussion,topic:editor | low | Major |
2,625,920,335 | puppeteer | [Bug]: page.createCDPSession() fails when running Puppeteer in Chrome extensions | ### Minimal, reproducible example
```TypeScript
import {
connect,
ExtensionTransport,
} from 'puppeteer-core/lib/esm/puppeteer/puppeteer-core-browser.js';
const tab = await chrome.tabs.create({
url,
});
const browser = await connect({
transport: await ExtensionTransport.connectTab(tab.id),
});
const [page] = await browser.pages();
const target = await page.createCDPSession(); // try to get the CDPSession.
browser.disconnect();
```
### Background
Running Puppeteer in Chrome extensions.
### Expectation
Get the CDPSession.
### Reality
When `const target = await page.createCDPSession();` runs, it fails and fires the log below.
```
// error log for `const target = await page.createCDPSession();`
puppeteer:protocol:SEND ►: {"method":"Target.attachToTarget","params":{"targetId":"pageTargetId","flatten":true},"id":5}
puppeteer:protocol:RECV ◀: {"id":5,"sessionId":"pageTargetSessionId","method":"Target.attachToTarget","error":{"message":"{\"code\":-32000,\"message\":\"Not allowed\"}"}}
```
I also noticed an error log **before** it; I don't know if it's relevant.
```
puppeteer:protocol:SEND ►: {"method":"Runtime.runIfWaitingForDebugger","id":2,"sessionId":"tabTargetSessionId"}
// omit some logs
puppeteer:protocol:RECV ◀: {"id":2,"sessionId":"tabTargetSessionId","method":"Runtime.runIfWaitingForDebugger","error":{"message":"{\"code\":-32001,\"message\":\"Session with given id not found.\"}"}}
puppeteer:error: ProtocolError: Protocol error (Runtime.runIfWaitingForDebugger): {"code":-32001,"message":"Session with given id not found."}
at <instance_members_initializer> (CallbackRegistry.js:89:14)
at new Callback (CallbackRegistry.js:93:16)
at CallbackRegistry.create (CallbackRegistry.js:19:26)
at Connection._rawSend (Connection.js:86:26)
at CdpCDPSession.send (CDPSession.js:63:33)
at #onAttachedToTarget (ChromeTargetManager.js:283:21)
at listener (ChromeTargetManager.js:123:42)
at mitt.js:36:7
at Array.map (<anonymous>)
at Object.emit (mitt.js:35:20)
```
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
23.6.1
### Node version
20.18.0
### Package manager
pnpm
### Package manager version
9.12.2
### Operating system
Windows | bug,upstream,confirmed,P3 | low | Critical |
2,625,949,151 | tensorflow | Significant Discrepancy in `tf.linalg.triangular_solve` Results Between CPU and GPU | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.18.0-rc2-4-g6550e4bd802 2.18.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 22.04.4 LTS x86_64
### Mobile device
_No response_
### Python version
3.10.0
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When using `tf.linalg.triangular_solve` with large matrices or specific triangular matrix conditions (e.g., `upper=True`, `transpose=True`, `unitriangular=True`), the GPU results significantly differ from the CPU results.
The discrepancy includes extremely large Mean Absolute Error (MAE) values and an infinite Mean Squared Error (MSE), indicating a possible issue in the GPU implementation of the function.
### Standalone code to reproduce the issue
[colab](https://colab.research.google.com/drive/1zXcOAkxHV6snaMyWGMeKIukf5IVBd0Bh?usp=sharing)
[Safe Tensors](https://drive.google.com/file/d/1n1JyugXNAS9M2wmnk9u9vCVuHz0uE3R6/view?usp=sharing)
```python
import tensorflow as tf
import numpy as np
from safetensors.torch import load_file
def set_seed(seed_value=42):
"""Sets the random seed for reproducibility."""
np.random.seed(seed_value)
tf.random.set_seed(seed_value)
def tensorflow_version(input, cpu=True):
set_seed()
if cpu:
device_string = "/cpu:0"
else:
device_string = "/gpu:0"
with tf.device(device_string):
b_tensor = tf.constant(input["b"])
A_tensor = tf.constant(input["A"])
upper = input.get("upper", True)
transpose = input.get("transpose", False)
unitriangular = input.get("unitriangular", False)
solution = tf.linalg.triangular_solve(
A_tensor, b_tensor, lower=not upper, adjoint=transpose
)
return {"triangular_solve_solution": solution.numpy()}
def load_safe_tensor(file_path):
safe_tensor_data = load_file(file_path)
A_tensor = safe_tensor_data["A"]
b_tensor = safe_tensor_data["b"]
return {
"A": tf.convert_to_tensor(A_tensor.numpy()),
"b": tf.convert_to_tensor(b_tensor.numpy()),
}
def calculate_differences(cpu_result, gpu_result):
diff = np.abs(cpu_result - gpu_result)
mae = np.mean(diff)
mse = np.mean(diff**2)
rmse = np.sqrt(mse)
max_diff = np.max(diff)
mean_relative_diff = np.mean(diff / (np.abs(cpu_result) + 1e-10))
return {
"Mean Absolute Error": mae,
"Mean Squared Error": mse,
"Root Mean Squared Error": rmse,
"Maximum Absolute Difference": max_diff,
"Mean Relative Difference": mean_relative_diff,
}
def main():
file_path = "tensorflow_triangular_solve_3.safetensors"
input_data = load_safe_tensor(file_path)
input_data["upper"] = True
input_data["transpose"] = True
input_data["unitriangular"] = True
result_cpu = tensorflow_version(input_data, cpu=True)
print("CPU Result:")
print(result_cpu)
if tf.config.list_physical_devices("GPU"):
result_gpu = tensorflow_version(input_data, cpu=False)
print("GPU Result:")
print(result_gpu)
if result_gpu:
cpu_solution = result_cpu["triangular_solve_solution"]
gpu_solution = result_gpu["triangular_solve_solution"]
differences = calculate_differences(cpu_solution, gpu_solution)
for key, value in differences.items():
print(f"{key}: {value}")
else:
print("GPU not available.")
if __name__ == "__main__":
main()
```
#### **Issue Summary:**
- The `tf.linalg.triangular_solve` function produces vastly different results on the GPU compared to the CPU. The differences include an enormous Mean Absolute Error (MAE) and an infinite Mean Squared Error (MSE), indicating a severe discrepancy between the CPU and GPU results.
- The issue appears to be related to the use of specific arguments like `upper=True`, `transpose=True`, and `unitriangular=True`, suggesting that there might be a numerical precision or stability issue in the GPU implementation.
- It is unclear why these large discrepancies occur, but they point to a critical inconsistency that could impact numerical reliability for users depending on TensorFlow’s triangular solve functionality.
### Relevant log output
```shell
Mean Absolute Error: 1.566790629150662e+21
Mean Squared Error: inf
Root Mean Squared Error: inf
Maximum Absolute Difference: 1.4265324671452624e+26
```
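Discrepancies like these can be cross-checked against a device-independent reference. Below is a minimal pure-Python back-substitution sketch for the upper-triangular case (the helper name and values are illustrative, not part of the report):

```python
# Reference solver for U x = b with an upper-triangular U, via back-substitution.
# With unit_diagonal=True the diagonal is treated as all ones, mirroring the
# semantics a "unitriangular" flag is usually expected to have.
def solve_upper_triangular(U, b, unit_diagonal=False):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s if unit_diagonal else s / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
b = [7.0, 8.0, 4.0]
print(solve_upper_triangular(U, b))  # [2.0, 2.0, 1.0]
```

Comparing both device outputs against such a reference (rather than only against each other) helps localize which backend diverges.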
| type:bug,comp:ops,TF 2.18 | medium | Critical |
2,625,957,425 | langchain | ChatAnthropicVertex not respecting `api_endpoint` or `base_url` | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import logging
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
force=True,
)
from langchain_google_vertexai.model_garden import ChatAnthropicVertex
chat = ChatAnthropicVertex(
model_name="claude-3-5-sonnet@20240620",
project="thecoo-data-platform",
base_url='https://example.com'
)
chat.invoke("hello")
```
### Error Message and Stack Trace (if applicable)
```
2024-10-31 13:30:21.065 DEBUG _trace - trace: receive_response_headers.started request=<Request [b'POST']>
2024-10-31 13:30:21.996 DEBUG _trace - trace: receive_response_headers.complete return_value=(b'HTTP/1.1', 400, b'Bad Request', [(b'Vary', b'Origin'), (b'Vary', b'X-Origin'), (b'Vary', b'Referer'), (b'Content-Type', b'application/json; charset=UTF-8'), (b'Content-Encoding', b'gzip'), (b'Date', b'Thu, 31 Oct 2024 04:30:21 GMT'), (b'Server', b'scaffolding on HTTPServer2'), (b'Cache-Control', b'private'), (b'X-XSS-Protection', b'0'), (b'X-Frame-Options', b'SAMEORIGIN'), (b'X-Content-Type-Options', b'nosniff'), (b'Alt-Svc', b'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000'), (b'Transfer-Encoding', b'chunked')])
2024-10-31 13:30:21.998 INFO _client - _send_single_request: HTTP Request: POST https://us-central1-aiplatform.googleapis.com/v1/projects/project_id/locations/us-central1/publishers/anthropic/models/claude-3-5-sonnet@20240620:rawPredict "HTTP/1.1 400 Bad Request"
2024-10-31 13:30:21.999 DEBUG _trace - trace: receive_response_body.started request=<Request [b'POST']>
2024-10-31 13:30:22.001 DEBUG _trace - trace: receive_response_body.complete
2024-10-31 13:30:22.002 DEBUG _trace - trace: response_closed.started
2024-10-31 13:30:22.003 DEBUG _trace - trace: response_closed.complete
........
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://us-central1-aiplatform.googleapis.com/v1/projects/project_id/locations/us-central1/publishers/anthropic/models/claude-3-5-sonnet@20240620:rawPredict'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
```
### Description
[the doc of ChatAnthropicVertex](https://api.python.langchain.com/en/latest/google_vertexai/model_garden/langchain_google_vertexai.model_garden.ChatAnthropicVertex.html#langchain_google_vertexai.model_garden.ChatAnthropicVertex) mentions the `api_endpoint` and `base_url` parameters, but neither has any effect (I tried both).
As can be seen from the logs, the HTTP request still went to `https://us-central1-aiplatform.googleapis.com`
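The expected precedence (an explicit endpoint override winning over the regional default) can be sketched with a hypothetical helper; this is not LangChain's actual code:

```python
# Hypothetical endpoint resolution: an explicit base_url/api_endpoint should
# take priority over the region-derived default that the logs show being used.
def resolve_endpoint(region="us-central1", base_url=None):
    if base_url:
        return base_url.rstrip("/")
    return f"https://{region}-aiplatform.googleapis.com"

print(resolve_endpoint())                                # regional default
print(resolve_endpoint(base_url="https://example.com"))  # override wins
```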
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
> Python Version: 3.11.0 | packaged by conda-forge | (main, Jan 15 2023, 05:44:48) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.3.14
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.129
> langchain_anthropic: 0.2.1
> langchain_elasticsearch: 0.3.0
> langchain_google_vertexai: 2.0.6
> langchain_groq: 0.2.0
> langchain_openai: 0.2.1
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.32
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.8
> anthropic: 0.34.2
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> elasticsearch[vectorstore-mmr]: Installed. No version info available.
> google-cloud-aiplatform: 1.71.0
> google-cloud-storage: 2.18.2
> groq: 0.11.0
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> langgraph-checkpoint: 2.0.0
> numpy: 1.26.4
> openai: 1.53.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
``` | 🤖:bug | low | Critical |
2,625,980,873 | godot | TreeItem.set_custom_as_button doesn't make TreeItem behave as Stylebox | ### Tested versions
Reproducible in 4.3.1.rc
### System information
Windows 10
### Issue description
Custom button styles are set on the Tree, but they have no effect.

Result:

### Steps to reproduce
```
extends Tree
@onready var tree: Tree = $"."
func _ready() -> void:
var child := tree.create_item().create_child()
child.set_cell_mode(0, TreeItem.CELL_MODE_CUSTOM)
child.set_custom_as_button(0, true)
child.set_text(0, "This is text")
```
### Minimal reproduction project (MRP)
[custom_as_button_mrp.zip](https://github.com/user-attachments/files/17582231/custom_as_button_mrp.zip)
two files here with Styles set | bug,topic:gui | low | Minor |
2,626,004,253 | pytorch | inspect.Signature with functools.partial partially applying tensors doesn't work | ### 🐛 Describe the bug
```
import torch
from triton.testing import do_bench
from torch.nn.attention.flex_attention import create_block_mask, flex_attention, noop_mask, BlockMask
import torch.nn.functional as F
import functools
torch.manual_seed(0)
import torch
torch.set_default_device('cuda')
def sliding_window(b, h, q_idx, kv_idx, val):
return (q_idx - kv_idx).abs() < val
sliding_window2 = functools.partial(sliding_window, val=torch.randn(()))
torch.compile(create_block_mask, fullgraph=True)(sliding_window2, None, None, 1024, 1024)
```
```
Traceback (most recent call last):
File "/home/chilli/local/pytorch/torch/_dynamo/variables/misc.py", line 383, in __init__
self.fn = self.inspected.get_function()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/functions.py", line 908, in get_function
return self.as_python_constant()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/functions.py", line 930, in as_python_constant
**{k: v.as_python_constant() for k, v in self.keywords.items()},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/functions.py", line 930, in <dictcomp>
**{k: v.as_python_constant() for k, v in self.keywords.items()},
^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/base.py", line 219, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: TensorVariable() is not a constant
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/chilli/repro.py", line 196, in <module>
block_mask = get_cp_flex_attn_bias("block_causal", tokens, eos_id=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/chilli/repro.py", line 190, in get_cp_flex_attn_bias
cp_flex_attn_bias.block_mask = torch.compile(create_block_mask, fullgraph=True)(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/eval_frame.py", line 554, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 1371, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 544, in __call__
return _compile(
^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 967, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 695, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 728, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
transformations(instructions, code_options)
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 229, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/convert_frame.py", line 657, in transform
tracer.run()
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 2892, in run
super().run()
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1100, in run
while self.step():
^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1012, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 620, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 2399, in CALL
self._call(inst)
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 2393, in _call
self.call_function(fn, args, kwargs)
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 947, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/functions.py", line 345, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/functions.py", line 124, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 953, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 3107, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 3235, in inline_call_
tracer.run()
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1100, in run
while self.step():
^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1012, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 620, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 2399, in CALL
self._call(inst)
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 2393, in _call
self.call_function(fn, args, kwargs)
File "/home/chilli/local/pytorch/torch/_dynamo/symbolic_convert.py", line 947, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/misc.py", line 963, in call_function
return self.fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/misc.py", line 373, in create
return InspectSignatureVariable(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_dynamo/variables/misc.py", line 389, in __init__
unimplemented("inspect.signature with non-constant function")
File "/home/chilli/local/pytorch/torch/_dynamo/exc.py", line 305, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: inspect.signature with non-constant function
```
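For contrast, stock CPython resolves the signature of such a partial without requiring the bound keyword to be a constant; a stdlib-only sketch with a float standing in for the tensor:

```python
import functools
import inspect

def sliding_window(b, h, q_idx, kv_idx, val):
    return abs(q_idx - kv_idx) < val

# Binding a keyword turns it into a keyword-only parameter with a default;
# the bound value's type (tensor, float, ...) is irrelevant to introspection.
fn = functools.partial(sliding_window, val=3.0)
sig = inspect.signature(fn)
print(list(sig.parameters))           # remaining parameters, including val
print(sig.parameters["val"].default)  # 3.0
```

Dynamo's `as_python_constant()` path is stricter than this, which is why the `TensorVariable` keyword trips the `unimplemented` branch in the traceback above.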
cc: @williamwen42 @anijain2305
### Versions
N/A
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | high priority,triaged,oncall: pt2,module: dynamo | low | Critical |
2,626,022,646 | rust | [rustdoc] issues of the three-big-buttons | PR #129545 introduced a new style for rustdoc API pages.
I appreciate the author's efforts, but the new style still has a few shortcomings.
- It's not as compact as the old style; more than one line of vertical space is wasted.
- It does not utilize the horizontal space to the right of the search box. (Do we really need such a big search box, nearly as wide as the screen?)
- The icon+text buttons are unnecessary; there are well-known icons for help/settings/folding, so no text is needed here.
- The three big buttons may attract the user's attention and distract from exploring the content.
- The top-right version+source links are moved to the top-left, while most other version+source links stay at the right, which makes for a big UX pain.
That PR said:
> The settings, help, and summary buttons are also too hard to recognize.
I don't think that's right. The gear icon for settings, the question-mark icon for help, and the +/- icon for folding are widely used all over the world; their meaning is clear. The top-right position of the window is also easy to find and reach. The search box has a '*? for more options*' tip in its placeholder, and you can press the '?' key to 'click' the help button. There is documentation in the help page to help you find the folding button. That's enough for these rarely used buttons. The folding (aka summary) button's icon could use a different color to improve recognition.
That PR made those buttons very big, bigger than any other item on the page. That's a UX issue too (as I said above).
**I propose**:
- Make the three big buttons smaller by removing the text (and adding tooltips), and move them to the right of the search box.
- Move the top-left version+source links back to the top-right (below the new three icon buttons).
- Change the color of the folding button's icon to improve its recognition.
- Change the order of the new three icon buttons to: help, settings, folding.
- Improve the help page to clarify the terminology: collapse/expand vs. summary/show-all vs. folding/unfolding.
----
links: [new style](https://doc.rust-lang.org/nightly/std/cell/struct.UnsafeCell.html), [old style](https://doc.rust-lang.org/stable/std/cell/struct.UnsafeCell.html)

----

| C-discussion,A-rustdoc-ui,T-rustdoc-frontend | low | Minor |
2,626,029,305 | flutter | fl_application_test.cc in Linux side of engine may fail to compile due to deprecated macro | Engine commit: c61c6d80fa17a684e527d0a1a9e33b9f121a61a8
Host system: linux-arm64
Target system: linux-arm64
Error:
```
[211/451] CXX obj/flutter/shell/platform/linux/flutter_linux_unittests.fl_application_test.o
FAILED: obj/flutter/shell/platform/linux/flutter_linux_unittests.fl_application_test.o
../../flutter/buildtools/linux-arm64/clang/bin/clang++ -MMD -MFobj/flutter/shell/platform/linux/flutter_linux_unittests.fl_application_test.o.d -DFLUTTER_ENGINE_NO_PROTOTYPES -DFLUTTER_LINUX_COMPILATION -DUSE_OPENSSL=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D_LIBCPP_DISABLE_AVAILABILITY=1 -D_LIBCPP_DISABLE_VISIBILITY_ANNOTATIONS -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DSK_FONTMGR_ANDROID_AVAILABLE -DSK_TYPEFACE_FACTORY_FREETYPE -DSK_FONTMGR_FREETYPE_DIRECTORY_AVAILABLE -DSK_FONTMGR_FREETYPE_EMBEDDED_AVAILABLE -DSK_FONTMGR_FREETYPE_EMPTY_AVAILABLE -DSK_GL -DSK_VULKAN -DSK_CODEC_DECODES_JPEG -DSK_CODEC_DECODES_PNG -DSK_CODEC_DECODES_ICO -DSK_CODEC_DECODES_WEBP -DSK_HAS_WUFFS_LIBRARY -DSK_CODEC_DECODES_GIF -DSK_XML -DFLUTTER_RUNTIME_MODE_DEBUG=1 -DFLUTTER_RUNTIME_MODE_PROFILE=2 -DFLUTTER_RUNTIME_MODE_RELEASE=3 -DFLUTTER_RUNTIME_MODE_JIT_RELEASE=4 -DDART_LEGACY_API=\[\[deprecated\]\] -DFLUTTER_RUNTIME_MODE=1 -DFLUTTER_JIT_RUNTIME=1 -DIMPELLER_DEBUG=1 -DIMPELLER_SUPPORTS_RENDERING=1 -DIMPELLER_ENABLE_OPENGLES=1 -DIMPELLER_ENABLE_VULKAN=1 -DSK_CODEC_DECODES_BMP -DSK_CODEC_DECODES_WBMP -DSK_ENABLE_DUMP_GPU -DSK_FORCE_AAA -DSK_LEGACY_IGNORE_DRAW_VERTICES_BLEND_WITH_NO_SHADER -DSK_DISABLE_LEGACY_METAL_BACKEND_SURFACE -DSK_DISABLE_LEGACY_PARAGRAPH_UNICODE -DSK_USE_LEGACY_BLUR_RASTER -DSK_DISABLE_LEGACY_SHADERCONTEXT -DSK_DISABLE_LOWP_RASTER_PIPELINE -DSK_FORCE_RASTER_PIPELINE_BLITTER -DSK_METAL_WAIT_UNTIL_SCHEDULED -DSK_DISABLE_EFFECT_DESERIALIZATION -DSK_R32_SHIFT=16 -DSK_ENABLE_PRECOMPILE -DSK_GANESH -DSK_USE_PERFETTO -I../.. 
-Igen -I../../flutter/third_party/libcxx/include -I../../flutter/third_party/libcxxabi/include -I../../flutter/build/secondary/flutter/third_party/libcxx/config -I../../flutter -I../../flutter/third_party/dart/runtime -I../../flutter/third_party/dart/runtime/include -Igen/flutter -Igen/flutter/impeller/runtime_stage -I../../flutter/third_party/flatbuffers/include -I../../flutter/third_party/skia -I../../flutter/third_party/googletest/googlemock/include -I../../flutter/third_party/googletest/googletest/include -fno-strict-aliasing -fstack-protector --param=ssp-buffer-size=8 -fPIC -pipe -pthread --target=aarch64-linux-gnu -DBORINGSSL_CLANG_SUPPORTS_DOT_ARCH -fcolor-diagnostics -Wall -Wextra -Wendif-labels -Werror -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-but-set-parameter -Wno-unused-but-set-variable -Wno-implicit-int-float-conversion -Wno-deprecated-copy -Wno-psabi -Wno-deprecated-literal-operator -Wno-unqualified-std-cast-call -Wno-non-c-typedef-for-linkage -Wno-range-loop-construct -fdebug-prefix-map=/home/ross/flutter-engine/src/= -no-canonical-prefixes -fvisibility=hidden --sysroot=/nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f -Wstring-conversion -Wnewline-eof -O2 -fno-ident -fdata-sections -ffunction-sections -g0 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/x56wmsqb0cwv09iqivb7i9fx9iy5zlkf-gtk+3-3.24.43-dev/include/gtk-3.0 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/as8vpbnch4a9n70x0v22pbmdp0zy0bj5-pango-1.52.2-dev/include/pango-1.0 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/9n0d1issr9pmaqix7jgzp214mlnz4sw7-harfbuzz-9.0.0-dev/include/harfbuzz 
-isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/22xgw9djkc5jrmm08f8qx8ki8a3yghy5-at-spi2-core-2.52.0-dev/include/atk-1.0 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/xya3gcnd2ald7yqw1chvfsbrvfdivykh-cairo-1.18.2-dev/include/cairo -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/sd61i6hj14jvf83brrih1irmg7r0vb3v-freetype-2.13.3-dev/include/freetype2 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/sd61i6hj14jvf83brrih1irmg7r0vb3v-freetype-2.13.3-dev/include -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/i2n1y998igf66wlz6b5yyc5fg1mjp5g7-gdk-pixbuf-2.42.12-dev/include/gdk-pixbuf-2.0 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0 -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/cw9qz08zwd1li8vd8lm0laywa6rsi3gs-glib-2.80.4/lib/glib-2.0/include -isystem../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/60vanm346k9kjx9adzpsqgci7m0z1b3n-libepoxy-1.5.10-dev/include -Wunreachable-code -Wno-newline-eof -fvisibility-inlines-hidden -std=c++17 -fno-rtti -nostdinc++ 
-nostdinc++ -fvisibility=hidden -fno-exceptions -Wno-inconsistent-missing-override -c ../../flutter/shell/platform/linux/fl_application_test.cc -o obj/flutter/shell/platform/linux/flutter_linux_unittests.fl_application_test.o
../../flutter/shell/platform/linux/fl_application_test.cc:11:38: error: 'G_APPLICATION_FLAGS_NONE' is deprecated: Use 'G_APPLICATION_DEFAULT_FLAGS' instead [-Werror,-Wdeprecated-declarations]
11 | "com.example.TestApplication", G_APPLICATION_FLAGS_NONE);
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/gio/gioenums.h:1545:28: note: 'G_APPLICATION_FLAGS_NONE' has been explicitly marked deprecated here
1545 | G_APPLICATION_FLAGS_NONE GIO_DEPRECATED_ENUMERATOR_IN_2_74_FOR(G_APPLICATION_DEFAULT_FLAGS),
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/gio/gio-visibility.h:858:50: note: expanded from macro 'GIO_DEPRECATED_ENUMERATOR_IN_2_74_FOR'
858 | #define GIO_DEPRECATED_ENUMERATOR_IN_2_74_FOR(f) GLIB_DEPRECATED_ENUMERATOR_FOR (f)
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/glib/gmacros.h:1313:43: note: expanded from macro 'GLIB_DEPRECATED_ENUMERATOR_FOR'
1313 | #define GLIB_DEPRECATED_ENUMERATOR_FOR(f) G_DEPRECATED_FOR(f)
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/glib/gmacros.h:1273:44: note: expanded from macro 'G_DEPRECATED_FOR'
1273 | #define G_DEPRECATED_FOR(f) __attribute__((__deprecated__("Use '" #f "' instead")))
| ^
../../flutter/shell/platform/linux/fl_application_test.cc:16:13: error: 'G_APPLICATION_FLAGS_NONE' is deprecated: Use 'G_APPLICATION_DEFAULT_FLAGS' instead [-Werror,-Wdeprecated-declarations]
16 | G_APPLICATION_FLAGS_NONE);
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/gio/gioenums.h:1545:28: note: 'G_APPLICATION_FLAGS_NONE' has been explicitly marked deprecated here
1545 | G_APPLICATION_FLAGS_NONE GIO_DEPRECATED_ENUMERATOR_IN_2_74_FOR(G_APPLICATION_DEFAULT_FLAGS),
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/gio/gio-visibility.h:858:50: note: expanded from macro 'GIO_DEPRECATED_ENUMERATOR_IN_2_74_FOR'
858 | #define GIO_DEPRECATED_ENUMERATOR_IN_2_74_FOR(f) GLIB_DEPRECATED_ENUMERATOR_FOR (f)
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/glib/gmacros.h:1313:43: note: expanded from macro 'GLIB_DEPRECATED_ENUMERATOR_FOR'
1313 | #define GLIB_DEPRECATED_ENUMERATOR_FOR(f) G_DEPRECATED_FOR(f)
| ^
../../../../../../nix/store/l8f409nhbvgl4xgd3ka5bvqjqdr39l6w-flutter-engine-toolchain-af0f0d559c8a87d912a20971bbd84afc80a54b0f/nix/store/5bhcbbcl7p77zv5hx5bdnfggjlpmqzbq-glib-2.80.4-dev/include/glib-2.0/glib/gmacros.h:1273:44: note: expanded from macro 'G_DEPRECATED_FOR'
1273 | #define G_DEPRECATED_FOR(f) __attribute__((__deprecated__("Use '" #f "' instead")))
| ^
2 errors generated.
[218/451] CXX obj/flutter/shell/platform/embedder/tests/embedder_unittests.embedder_gl_unittests.o
ninja: build stopped: subcommand failed.
```
Responsible file: https://github.com/flutter/engine/blob/c61c6d80fa17a684e527d0a1a9e33b9f121a61a8/shell/platform/linux/fl_application_test.cc
This is caused by GLib 2.80.4, which deprecates `G_APPLICATION_FLAGS_NONE` in favor of `G_APPLICATION_DEFAULT_FLAGS`. | a: tests,engine,platform-linux,P2,team-linux,triaged-linux | low | Critical |
2,626,049,263 | kubernetes | Refactor system component Metrics: Move away from Global Variables | ### What would you like to be added?
Refactor the current approach to metrics in Kubernetes system components by moving away from global (package-level) variable definitions and avoiding registration in a global store (registry). Instead, implement instance-based metric definitions and registrations to increase modularity and flexibility within components.
Context:
Currently, most metrics for system components are defined as global variables within the respective packages. For example: https://github.com/kubernetes/kubernetes/blob/7c56aa5a58904ec9910b6bf29385a8b85fb115be/staging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go#L65-L297
They are also registered in the global registry (`legacyregistry`); for example, https://github.com/kubernetes/kubernetes/blob/7c56aa5a58904ec9910b6bf29385a8b85fb115be/staging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go#L407.
### Why is this needed?
Though global variables are convenient to use, they make testing challenging. For example, in the [integration test](https://github.com/kubernetes/kubernetes/blob/7c56aa5a58904ec9910b6bf29385a8b85fb115be/test/integration/metrics/metrics_test.go), all metrics are shared: they are initialized once and are then hard to reset between sub-tests.
Moving to instance-based metrics would enhance maintainability and improve alignment with best practices for modular design in large codebases, supporting Kubernetes’ scalability and reliability goals.
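The shape of the refactor is language-agnostic; a minimal sketch in Python (names are illustrative, not Kubernetes APIs) shows why instance-level registries make per-test isolation trivial:

```python
# Instance-based metrics: each Registry owns its own counters, so a test can
# construct a fresh one and discard it, instead of resetting global state.
class Counter:
    def __init__(self, name):
        self.name = name
        self.value = 0

    def inc(self):
        self.value += 1

class Registry:
    def __init__(self):
        self._counters = {}

    def counter(self, name):
        # Return the existing counter for this name, or create and cache one.
        return self._counters.setdefault(name, Counter(name))

r1, r2 = Registry(), Registry()  # e.g. one per sub-test
r1.counter("requests").inc()
print(r1.counter("requests").value)  # 1
print(r2.counter("requests").value)  # 0 -- no bleed-through between instances
```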
Ref: https://google.github.io/styleguide/go/best-practices.html#global-state | kind/feature,sig/instrumentation,triage/accepted | low | Minor |
2,626,061,571 | ollama | Error: unknown error was encountered while running the model GGML_ASSERT(i01 >= 0 && i01 < ne01) failed | ### What is the issue?
[Nanollava](https://ollama.com/qnguyen3/nanollava) returns `GGML_ASSERT(i01 >= 0 && i01 < ne01) failed` error on chat with image
Output:
```bash
ubuntu@ubuntu:~/workspace$ ollama run qnguyen3/nanollava "tell me what do you see in this picture? ./sample.jpg"
Added image './sample.jpg'
Error: an unknown error was encountered while running the model GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
```
### OS
Linux
### GPU
Other
### CPU
AMD
### Ollama version
0.3.14 | bug | medium | Critical |
2,626,071,441 | transformers | [Feature] Will there be any integration of using Flex-attention (and Paged attention)? | ### Feature request
Using [FlexAttention](https://pytorch.org/blog/flexattention/) (and [Paged attention](https://github.com/pytorch/pytorch/pull/121845/files)) to speed up Transformers models and provide flexibility
### Motivation
FlexAttention was proposed as a performant attention implementation leveraging torch.compile with easy APIs for adding support for complex attention variants such as Causal, [Relative Positional Embeddings](https://paperswithcode.com/method/relative-position-encodings), [Alibi](https://paperswithcode.com/method/alibi), [Sliding Window Attention](https://mistral.ai/news/announcing-mistral-7b/), [PrefixLM](https://twitter.com/andersonbcdefg/status/1800907703688339569), https://github.com/pytorch/torchtune/pull/875, [Tanh Soft-Capping](https://twitter.com/LysandreJik/status/1807779471891538199), [PagedAttention](https://arxiv.org/abs/2309.06180), etc.
### Your contribution
Not sure. | Feature request | low | Major |
2,626,089,889 | deno | Unable to load `node-rdkafka` | Version: Deno 2.0.4
Steps to reproduce:
1. `pnpm install node-rdkafka`
2. In `main.ts`, `import 'node-rdkafka'`
3. `deno run --allow-all main.ts`
results
`dyld[...]: missing symbol called`
| node compat,node native extension,triage required 👀 | low | Minor |
2,626,102,858 | PowerToys | Workspaces - update option | ### Description of the new feature / enhancement
I like how Workspaces sets up the apps the way I like them, but I would like an option to just change the position of already-opened apps and to switch between two layouts.
### Scenario when this would be used?
For example, I have a dual-monitor setup, but sometimes I remote into my desktop from a single-monitor setup, so I would like to switch to the single-monitor layout and back to the dual-monitor layout without reopening the apps if they are already open.
### Supporting information
_No response_ | Needs-Triage,Product-Workspaces | low | Minor |
2,626,108,536 | three.js | WebGLRenderer: Allow for binding, rendering into mipmap storage of textures | ## Description
Rendering custom mipmaps can be valuable for a number of use cases such as post-processing, stylization, etc., but it's not something three.js currently supports. Use cases include:
- Mipmap generation for non-color data maps such as normal, depth maps (useful for postprocessing, etc).
- Manual generation of per-layer mipmaps in array textures, 3d textures.
- Manual generation of mipmaps to account for wrapping constants
- etc
I think there are a few concept disconnects currently. One is that "generateMipmaps" indicates that both mipmap storage should be generated _and_ the mip chain should be generated. When generating custom mipmaps though these concepts should be separate. Ie you may want an option that says "generateMipmapStorage" and "generateMipmapContents". Or a setting that enumerates the three options. Another is that you cannot currently render into just any textures storage.
cc @CodyJasonBennett
## Solution
These are some solutions that come to mind - there are no doubt others. I can't say these are optimal or align with what's possible in WebGPU but I'll list them here to start the discussion:
**Generating Mipmap Storage w/o Contents**
The `generateMipmaps` setting could be changed to take three options so attaching storage does not implicitly mean generating content: `NO_MIPMAPS` (current `false`), `MIPMAP_STORAGE`, or `MIPMAP_CONTENTS` (current `true`).
**Rendering to Mipmaps** (#29844)
Currently `setRenderTarget` supports taking an `activeMipmapLevel`, but as far as I can tell this will only work if the user has specified textures in the `texture.mipmaps` array, or the target is a 3d texture or cube map. The active mipmap level could also apply to the automatically-generated mipmap storage using [framebufferTexture2D](https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/framebufferTexture2D).
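When rendering into a given `activeMipmapLevel`, the framebuffer attachment for level `n` has the level-`n` dimensions. As a reminder of what those are (self-contained sketch, no three.js dependency):

```javascript
// Dimensions of mip level `level` of a base width×height texture,
// clamped so neither axis drops below 1 texel.
function mipSize( width, height, level ) {

	return [ Math.max( 1, width >> level ), Math.max( 1, height >> level ) ];

}

// A 32×32 render target: level 0 is 32×32, level 3 is 4×4, level 5 is 1×1.
console.log( mipSize( 32, 32, 3 ) ); // [ 4, 4 ]
```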
**Writing to Regular Texture Mipmaps**
The above solutions only really apply to render targets, but generating custom mipmaps for regular textures (normal maps, data textures, etc.) is also relevant. A simple solution would be to allow setting a regular, non-render-target texture as a renderable target without a depth buffer.
<details>
<summary>Alternatives</summary>
**Generating Mipmap Storage w/o Contents**
To do this currently you can create a render target, initialize it with `generateMipmaps = true`, and then disable the flag so the storage remains allocated. This, however, still incurs the overhead of generating mipmap contents on creation:
```js
const map = new THREE.WebGLRenderTarget( 32, 32, { generateMipmaps: true } );
renderer.initRenderTarget( map );
map.texture.generateMipmaps = false;
```
**Rendering to Mipmaps / Writing to Regular Texture Mipmaps**
Using `copyTextureToTexture`, custom mipmaps can be generated with render targets and then copied into the appropriate mipmap level. The additions in #29769 allow for copying any existing mip map data, as well.
This approach incurs unneeded copy overhead and requires an additional render target, however.
</details>
<details>
<summary>Additional Context</summary>
WebGPU does not support automatic generation of mipmaps: https://github.com/gpuweb/gpuweb/issues/386
The answer to this [stackoverflow question](https://stackoverflow.com/questions/79109103/how-to-copy-specific-mip-map-level-from-a-source-texture-to-a-specific-mip-map-l/79134417#79134417) shows that it's possible to render into a mip level's storage while sampling from the immediate parent mip by setting `TEXTURE_MAX_LEVEL`, `TEXTURE_BASE_LEVEL`, and `TEXTURE_MAX_LOD`. Setting these can probably be left to the user.
</details>

Labels: Enhancement