| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,596,095,477 | react-native | Screen jumps when secureTextEntry is set and the keyboard suggests passwords before focusing on the input with secureTextEntry prop | ### Description
The screen jumps when one of the inputs has the `secureTextEntry` prop, because the keyboard suggests passwords even while typing in a different input that doesn't have the prop. This happens when focusing an input placed before the one with `secureTextEntry`: as you type, the password suggestion appears and disappears, causing the screen to jump.
https://github.com/user-attachments/assets/14403cee-3d9c-4951-855e-7591f18c4fb8
### Steps to reproduce
As you can see in the code, the username input doesn't have the `secureTextEntry` prop set but still suggests passwords above the keyboard. This suggestion appears and disappears as you type, causing the app to jump. It only happens when focusing the input just before the one that has `secureTextEntry` set to true.
```tsx
import React, { useRef } from 'react';
import {
  Button,
  Keyboard,
  KeyboardAvoidingView,
  Platform,
  SafeAreaView,
  StyleSheet,
  Text,
  TextInput,
  TouchableWithoutFeedback,
  View,
} from 'react-native';

// NOTE: the original report does not include the `styles` definition;
// this minimal stand-in is added only so the snippet is self-contained.
const styles = StyleSheet.create({
  container: { flex: 1 },
  inner: { flex: 1, justifyContent: 'space-around', padding: 20 },
  header: { fontSize: 24 },
  textInput: { borderBottomWidth: 1, marginBottom: 16 },
  btnContainer: { marginTop: 12 },
});

export default function HomeScreen() {
  const passRef = useRef<TextInput>(null);
  const usernameRef = useRef<TextInput>(null);
  return (
    <SafeAreaView style={styles.container}>
      <KeyboardAvoidingView
        behavior={Platform.OS === 'ios' ? 'padding' : 'height'}
        style={styles.container}
      >
        <TouchableWithoutFeedback onPress={Keyboard.dismiss}>
          <View style={styles.inner}>
            <Text style={styles.header}>Header</Text>
            <TextInput
              placeholder='Email'
              keyboardType='email-address'
              returnKeyType='next'
              blurOnSubmit={false}
              onSubmitEditing={() => usernameRef.current?.focus()}
              style={styles.textInput}
            />
            <TextInput
              ref={usernameRef}
              placeholder='Username'
              returnKeyType='next'
              blurOnSubmit={false}
              onSubmitEditing={() => passRef.current?.focus()}
              style={styles.textInput}
            />
            <TextInput
              ref={passRef}
              placeholder='Password'
              secureTextEntry
              style={styles.textInput}
            />
            <View style={styles.btnContainer}>
              <Button title='Submit' onPress={() => null} />
            </View>
          </View>
        </TouchableWithoutFeedback>
      </KeyboardAvoidingView>
    </SafeAreaView>
  );
}
```
### React Native Version
0.74.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (8) arm64 Apple M1
Memory: 521.91 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.17.0
path: ~/.nvm/versions/node/v20.17.0/bin/node
Yarn:
version: 3.6.4
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v20.17.0/bin/npm
Watchman:
version: 2024.10.14.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.4
- iOS 17.4
- macOS 14.4
- tvOS 17.4
- visionOS 1.1
- watchOS 10.4
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 15.3/15E204a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 3.3.5
path: /opt/homebrew/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.5
wanted: 0.74.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
No stacktrace error or logs
```
### Reproducer
https://github.com/avega99/SecureTextEntryJump
### Screenshots and Videos
https://github.com/user-attachments/assets/9eb7f18f-7247-4140-9905-a9d1940b48d5

| API: Keyboard,Needs: Triage :mag:,Newer Patch Available | low | Critical |
2,596,098,340 | vscode | Add a basic filter to recently opened workspaces list |
Type: <b>Feature Request</b>
Could we please have a filter for the recently opened workspaces list that can filter workspaces by which profile was used, where the files are located, etc.?
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<!-- generated by issue reporter --> | feature-request,workbench-history | low | Minor |
2,596,109,847 | pytorch | Behavior change of exported_program.run_decompositions | ### 🐛 Describe the bug
Hello. I've been working with the aten graph, and one of my tests uses the `copy_` operator. It works well with the `2.5.0.dev20240828+cpu` nightly. But after recently updating torch to the latest nightly (`2.6.0.dev20241015+cpu`), there seems to be a behavior change.
```python
import torch
from torch.export import export

class SimpleCopy(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, dst, src):
        dst.copy_(src)
        return dst

    def get_example_inputs(self):
        return (torch.randn(5, 5), torch.randn(5, 5))

mod = SimpleCopy()
with torch.no_grad():
    exported_program = export(mod.eval(), mod.get_example_inputs())
    print(exported_program)  # before
    exported_program = exported_program.run_decompositions()
    print(exported_program)  # after
```
### 2.5.0.dev20240828+cpu
```bash
=== before ===
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, dst: "f32[5, 5]", src: "f32[5, 5]"):
# File: test/modules/single/op/copy.py:9 in forward, code: dst.copy_(src)
copy: "f32[5, 5]" = torch.ops.aten.copy.default(dst, src); dst = src = None
return (copy, copy)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='dst'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='src'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_INPUT_MUTATION: 6>, arg=TensorArgument(name='copy'), target='dst'), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='copy'), target=None)])
Range constraints: {}
=== after ===
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, dst: "f32[5, 5]", src: "f32[5, 5]"):
# File: test/modules/single/op/copy.py:9 in forward, code: dst.copy_(src)
copy: "f32[5, 5]" = torch.ops.aten.copy.default(dst, src); dst = src = None
return (copy, copy)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='dst'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='src'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_INPUT_MUTATION: 6>, arg=TensorArgument(name='copy'), target='dst'), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='copy'), target=None)])
Range constraints: {}
```
### 2.6.0.dev20241015+cpu
```bash
=== before ===
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, dst: "f32[5, 5]", src: "f32[5, 5]"):
# File: test/modules/single/op/copy.py:9 in forward, code: dst.copy_(src)
copy: "f32[5, 5]" = torch.ops.aten.copy.default(dst, src); dst = src = None
return (copy, copy)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='dst'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='src'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_INPUT_MUTATION: 6>, arg=TensorArgument(name='copy'), target='dst'), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='copy'), target=None)])
Range constraints: {}
/home/seongwoo/circle-exir/.venv/lib/python3.10/site-packages/torch/export/exported_program.py:1076: UserWarning: This API is deprecated and soon will be removed. Please look at the docstring to see how to preserve an operator.
warnings.warn(
=== after ===
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, dst: "f32[5, 5]", src: "f32[5, 5]"):
# File: test/modules/single/op/copy.py:9 in forward, code: dst.copy_(src)
copy: "f32[5, 5]" = torch.ops.aten.copy.default(dst, src); src = None
# No stacktrace found for following nodes
copy_1: "f32[5, 5]" = torch.ops.aten.copy.default(dst, copy); dst = copy = None
return (copy_1, copy_1)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='dst'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='src'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_INPUT_MUTATION: 6>, arg=TensorArgument(name='copy_1'), target='dst'), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='copy_1'), target=None)])
Range constraints: {}
```
---
Anyway, this is not directly related to the bug above, just curiosity: is there any documentation for `torch.ops.aten.copy.default`? There are two copy ops, `torch.ops.aten.copy.default` and `torch.ops.aten.copy_.default`, and `torch.ops.aten.copy.default(t1, t2)` returns a new tensor that contains `t2`'s data. What is it for? Why doesn't it return the dst data instead of the src data?
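My current understanding — which may be wrong — is that `aten.copy.default` is the functional (out-of-place) counterpart that functionalization emits for the in-place `copy_`, so its result is the value `dst` would hold after the mutation, i.e. `src`'s data. A pure-Python sketch of that idea (illustrative only, not PyTorch internals):

```python
# Pure-Python sketch (NOT PyTorch internals) of why a functional
# copy(dst, src) returns src's data: functionalization rewrites an in-place
# mutation into an out-of-place op plus a re-binding, so the "result" of
# copying src into dst is a fresh value holding src's data.

def copy_(dst, src):
    # in-place: mutates dst, like tensor.copy_
    dst[:] = src
    return dst

def copy(dst, src):
    # functional counterpart: no mutation; returns a new value with src's
    # data (the length check stands in for shape/dtype validation vs dst)
    assert len(dst) == len(src)
    return list(src)

def forward_functional(dst, src):
    # functionalized form of `dst.copy_(src); return dst`
    new_dst = copy(dst, src)   # out-of-place
    return new_dst             # later uses of dst are re-bound to new_dst

dst, src = [0, 0, 0], [1, 2, 3]
out = forward_functional(dst, src)
print(out)   # [1, 2, 3] -- src's data, as aten.copy.default returns
print(dst)   # [0, 0, 0] -- the original value is untouched

dst2 = [0, 0, 0]
copy_(dst2, [7, 8, 9])
print(dst2)  # [7, 8, 9] -- the in-place version mutates its argument
```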
### Versions
```bash
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==23.6.5
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-import-order==0.18.2
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] torch==2.5.0.dev20240828+cpu
[pip3] torchvision==0.20.0.dev20240828+cpu
[conda] Could not collect
## after update
# ..
[pip3] torch==2.6.0.dev20241016+cpu
[pip3] torchvision==0.20.0.dev20241016+cpu
# ..
```
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,596,114,270 | vscode | Allow connecting to unsupported Linux remotes, by use of custom glibc and stdc++ libraries | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
# Context
I'm following up on a change from earlier this year that caused the "Remote - SSH" VS Code extension to become incompatible with Amazon Linux 2 (among others) by enforcing a version requirement on `glibc >= 2.28` and `libstdc++ >= 3.4.25`. As a temporary workaround, Microsoft made a change to allow VSCode to connect to incompatible OS versions until February of 2025.
## Related links:
- https://github.com/microsoft/vscode/issues/201129
- https://github.com/microsoft/vscode/issues/203375
# Request
With February 2025 fast approaching, it would be helpful to clarify what your action plan is for February. A concrete plan will help those of us who rely on VSCode (and the Remote - SSH extension) for our development workflows to understand what our options are. To kick off this line of communication, I have a few questions:
1. Is there any plan to extend the February 2025 timeframe?
1a. If not, will support be going away at the *beginning* or the *end* of February?
2. Will you be making an explicit change to disable legacy compatibility?
2a. What change (or changes) might that be?
2b. Will elements like `/tmp/vscode-skip-server-requirements-check` still exist?
3. Would you be willing to provide a draft release with your planned changes so that we can be proactive in evaluating workarounds?
Closely related to my request for opening up a line of proactive communication, I'd also like to request that the legacy compatibility stay in place to whatever extent it is possible and practical to maintain. For users in many environments, pinning versions and missing out on security patches is a complete non-starter. We'd be forced to move to different tooling at least until we move to new, compatible systems.
As a user, I worry that I'll be "left behind" by VSCode despite being on a mainstream distribution's active LTS release. | feature-request,install-update,remote | medium | Major |
2,596,128,103 | PowerToys | Powertoys Run Not Responding | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
I believe this issue first occurred yesterday (17 Oct 2024 New Zealand Time). I haven't noticed any pattern to this issue. I thought it might happen when my last prompt was opening a file on a network drive, but it has also happened when the last prompt was opening remote desktop.
I can't replicate this issue every time. I think it might occur when it's been a while since I last opened PowerToys Run. Maybe this means Windows is stopping some background services or something?
I see there are other recent reports of PowerToys Run crashing. I haven't had that specific issue occur where it opens an error log.
It seems to be similar to [PowerToys Run freezes for a few seconds when opened sometimes](https://github.com/microsoft/PowerToys/issues/32806) but not quite the same. In my case the window shows as you would expect, not blurred.
Steps to Reproduce:
1. Press Alt + Space to open PowerToys Run
[PowerToysReport_2024-10-18-14-02-03.zip](https://github.com/user-attachments/files/17426282/PowerToysReport_2024-10-18-14-02-03.zip)
### ✔️ Expected Behavior
PowerToys Run opens immediately and I can type a command or search query
### ❌ Actual Behavior
- Sometimes the expected behaviour
- Sometimes PowerToys Run will open and display my previous query but it is frozen and won't respond to any input for 5-10 seconds. After this time, whatever I typed while it was frozen appears.
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Needs-Triage,Needs-Team-Response | low | Critical |
2,596,135,080 | yt-dlp | "[facebook] unable to extract uploader" Can't download video that's Age Restricted | Greetings, I have noticed the same problem as #10479. Even after updating it has the same error, and the video is still available to watch on FB. Although it is apparently [age restricted](https://github.com/yt-dlp/yt-dlp/issues/10479#issuecomment-2420543825). When the video is first loaded, it is blurred out and has
> "Violent or graphic content. This video is covered so people can choose whether they want to see it."
I didn't actually realize that that means it's age restricted, but it makes sense; it's just that FB doesn't literally say "age restricted" there, so maybe they use slightly different wording.
When I tried to open it in a private window, it shows a 'broken link' image, and the following:
> "This Video Isn't Available Anymore
The link may be broken or the video may have been removed. You can explore more videos or try logging into facebook.com and then visiting the link again."

Also, I'm not sure if I have the right syntax for `--cookies-from-browser`; when I tried to use it, a pop-over appeared on the screen saying that this app can't run on this PC.
```
C:\_______>>yt-dlp.exe --cookies-from-browser brave -F https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
Access is denied.
```
```
C:\_______>>yt-dlp.exe https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
[facebook] Extracting URL: https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
[facebook] 2971407446334915: Downloading webpage
WARNING: [facebook] unable to extract uploader; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [facebook] 2971407446334915: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
C:\_______>>yt-dlp.exe -U
Current version: stable@2024.08.06 from yt-dlp/yt-dlp
Latest version: stable@2024.10.07 from yt-dlp/yt-dlp
Current Build Hash: 468a6f8bf1d156ad173e000a40f696d4fbd69c5aa7360229329b9063a388e7d0
Updating to stable@2024.10.07 from yt-dlp/yt-dlp ...
Updated yt-dlp to stable@2024.10.07 from yt-dlp/yt-dlp
C:\_______>>yt-dlp.exe https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
[facebook] Extracting URL: https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
[facebook] 2971407446334915: Downloading webpage
WARNING: [facebook] unable to extract uploader; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [facebook] 2971407446334915: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
C:\_______>>yt-dlp.exe "https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915"
[facebook] Extracting URL: https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
[facebook] 2971407446334915: Downloading webpage
WARNING: [facebook] unable to extract uploader; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [facebook] 2971407446334915: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
Thank you, Shalom.
```
C:\_______>>yt-dlp.exe https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915 --list-formats -v
[debug] Command-line config: ['https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915', '--list-formats', '-v']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.07 from yt-dlp/yt-dlp [1a176d874] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 7.0.2-essentials_build-www.gyan.dev (setts), ffprobe 7.0.2-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[facebook] Extracting URL: https://www.facebook.com/tadeodaniel.moracruz/videos/2971407446334915
[facebook] 2971407446334915: Downloading webpage
WARNING: [facebook] unable to extract uploader; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [facebook] 2971407446334915: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1782, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1841, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2847, in process_video_result
File "yt_dlp\YoutubeDL.py", line 1123, in raise_no_formats
yt_dlp.utils.ExtractorError: [facebook] 2971407446334915: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
_Originally posted by @life777eternal in https://github.com/yt-dlp/yt-dlp/issues/10479#issuecomment-2420346587_
| NSFW,account-needed,site-bug | low | Critical |
2,596,135,787 | flutter | [iOS] Clean up method calls in FlutterViewController initialisers and dealloc | Calling helper methods in initialisers and dealloc is unsafe. Methods should only ever be called on fully initialised objects since methods can be overridden in subclasses.
The iOS FlutterViewController calls methods from both its initialisers and its dealloc. We should fix this, but it's probably also an indication that FlutterViewController is doing too much. | platform-ios,engine,P2,c: tech-debt,team-ios,triaged-ios | low | Minor |
2,596,166,587 | tauri | [feat] Monitoring of window maximized and restored window events | ### Describe the problem
Currently, only events for monitoring size changes are provided; there is no way to listen for events such as maximize, restore, or minimize. If you need to react to the window being maximized or restored, you can only infer it by listening for changes in the window's size, which causes the event listener to be triggered very frequently. If possible, could you provide listeners for events such as maximize, restore, and minimize?
### Describe the solution you'd like
```rust
pub enum WindowEvent {
// Ignore other fields
Maximized(bool),
Minimized(bool),
Restored(bool),
}
```
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,596,235,943 | ollama | Please add support for the integrated GPUs in Intel Core Ultra series 1 and 2 chips | null | feature request | low | Minor |
2,596,242,773 | pytorch | Cache enablement should be tested inside of cache load() function, not at call site | ### 🐛 Describe the bug
Sample:
```
# Autograd cache stuff
remote = should_use_remote_autograd_cache()
local = should_use_local_autograd_cache()
if local or remote:
    compiled_fn = AOTAutogradCache.load(
        dispatch_and_compile,
        mod,
        fake_flat_args,
        aot_config,
        cudagraphs,
        local,
        remote,
    )
else:
    compiled_fn = dispatch_and_compile()
```
I can kind of see why it's written this way: you want to avoid exercising the AOTAutogradCache.load codepath at all, which makes it less risky. But I think this is a false economy. Just do the test inside load(). This gives much better information hiding, since managing whether the cache is enabled is done entirely inside the AOTAutogradCache class, and that's two fewer functions I have to expose to external callers.
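A minimal sketch of the shape being proposed — hypothetical names, not the real AOTAutogradCache API — where the enablement checks live inside `load()` itself:

```python
# Illustrative sketch only (hypothetical names, not the real AOTAutogradCache
# API): the cache class owns its own enablement checks, so every call site
# shrinks to a single unconditional load() call.

class Cache:
    @staticmethod
    def _local_enabled() -> bool:
        return True   # stand-in for should_use_local_autograd_cache()

    @staticmethod
    def _remote_enabled() -> bool:
        return False  # stand-in for should_use_remote_autograd_cache()

    @classmethod
    def load(cls, compile_fn, *args):
        local, remote = cls._local_enabled(), cls._remote_enabled()
        if not (local or remote):
            # Caching disabled: fall straight through to compilation,
            # without the caller ever having to know.
            return compile_fn(*args)
        # ... a real implementation would compute a cache key from args and
        # consult the local/remote backends, compiling on a miss ...
        return compile_fn(*args)

# The call site no longer branches on cache enablement:
result = Cache.load(lambda x: x + 1, 41)
print(result)  # 42
```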
Inductor also suffers from this antipattern which I guess is where the pattern was copied from
```
if (
    not config.force_disable_caches
    and (config.fx_graph_cache or fx_graph_remote_cache)
    and not aot_mode
):
    for i, input in enumerate(example_inputs):
        if (
            isinstance(input, torch.Tensor)
            and input.device.type == "cuda"
            and i in static_input_idxs
        ):
            input._is_inductor_static = True  # type: ignore[attr-defined]

    compiled_graph = FxGraphCache.load(
        codegen_and_compile,
        gm,
        example_inputs,
        graph_kwargs,
        inputs_to_check,
        local=config.fx_graph_cache,
        remote=fx_graph_remote_cache,
    )
```
cc @chauhang @penguinwu @jamesjwu @oulgen
### Versions
main | triaged,oncall: pt2,compile-cache | low | Critical |
2,596,283,618 | pytorch | max_block['X'] not synced with max XBLOCK size (error shows up in `create_block_mask`) | ### 🚀 The feature, motivation and pitch
```python
import torch
torch.set_default_device('cuda')
from triton.testing import do_bench
from torch.nn.attention.flex_attention import create_block_mask

def local_mask(b, h, q_idx, kv_idx):
    return (q_idx - kv_idx).abs() < 512

print(do_bench(lambda: torch.compile(create_block_mask)(local_mask, B=None, H=None, Q_LEN=2**24, KV_LEN=2**24)))
```
```shell
File "/home/chilli/local/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 1376, in triton_config_reduction
check_config(cfg, xnumel=size_hints[0])
File "/home/chilli/local/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 1203, in check_config
assert max_block % block == 0, (
^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: TritonKernel.indexing assumes XBLOCK divides config.triton.max_block["X"] but XBLOCK=8192 and config.triton.max_block["X"]=4096 (cfg={'XBLOCK': 8192, 'RBLOCK': 1}).
```
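For reference, the invariant the assertion enforces — and which the heuristic presumably needs to restore by clamping its candidate block size — can be sketched like this (hypothetical helper, not Inductor code):

```python
# Hypothetical sketch (not Inductor code) of the invariant behind the
# assertion: a candidate power-of-two XBLOCK must be clamped so it never
# exceeds (and therefore always divides) config.triton.max_block["X"].

def clamp_block(candidate: int, max_block: int) -> int:
    # Both values are powers of two, so clamping to max_block guarantees
    # divisibility: max_block % clamped == 0.
    assert candidate & (candidate - 1) == 0
    assert max_block & (max_block - 1) == 0
    return min(candidate, max_block)

xblock = clamp_block(8192, 4096)  # the failing combination from the log
assert 4096 % xblock == 0         # the invariant now holds
print(xblock)  # 4096
```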
cc: @oraluben since it's relevant to the last discussion
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @drisspg @yanboliang @BoyuanFeng @ezyang @zou3519 @ydwu4 | triaged,module: inductor | low | Critical |
2,596,293,446 | ant-design | Color picker: support more gradient types | ### What problem does this feature solve?
In addition to linear gradients, we hope the color picker's gradient mode can also generate radial gradients, conic gradients, and symmetric gradients.
### What does the proposed API look like?
mode — picker mode, used to configure single color vs. gradient: ('single' | 'linear-gradient' | 'radial-gradient' | 'conic-gradient')[]
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,Inactive,🧶 Low Priority | low | Minor |
2,596,305,639 | terminal | Subsequent `wt.exe /min` to an existing window restores a minimized window | ### Windows Terminal version
1.22.2702.0
### Windows build number
10.0.19045.4894
### Other Software
_No response_
### Steps to reproduce
A `wt.exe` invocation that creates a new window respects ```/min```:
```start /min wt.exe -w exam``` -> new wt, single tab, minimized
```start /min wt.exe -w exam sp``` -> new wt, single tab, minimized, two panes
### Expected Behavior
```
start /min wt.exe -w exam && ^
(echo "sleep 1.0" && ping -n 2 localhost) > NUL && ^
start /min wt.exe -w exam sp
```
-> minimized wt, single tab, two panes
### Actual Behavior
```
start /min wt.exe -w exam && ^
(echo "sleep 1.0" && ping -n 2 localhost) > NUL && ^
start /min wt.exe -w exam sp
```
-> visible wt, single tab, two panes | Help Wanted,Issue-Bug,Product-Terminal,Area-Remoting,Area-Windowing | low | Minor |
2,596,323,926 | langchain | There seems to be a bug with OpenAIEmbeddings. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [ ] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import OpenAIEmbeddings

# ollama
embeddings = OpenAIEmbeddings(
    openai_api_base='http://localhost:11434/v1',
    model="nomic-embed-text",
)
vector = embeddings.embed_query("hello")
print(vector[:3])
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/workspace/mywork/langchainproject/test6.py", line 13, in <module>
vector = embeddings.embed_query("hello")
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/langchain_openai/embeddings/base.py", line 629, in embed_query
return self.embed_documents([text])[0]
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/langchain_openai/embeddings/base.py", line 588, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/langchain_openai/embeddings/base.py", line 483, in _get_len_safe_embeddings
response = self.client.create(
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/openai/resources/embeddings.py", line 124, in create
return self._post(
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/openai/_base_client.py", line 1277, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/openai/_base_client.py", line 954, in request
return self._request(
File "/workspace/download/miniforge/envs/langchain/lib/python3.9/site-packages/openai/_base_client.py", line 1058, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'invalid input type', 'type': 'api_error', 'param': None, 'code': None}}
```
### Description
The `Embeddings.create` method provided by OpenAI supports `input` parameters of type `Union[str, List[str], Iterable[int], Iterable[Iterable[int]]]`. However, in langchain's `OpenAIEmbeddings` class, the `_get_len_safe_embeddings` method uses `_tokenize`, which may return `List[Union[List[int], str]]`. That mixed type is not supported by `Embeddings.create`.
I believe this to be a bug. Could you please advise on how to handle this issue?
### System Info
from langchain_core import sys_info
sys_info.print_sys_info()
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Apr 7 21:37:58 CST 2022
> Python Version: 3.9.20 | packaged by conda-forge | (main, Sep 30 2024, 17:49:10)
[GCC 13.3.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.135
> langchain_experimental: 0.3.2
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug,investigate | low | Critical |
2,596,358,391 | ui | [feat]: Dynamically add new variants |
### Problem
Currently some components have a fixed set of variants, for example `const badgeVariants = cva(` and `const buttonVariants = cva(`, so I looked for a solution to add dynamic variants with `minimum cost` modifications to the ShadCN UI code.
I don't know if this is a good idea, so I'm asking before proceeding to modify the code and create a pull request.
This solution requires updating every component in `/components/ui` that uses `= cva(`, so that the end-user (developer) can override variants by updating ONLY 1 file, `custom-vars.ts`. It would also need the `shadcn tool (npx shadcn@latest)` to create/update (question) `/components/ui/custom-vars.ts` if the file does not exist.
### This is example for update component button
#### Add merge variant function / customVars
For example, `customVars.button` overrides the `button` variants. I used ChatGPT for this, so `_deepMerge` may not work correctly.
```ts
// /src/components/ui/custom-vars.ts
type GetObjDifferentKeys<
T,
U,
T0 = Omit<T, keyof U> & Omit<U, keyof T>,
T1 = {
[K in keyof T0]: T0[K];
},
> = T1;
type GetObjSameKeys<T, U> = Omit<T | U, keyof GetObjDifferentKeys<T, U>>;
type MergeTwoObjects<
T,
U,
T0 = GetObjDifferentKeys<T, U> & { [K in keyof GetObjSameKeys<T, U>]: DeepMergeTwoTypes<T[K], U[K]> },
T1 = { [K in keyof T0]: T0[K] },
> = T1;
export type DeepMergeTwoTypes<T, U> = [T, U] extends [{ [key: string]: unknown }, { [key: string]: unknown }]
? MergeTwoObjects<NonNullable<T>, NonNullable<U>>
: NonNullable<T> | NonNullable<U>;
// NOTE: mutates `target` in place and returns it
function _deepMerge<T extends object, U extends object>(target: T, source: U): DeepMergeTwoTypes<T, U> {
for (const key of Object.keys(source) as Array<keyof U>) {
if (source[key] instanceof Object && key in target) {
(target as any)[key] = _deepMerge((target as any)[key], source[key] as any);
} else {
(target as any)[key] = source[key];
}
}
return target as any;
}
export function mergeVariants<T, U>(baseConfig: T, customConfig: U): DeepMergeTwoTypes<T, U> {
return _deepMerge(baseConfig as any, customConfig as any) as any;
}
export const customVars = {
button: {
variants: {
variant: {
success: 'bg-success text-white hover:bg-success/80',
},
},
},
};
```
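Since `_deepMerge` is ChatGPT-generated, a quick standalone sanity check of the runtime merge behaviour may be worthwhile. The sketch below uses a simplified `deepMerge` (types reduced, and `base`/`custom` are illustrative stand-ins for the cva button config and `customVars.button`, with shortened class strings):

```typescript
// Simplified runtime version of _deepMerge for a sanity check.
// NOTE: mutates `target` in place, like the original.
type AnyObject = Record<string, unknown>;

function deepMerge(target: AnyObject, source: AnyObject): AnyObject {
  for (const key of Object.keys(source)) {
    const s = source[key];
    const t = target[key];
    if (s && typeof s === "object" && !Array.isArray(s) && t && typeof t === "object") {
      deepMerge(t as AnyObject, s as AnyObject);
    } else {
      target[key] = s;
    }
  }
  return target;
}

// Illustrative stand-ins for the button cva config and customVars.button.
const base = {
  variants: {
    variant: { default: "bg-primary", destructive: "bg-destructive" },
    size: { default: "h-10 px-4 py-2" },
  },
  defaultVariants: { variant: "default", size: "default" },
};

const custom = {
  variants: {
    variant: { success: "bg-success text-white hover:bg-success/80" },
  },
};

const merged = deepMerge(base, custom) as typeof base & typeof custom;

// The custom variant is added and the base variants survive:
console.log(merged.variants.variant.success); // "bg-success text-white hover:bg-success/80"
console.log(merged.variants.variant.default); // "bg-primary"
```

This only checks the runtime merge; the type-level `DeepMergeTwoTypes` result would still need to be verified separately (e.g. with `tsd` or `// @ts-expect-error` tests).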
#### Modify button.tsx
```ts
import { cn } from '@/lib/utils';
import { Slot } from '@radix-ui/react-slot';
import { cva, type VariantProps } from 'class-variance-authority';
import * as React from 'react';
// UPDATE: Import fn
import { customVars, mergeVariants } from './custom-vars';
const buttonVariants = cva(
'inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50',
    // UPDATED: merge the base config with customVars.button
mergeVariants(
{
variants: {
variant: {
default: 'bg-primary text-primary-foreground hover:bg-primary/90',
destructive: 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
outline: 'border border-input bg-background hover:bg-accent hover:text-accent-foreground',
secondary: 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
ghost: 'hover:bg-accent hover:text-accent-foreground',
link: 'text-primary underline-offset-4 hover:underline',
},
size: {
default: 'h-10 px-4 py-2',
sm: 'h-9 rounded-md px-3',
lg: 'h-11 rounded-md px-8',
icon: 'h-10 w-10',
},
},
defaultVariants: {
variant: 'default',
size: 'default',
},
},
customVars.button || {},
),
);
```
#### Using the button
Expected: the code should not raise a type error for `variant='success'`:
```tsx
<Button variant='success'>Success</Button>
```
#### Update CSS
```css
@layer base {
:root {
/* Define new variables */
--success: 100 77% 44%;
--success-foreground: 102 85% 34%;
}
}
```
#### Update `tailwind.config.js`
```ts
/** @type {import('tailwindcss').Config} */
export default {
theme: {
extend: {
colors: {
success: {
DEFAULT: 'hsl(var(--success))',
foreground: 'hsl(var(--success-foreground))',
},
}
}
}
}
```
### Affected component/components
Alert, Badge, Button, Label, Sheet, Toast, Toggle
### Additional Context
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Critical |
2,596,435,006 | ui | [bug]: creating a new project with bunx has error "Something went wrong creating a new Next.js project. Please try again." | ### Describe the bug
Initializing a new project with `bunx` fails. Command: `bunx --bun shadcn@latest init -d`
**Output:**
```
✔ The path /Users/username/Developer does not contain a package.json file. Would you like to start a new Next.js project? … yes
✔ What is your project named? … some-app
⠇ Creating a new Next.js project. This may take a few minutes.
Something went wrong creating a new Next.js project. Please try again.
```
**Expected:**
- new project is created
**Note:**
- using another package manager such as `npm` works; I haven't tested the rest
### Affected component/components
No
### How to reproduce
1. run `bunx --bun shadcn@latest init -d`
2. type `Y` for starting new project
3. type project name
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Mac OS, M3 Macbook Pro
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,596,454,693 | pytorch | Cannot export Pytorch model (ReDimNet) to ONNX | ### 🐛 Describe the bug
I am trying to export the ReDimNet model from PyTorch to ONNX. Please help me out. The code I use is:
```python
import torch
model = torch.hub.load('IDRnD/ReDimNet', 'b3', pretrained=True, finetuned=False)
input_sample = torch.randn(1, 32000)
model.eval()
with torch.no_grad():
onnx_program_redim = torch.onnx.dynamo_export(model, input_sample)
```
The error is:
---------------------------------------------------------------------------
Unsupported Traceback (most recent call last)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py:1509, in dynamo_export(model, export_options, *model_args, **model_kwargs)
1503 try:
1504 return Exporter(
1505 options=resolved_export_options,
1506 model=model,
1507 model_args=model_args,
1508 model_kwargs=model_kwargs,
-> 1509 ).export()
1510 except Exception as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py:1236, in Exporter.export(self)
1231 with self.options.diagnostic_context, decomposition_skip.enable_decomposition_skips(
1232 self.options
1233 ), torch._dynamo.config.patch(
1234 dataclasses.asdict(DEFAULT_EXPORT_DYNAMO_CONFIG)
1235 ):
-> 1236 graph_module = self.options.fx_tracer.generate_fx(
1237 self.options, self.model, self.model_args, self.model_kwargs
1238 )
1239 # TODO: Defer `import onnxscript` out of `import torch` path
1240 # https://github.com/pytorch/pytorch/issues/103764
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py:214, in DynamoExport.generate_fx(self, options, model, model_args, model_kwargs)
213 with fake_mode: # type: ignore[attr-defined]
--> 214 graph_module, graph_guard = torch._dynamo.export(
215 wrapped_model,
216 tracing_mode=fx_mode,
217 )(
218 *model_args,
219 **model_kwargs,
220 )
221 del graph_guard # Unused
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:1379, in export.<locals>.inner(*args, **kwargs)
1378 try:
-> 1379 result_traced = opt_f(*args, **kwargs)
1380 except ConstraintViolationError as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:433, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
432 try:
--> 433 return fn(*args, **kwargs)
434 finally:
435 # Restore the dynamic layer stack depth if necessary.
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py:169, in _wrap_model_with_output_adapter.<locals>.wrapped(*args, **kwargs)
167 @functools.wraps(model_func)
168 def wrapped(*args, **kwargs):
--> 169 return output_adapter.apply(model_func(*args, **kwargs), model=model)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:1116, in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1114 with compile_lock, _disable_current_modes():
1115 # skip=1: skip this frame
-> 1116 return self._torchdynamo_orig_callable(
1117 frame, cache_entry, self.hooks, frame_state, skip=1
1118 )
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:472, in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
460 signpost_event(
461 "dynamo",
462 "_convert_frame_assert._compile",
(...)
469 },
470 )
--> 472 return _compile(
473 frame.f_code,
474 frame.f_globals,
475 frame.f_locals,
476 frame.f_builtins,
477 self._torchdynamo_orig_callable,
478 self._one_graph,
479 self._export,
480 self._export_constraints,
481 hooks,
482 cache_entry,
483 cache_size,
484 frame,
485 frame_state=frame_state,
486 compile_id=compile_id,
487 skip=skip + 1,
488 )
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_utils_internal.py:84, in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
83 kwargs["skip"] = kwargs["skip"] + 1
---> 84 return StrobelightCompileTimeProfiler.profile_compile_time(
85 function, phase_name, *args, **kwargs
86 )
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_strobelight/compile_time_profiler.py:129, in StrobelightCompileTimeProfiler.profile_compile_time(cls, func, phase_name, *args, **kwargs)
128 if not cls.enabled:
--> 129 return func(*args, **kwargs)
131 if cls.profiler is None:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
78 with self._recreate_cm():
---> 79 return func(*args, **kwds)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:817, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
816 try:
--> 817 guarded_code = compile_inner(code, one_graph, hooks, transform)
818 return guarded_code
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/utils.py:231, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
230 t0 = time.time()
--> 231 r = func(*args, **kwargs)
232 time_spent = time.time() - t0
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:636, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
635 try:
--> 636 out_code = transform_code_object(code, transform)
637 break
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py:1185, in transform_code_object(code, transformations, safe)
1183 propagate_line_nums(instructions)
-> 1185 transformations(instructions, code_options)
1186 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:178, in preserve_global_state.<locals>._fn(*args, **kwargs)
177 try:
--> 178 return fn(*args, **kwargs)
179 finally:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:582, in _compile.<locals>.transform(instructions, code_options)
581 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 582 tracer.run()
583 except exc.UnspecializeRestartAnalysis:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2451, in InstructionTranslator.run(self)
2450 def run(self):
-> 2451 super().run()
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:437, in NNModuleVariable.call_function(self, tx, args, kwargs)
436 assert istype(fn, types.FunctionType)
--> 437 return tx.inline_user_function_return(
438 variables.UserFunctionVariable(fn, source=fn_source),
439 args,
440 kwargs,
441 )
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1500, in InstructionTranslatorBase.CALL_FUNCTION_EX(self, inst)
1499 kwargsvars = kwargsvars.keys_as_python_constant()
-> 1500 self.call_function(fn, argsvars.items, kwargsvars)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:344, in UserMethodVariable.call_function(self, tx, args, kwargs)
343 return invoke_and_store_as_constant(tx, fn, self.get_name(), args, kwargs)
--> 344 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:366, in NNModuleVariable.call_function(self, tx, args, kwargs)
365 for child_name, submod in mod._modules.items():
--> 366 tx.call_function(
367 tx.output.register_attr_or_module(
368 submod,
369 self.module_key,
370 child_name,
371 source=NNModuleSource(AttrSource(self.source, child_name)),
372 ),
373 [arg],
374 {},
375 )
376 arg = tx.pop()
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:437, in NNModuleVariable.call_function(self, tx, args, kwargs)
436 assert istype(fn, types.FunctionType)
--> 437 return tx.inline_user_function_return(
438 variables.UserFunctionVariable(fn, source=fn_source),
439 args,
440 kwargs,
441 )
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1500, in InstructionTranslatorBase.CALL_FUNCTION_EX(self, inst)
1499 kwargsvars = kwargsvars.keys_as_python_constant()
-> 1500 self.call_function(fn, argsvars.items, kwargsvars)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:344, in UserMethodVariable.call_function(self, tx, args, kwargs)
343 return invoke_and_store_as_constant(tx, fn, self.get_name(), args, kwargs)
--> 344 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:437, in NNModuleVariable.call_function(self, tx, args, kwargs)
436 assert istype(fn, types.FunctionType)
--> 437 return tx.inline_user_function_return(
438 variables.UserFunctionVariable(fn, source=fn_source),
439 args,
440 kwargs,
441 )
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1500, in InstructionTranslatorBase.CALL_FUNCTION_EX(self, inst)
1499 kwargsvars = kwargsvars.keys_as_python_constant()
-> 1500 self.call_function(fn, argsvars.items, kwargsvars)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:344, in UserMethodVariable.call_function(self, tx, args, kwargs)
343 return invoke_and_store_as_constant(tx, fn, self.get_name(), args, kwargs)
--> 344 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:665, in SkipFunctionVariable.call_function(self, tx, args, kwargs)
664 msg += f"', {self.reason}'" if self.reason else ""
--> 665 unimplemented(msg)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/_dynamo/exc.py:221, in unimplemented(msg, from_exc)
220 raise Unsupported(msg) from from_exc
--> 221 raise Unsupported(msg)
Unsupported: 'skip function isinstance in file /home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/jit/__init__.py'
from user code:
File "/home/miko/.cache/torch/hub/IDRnD_ReDimNet_master/redimnet.py", line 953, in forward
x = self.spec(x).unsqueeze(1)
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/miko/.cache/torch/hub/IDRnD_ReDimNet_master/redimnet.py", line 116, in forward
x = self.torchfbank(x)+1e-6
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torchaudio/transforms/_transforms.py", line 619, in forward
specgram = self.spectrogram(waveform)
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torchaudio/transforms/_transforms.py", line 110, in forward
return F.spectrogram(
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torchaudio/functional/functional.py", line 119, in spectrogram
frame_length_norm, window_norm = _get_spec_norms(normalized)
File "/home/miko/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torchaudio/functional/functional.py", line 233, in _get_spec_norms
if torch.jit.isinstance(normalized, str):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
OnnxExporterError Traceback (most recent call last)
Cell In[43], line 3
1 model.eval()
2 with torch.no_grad():
----> 3 onnx_program_redim = torch.onnx.dynamo_export(model, input_sample)
File ~/miniconda3/envs/speaker_recog/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py:1520, in dynamo_export(model, export_options, *model_args, **model_kwargs)
1512 resolved_export_options.diagnostic_context.dump(sarif_report_path)
1513 message = (
1514 f"Failed to export the model to ONNX. Generating SARIF report at '{sarif_report_path}'. "
1515 "SARIF is a standard format for the output of static analysis tools. "
(...)
1518 f"Please report a bug on PyTorch Github: {_PYTORCH_GITHUB_ISSUES_URL}"
1519 )
-> 1520 raise OnnxExporterError(
1521 ONNXProgram._from_failure(e, resolved_export_options.diagnostic_context),
1522 message,
1523 ) from e
OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
Stepping: 2
CPU MHz: 2600.000
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.17.0
[pip3] onnx-tool==0.9.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.1.0.dev20241011
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] numpy 2.1.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.68 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchaudio 2.4.1 pypi_0 pypi
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | module: onnx,triaged | low | Critical |
2,596,492,140 | pytorch | Error building pytorch from source | ### 🐛 Describe the bug
Hello,
There is an error while building PyTorch from source.
Steps to reproduce:
1. git clone https://github.com/pytorch/pytorch.git
2. cd pytorch/
3. git submodule sync
4. git submodule update --init --recursive
5. make triton
At step 5 I am getting this error:
```
sudo make triton
pip3 uninstall -y triton
WARNING: Skipping triton as it is not installed.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Looking in indexes: https://download.pytorch.org/whl/nightly/
ERROR: Could not find a version that satisfies the requirement pytorch-triton==3.0.0+45fff310c8 (from versions: none)
ERROR: No matching distribution found for pytorch-triton==3.0.0+45fff310c8
make: *** [Makefile:37: triton] Error 1
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Rocky Linux 9.4 (Blue Onyx) (aarch64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.9.18 (main, Jul 3 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-5.14.0-427.18.1.el9_4.aarch64-aarch64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 32
Stepping: r3p1
BogoMIPS: 50.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
```
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @malfet @snadampal @milpuz01 @ZainRizvi @kit1980 @huydhn @clee2000 @ezyang @chauhang @penguinwu | module: docs,module: cuda,triaged,module: arm,module: devx | low | Critical |
2,596,512,417 | flutter | [Flutter web] Clipboard.setData invokes the soft keyboard when used with a delay function on a mobile browser; this only happens on an insecure site (http and not localhost). | ### Steps to reproduce
1. Start a web server locally, e.g. `flutter run -d web-server --web-hostname 0.0.0.0 --web-port 8089`
2. Using http (not localhost or 127.0.0.1, e.g. a network IP), browse to the website with a mobile device browser (e.g. Chrome on an Android device)
3. Click the button multiple times in my example
### Expected results
The data is set to the clipboard, with no other side effects.
### Actual results
The data is set to the clipboard, but the soft keyboard is also invoked.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main(List<String> args) {
runApp(const MyWidget());
}
class MyWidget extends StatelessWidget {
const MyWidget({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: ElevatedButton(
onPressed: () async {
await Future.delayed(const Duration(seconds: 1));
await Clipboard.setData(const ClipboardData(
text: "123",
));
},
child: const Text("set clipboard")),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/4f599a3a-b5b9-4737-9f0e-66b9beddfaf0
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.22.2, on Microsoft Windows [版本 10.0.19045.5011], locale zh-CN)
• Flutter version 3.22.2 on channel stable at C:\flutter_sdk\3.22.2
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 761747bfc5 (4 months ago), 2024-06-05 22:15:13 +0200
• Engine revision edd8546116
• Dart version 3.4.3
• DevTools version 2.34.3
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\74161\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio1\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio 生成工具 2019 16.11.10)
• Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools
• Visual Studio 生成工具 2019 version 16.11.32126.315
• Windows 10 SDK version 10.0.19041.0
[!] Android Studio (version 2021.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
X Unable to determine bundled Java version.
• Try updating or re-installing Android Studio.
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio1
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [版本 10.0.19045.5011]
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.103
• Edge (web) • edge • web-javascript • Microsoft Edge 129.0.2792.89
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,platform-web,has reproducible steps,P2,browser: chrome-android,team-web,triaged-web,found in release: 3.24,found in release: 3.27 | low | Major |
2,596,515,852 | deno | messed up terminal after terminating vite server on windows | Version: Deno 2.0.2
In a vite project, run `deno task dev` and then close it using CTRL + C.
The terminal freezes and I cannot type anything.
current:
```
PS C:\Users\dhairy\courses> deno task dev
Task dev vite
VITE v5.4.9 ready in 138 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
PS C:\Users\dhairy\courses> ^CTerminate batch job (Y/N)?
```
expected (npm):
```
PS C:\Users\dhairy\courses> npm run dev
> courses@0.0.0 dev
> vite
VITE v5.4.9 ready in 159 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
Terminate batch job (Y/N)? y
PS C:\Users\dhairy\courses>
```
| bug,node compat | low | Minor |
2,596,550,810 | vscode | SCM - move action buttons on inline window to the left | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
There are inline windows for git actions: whenever there is a new change in the codebase, the area to the right of the line numbers can be clicked and an inline window is opened. 99% of the time, my desired action is to revert changes by clicking the button on the right side of the inline window, and for someone with a wide monitor that can be inconvenient. I am proposing to move those buttons to the left side, closer to the line numbers.
If that is not feasible, could we have a context menu on right-clicking the green lines that indicate git changes, with a revert-changes action there?

I would also propose that for Peek References and other actions that require an inline window, the view is mirrored and moved to the left (the button and the list of references/implementations etc.), as it shortens the mouse drag and increases convenience. On the other hand, I can imagine that people are used to the current view, but hey, if something will improve the dev flow I am for it.

| ux,scm,under-discussion | low | Minor |
2,596,674,570 | go | crypto/x509: ParseCertificate fails with "net/url: invalid userinfo" | ### Go version
go version go1.18.1 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/liu/.cache/go-build"
GOENV="/home/liu/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/liu/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/liu/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/lib/go-1.18"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go-1.18/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.18.1"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build3649475886=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
Parse the certificate with `cert, err := x509.ParseCertificate(derBytes)`.
### What did you see happen?
Error message: `cannot parse URI "https://1kYj\\[@.cfZGv3T_Tr.D?/zrm3/4WA/Ir}BQ/yR]/0[g?<tX=uR?&K'O={d2}&sG?rLi=<}e>": parse "https://1kYj\\[@.cfZGv3T_Tr.D?/zrm3/4WA/Ir}BQ/yR]/0[g?<tX=uR?&K'O={d2}&sG?rLi=<}e>": net/url: invalid userinfo`
### What did you expect to see?
The behavior differs from OpenSSL and GnuTLS: OpenSSL's `openssl x509 -noout -text -in filename` and GnuTLS's `certtool -i --infile=filename --inraw` both display the certificate successfully, and both resolve the SAN.


| NeedsInvestigation | low | Critical |
2,596,724,442 | godot | Tearing when using UV to access an array of sampler2d's | ### Tested versions
Reproducible in: 4.3stable and 4.4dev3
### System information
Godot v4.3.stable - Windows 10.0.26100 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (Advanced Micro Devices, Inc.; 32.0.12011.1036) - 12th Gen Intel(R) Core(TM) i3-12100F (8 Threads)
### Issue description
I wrote a short shader to have a mesh sample one image when UV.x < 0.5, and another image otherwise.
Here it is:
```
shader_type spatial;
uniform sampler2D[2] images : source_color;
void fragment() {
ALBEDO = texture(images[int(step(0.5, UV.x))], UV).rgb;
}
```
But when you put it on a mesh and fill the array with two different images it looks something like this:

I expected a clean line in the middle separating the two images, like this:

The code for the above:
```
shader_type spatial;
uniform sampler2D[2] images : source_color;
vec3 get_color(int index, vec2 uv) {
if(index == 1) {
return texture(images[1], uv).rgb;
}
return texture(images[0], uv).rgb;
}
void fragment() {
ALBEDO = get_color(int(step(0.5, UV.x)), UV);
}
```
### Steps to reproduce
After you open the MRP, the issue should be visible immediately.
### Minimal reproduction project (MRP)
[MRP_bug.zip](https://github.com/user-attachments/files/17430209/MRP_bug.zip) | discussion,documentation,topic:shaders | low | Critical |
2,596,747,356 | go | x/tools/go/packages: missing TypesInfo when NeedTypesInfo was set while NeedSyntax & NeedTypes were not | ### Go version
go1.22.7 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/xrxtzz/Library/Caches/go-build'
GOENV='/Users/xrxtzz/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/xrxtzz/go/pkg/mod'
GONOPROXY='`*.everphoto.cn,git.smartisan.com'
GONOSUMDB='`*.everphoto.cn,git.smartisan.com'
GOOS='darwin'
GOPATH='/Users/xrxtzz/go'
GOPRIVATE='`*.everphoto.cn,git.smartisan.com'
GOPROXY='https://goproxy.cn,direct'
GOROOT='/opt/homebrew/Cellar/go@1.22/1.22.7/libexec'
GOSUMDB='sum.golang.google.cn'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go@1.22/1.22.7/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.7'
GCCGO='gccgo'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/xrxtzz/Documents/code/code_analysis/irfronts/go/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/lk/nhh_2w2n2sv2kdzk75mfbrcr0000gp/T/go-build3609621157=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
The `NeedTypesInfo` option indicates that `packages.Load` should populate the `TypesInfo` field of the resulting packages. But it turns out that when `NeedTypesInfo` is used without `NeedSyntax` or `NeedTypes`, the `TypesInfo` fields are simply `nil`.
```go
package main
import (
"fmt"
"golang.org/x/tools/go/packages"
)
func main() {
packs, _ := loadProgramPackageWithTypesInfo("strings")
fmt.Println(packs[0].TypesInfo)
}
func loadProgramPackageWithTypesInfo(pattern string) ([]*packages.Package, error) {
cfg := packages.Config{
Mode: packages.NeedName | packages.NeedFiles | packages.NeedCompiledGoFiles |
packages.NeedImports | packages.NeedTypesInfo | packages.NeedTypesSizes,
}
return packages.Load(&cfg, pattern)
}
```
### What did you see happen?
When `NeedTypesInfo` is set in the `LoadMode` of `packages.Load` without `NeedSyntax` or `NeedTypes`, the `TypesInfo` fields are simply `nil`.
### What did you expect to see?
The `TypesInfo` fields should be populated properly.
2,596,790,803 | react | [DevTools Bug]: Phantom re-renders on sibling <label> components | ### Website or app
https://stackblitz.com/edit/react-devtools-bug?file=src%2FApp.jsx
### Repro steps
1. Start profiler
2. Input into "Component with state"
3. The report shows that both "ComponentWithState" and "AnotherReactComponent" were re-rendered, but why was "AnotherReactComponent" re-rendered?

### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | low | Critical |
2,596,804,719 | tauri | [bug] Segfault on MacOS after window open | ### Describe the bug
Segfault occurs shortly after a window is opened (MacOS-only) while the Tauri backend is otherwise idle.
Interestingly it never occurs when a window is first opened, but is consistently reproducible by repeatedly opening and closing windows.
### Reproduction
1. Clone https://github.com/glzr-io/zebar and checkout `tauri-segfault-repro`
2. `pnpm i && pnpm dev`
3. Select `Open settings` from the app's systray icon
4. In the settings window, select `vanilla.zebar.json` in the sidebar
5. Spam click any of the form inputs (e.g. `Resizable`). This causes the window to relaunch with the new settings, and will consistently segfault after 1-50 clicks of the form input.
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.5.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.0-nightly (debd22da6 2024-05-29)
✔ cargo: 1.80.0-nightly (431db31d0 2024-05-28)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: nightly-aarch64-apple-darwin (overridden by '/Users/larsberger/projects/zebar-segfault/rust-toolchain.toml')
- node: 20.15.0
- pnpm: 9.4.0
- npm: 10.7.0
[-] Packages
- tauri 🦀: 2.0.4
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.46.2
- tao 🦀: 0.30.3
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.0.3
[-] Plugins
- tauri-plugin-single-instance 🦀: 2.0.1
- @tauri-apps/plugin-single-instance : not installed!
- tauri-plugin-dialog 🦀: 2.0.1
- @tauri-apps/plugin-dialog : not installed!
- tauri-plugin-fs 🦀: 2.0.1
- @tauri-apps/plugin-fs : not installed!
- tauri-plugin-http 🦀: 2.0.1
- @tauri-apps/plugin-http : not installed!
- tauri-plugin-shell 🦀: 2.0.1
- @tauri-apps/plugin-shell : not installed!
[-] App
- build-type: bundle
- CSP: connect-src 'self' ipc: http://ipc.localhost ws://localhost:6123; font-src 'self' *; img-src 'self' asset: http://asset.localhost blob: data: *; default-src 'self'; style-src 'self' 'unsafe-inline' *; script-src 'self' 'unsafe-eval' asset: http://asset.localhost
- frontendDist: ../settings-ui/dist
- devUrl: http://localhost:4200/
```
### Stack trace
_No response_
### Additional context
Totally unfamiliar with segfault debugging and core dumps, so listing the steps I followed to retrieve the core dump below:
```sh
# Allow core dumps.
sudo chmod 1777 /cores
ulimit -S -c unlimited
# Codesign the debug build with dummy entitlements.
/usr/libexec/PlistBuddy -c "Add :com.apple.security.get-task-allow bool true" tmp.entitlements
codesign -s - -f --entitlements tmp.entitlements ./target/debug/zebar
# Launch debug build and follow reproduction steps till segfault.
./target/debug/zebar
# Create and open the core dump file.
ls /cores # Outputs e.g. core.28863
lldb -c /cores/<CORE_FILE> ./target/debug/zebar # Substitute with correct path to core dump file
thread select 1
thread backtrace
```
**Core dump:**
```
* thread #1, stop reason = ESR_EC_DABORT_EL0 (fault address: 0x10)
* frame #0: 0x00000001834a9c20 libobjc.A.dylib`objc_msgSend + 32
frame #1: 0x0000000103babe54 zebar`_$LT$$LP$A$C$$RP$$u20$as$u20$objc..message..MessageArguments$GT$::invoke::hf08f49b8635292e0(imp=(libobjc.A.dylib`objc_msgSend), obj=0x0000000118f3b330, sel=Sel @ 0x000000016d7e5c80, (null)=(cocoa_foundation::foundation::macos::NSSize) @ 0x000000016d7e5ca8) at mod.rs:128:17
frame #2: 0x0000000103ba9994 zebar`objc::message::platform::send_unverified::h2de68543a6dede83(obj=0x0000000118f3b330, sel=Sel @ 0x000000016d7e5d50, args=(cocoa_foundation::foundation::macos::NSSize) @ 0x000000016d7e5d70) at mod.rs:27:9
frame #3: 0x0000000103ba269c zebar`_$LT$$BP$mut$u20$objc..runtime..Object$u20$as$u20$cocoa..appkit..NSWindow$GT$::setContentSize_::hd2979b57bb12fdd1 [inlined] objc::message::send_message::hefa2a452a00a0273(obj=0x0000000118f3b330, sel=Sel @ 0x000000016d7e5e70, args=(cocoa_foundation::foundation::macos::NSSize) @ 0x000000016d7e5e88) at mod.rs:178:5
frame #4: 0x0000000103ba2680 zebar`_$LT$$BP$mut$u20$objc..runtime..Object$u20$as$u20$cocoa..appkit..NSWindow$GT$::setContentSize_::hd2979b57bb12fdd1(self=0x0000000118f3b330, contentSize=(width = 1470, height = 40)) at appkit.rs:1701:9
frame #5: 0x0000000103b9dc98 zebar`tao::platform_impl::platform::util::async::set_content_size_async::_$u7b$$u7b$closure$u7d$$u7d$::h892d131deff1a61f at async.rs:89:5
frame #6: 0x0000000103b8c688 zebar`dispatch::context_and_function::work_execute_closure::hb40f60a1f4b80522(context=0x000000011d637e00) at lib.rs:94:9
frame #7: 0x00000001836ce3e8 libdispatch.dylib`_dispatch_client_callout + 20
frame #8: 0x00000001836dcbb8 libdispatch.dylib`_dispatch_main_queue_drain + 988
frame #9: 0x00000001836dc7cc libdispatch.dylib`_dispatch_main_queue_callback_4CF + 44
frame #10: 0x000000018399fad4 CoreFoundation`__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
frame #11: 0x000000018395d258 CoreFoundation`__CFRunLoopRun + 1996
frame #12: 0x000000018395c434 CoreFoundation`CFRunLoopRunSpecific + 608
frame #13: 0x000000018e10019c HIToolbox`RunCurrentEventLoopInMode + 292
frame #14: 0x000000018e0fffd8 HIToolbox`ReceiveNextEventCommon + 648
frame #15: 0x000000018e0ffd30 HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter + 76
frame #16: 0x00000001871bbd68 AppKit`_DPSNextEvent + 660
frame #17: 0x00000001879b1808 AppKit`-[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 700
frame #18: 0x00000001871af09c AppKit`-[NSApplication run] + 476
frame #19: 0x000000010383893c zebar`_$LT$$LP$$RP$$u20$as$u20$objc..message..MessageArguments$GT$::invoke::h10e4b9efb35ed797(imp=(libobjc.A.dylib`objc_msgSend), obj=0x000000011f0a47e0, sel=Sel @ 0x000000016d7e7490, (null)=<unavailable>) at mod.rs:128:17
frame #20: 0x0000000103838728 zebar`objc::message::platform::send_unverified::h4d152243f0997ffd(obj=0x000000011f0a47e0, sel=Sel @ 0x000000016d7e74f8, args=<unavailable>) at mod.rs:27:9
frame #21: 0x0000000102a9d21c zebar`tao::platform_impl::platform::event_loop::EventLoop$LT$T$GT$::run_return::h1671b5fd34c9c2fb [inlined] objc::message::send_message::h0d61d3bca8d7112e(obj=0x000000011f0a47e0, sel=Sel @ 0x000000016d7e79e8, args=<unavailable>) at mod.rs:178:5
frame #22: 0x0000000102a9d200 zebar`tao::platform_impl::platform::event_loop::EventLoop$LT$T$GT$::run_return::h1671b5fd34c9c2fb(self=0x000000016d7e7aa8, callback=<unavailable>) at event_loop.rs:225:16
frame #23: 0x0000000102a9de74 zebar`tao::platform_impl::platform::event_loop::EventLoop$LT$T$GT$::run::h9755136ec8d9fa6d(self=<unavailable>, callback=<unavailable>) at event_loop.rs:192:21
frame #24: 0x0000000102a9be30 zebar`tao::event_loop::EventLoop$LT$T$GT$::run::hbd5cea2a03a3370f(self=<unavailable>, event_handler={closure_env#0}<tauri::EventLoopMessage, tauri::app::{impl#16}::run::{closure_env#0}<tauri_runtime_wry::Wry<tauri::EventLoopMessage>, zebar::main::{async_block#0}::{closure_env#1}>> @ 0x000000016d7e7b80) at event_loop.rs:215:5
frame #25: 0x0000000103020c10 zebar`_$LT$tauri_runtime_wry..Wry$LT$T$GT$$u20$as$u20$tauri_runtime..Runtime$LT$T$GT$$GT$::run::h710880aafcd48e13(self=Wry<tauri::EventLoopMessage> @ 0x000000016d7e7e88, callback={closure_env#0}<tauri_runtime_wry::Wry<tauri::EventLoopMessage>, zebar::main::{async_block#0}::{closure_env#1}> @ 0x000000016d7e7fd8) at lib.rs:2726:5
frame #26: 0x0000000103044aa4 zebar`tauri::app::App$LT$R$GT$::run::h2dc9714d4d498b8a(self=App<tauri_runtime_wry::Wry<tauri::EventLoopMessage>> @ 0x000000016d7ea7e8, callback={closure_env#1} @ 0x000000016d7e81bf) at app.rs:1129:5
frame #27: 0x0000000102669de4 zebar`zebar::main::_$u7b$$u7b$closure$u7d$$u7d$::h82077487fcc0818f((null)=(__pointer = 0x000000016d7eaa97)) at main.rs:87:3
frame #28: 0x0000000102b47708 zebar`tokio::runtime::park::CachedParkThread::block_on::_$u7b$$u7b$closure$u7d$$u7d$::hd09bfdd1c46f781b at park.rs:281:63
frame #29: 0x0000000102b443fc zebar`tokio::runtime::park::CachedParkThread::block_on::h8bd4c04cf4b4d3ad at coop.rs:107:5
frame #30: 0x0000000102b44380 zebar`tokio::runtime::park::CachedParkThread::block_on::h8bd4c04cf4b4d3ad [inlined] tokio::runtime::coop::budget::h365d9c72ec300159(f={closure_env#0}<zebar::main::{async_block_env#0}> @ 0x000000016d7eaaf8) at coop.rs:73:5
frame #31: 0x0000000102b44324 zebar`tokio::runtime::park::CachedParkThread::block_on::h8bd4c04cf4b4d3ad(self=0x000000016d7eab7d, f={async_block_env#0} @ 0x000000016d7eaa3e) at park.rs:281:31
frame #32: 0x0000000102795150 zebar`tokio::runtime::context::blocking::BlockingRegionGuard::block_on::h8e1bffcef7cc8c0a(self=0x000000016d7eac50, f={async_block_env#0} @ 0x000000016d7eab7b) at blocking.rs:66:9
frame #33: 0x0000000102ef1e20 zebar`tokio::runtime::scheduler::multi_thread::MultiThread::block_on::_$u7b$$u7b$closure$u7d$$u7d$::h80938cda97832998(blocking=0x000000016d7eac50) at mod.rs:87:13
frame #34: 0x000000010279acc8 zebar`tokio::runtime::context::runtime::enter_runtime::h3ed252a74a355fe7(handle=0x000000016d7eae88, allow_block_in_place=true, f=(future = zebar::main::{async_block_env#0} @ 0x000000016d7eac0e)) at runtime.rs:65:16
frame #35: 0x0000000102ef1b94 zebar`tokio::runtime::scheduler::multi_thread::MultiThread::block_on::h540db91bfd0d32fc(self=0x000000016d7eae60, handle=0x000000016d7eae88, future={async_block_env#0} @ 0x000000016d7eacc5) at mod.rs:86:9
frame #36: 0x0000000102da76cc zebar`tokio::runtime::runtime::Runtime::block_on_inner::hbf2443700130c674(self=0x000000016d7eae58, future={async_block_env#0} @ 0x000000016d7ead1e) at runtime.rs:363:45
frame #37: 0x0000000102da7ba8 zebar`tokio::runtime::runtime::Runtime::block_on::haf9efc118fc4ab48(self=0x000000016d7eae58, future={async_block_env#0} @ 0x000000016d7eadce) at runtime.rs:335:13
frame #38: 0x0000000102afdc0c zebar`zebar::main::h672da972f2a23989 at main.rs:96:3
frame #39: 0x0000000102f1abf0 zebar`core::ops::function::FnOnce::call_once::h5ea35fb2b20ba38d((null)=(zebar`zebar::main::h672da972f2a23989 at main.rs:38), (null)=<unavailable>) at function.rs:250:5
frame #40: 0x00000001030d64d0 zebar`std::sys_common::backtrace::__rust_begin_short_backtrace::h4273634153d9d089(f=(zebar`zebar::main::h672da972f2a23989 at main.rs:38)) at backtrace.rs:155:18
frame #41: 0x0000000102c0c4bc zebar`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h73bf01131ce8a311 at rt.rs:159:18
frame #42: 0x00000001041f9674 zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] core::ops::function::impls::_$LT$impl$u20$core..ops..function..FnOnce$LT$A$GT$$u20$for$u20$$RF$F$GT$::call_once::h15342bf524497bb6 at function.rs:284:13 [opt]
frame #43: 0x00000001041f966c zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::panicking::try::do_call::h816e9d7eafbe96d5 at panicking.rs:559:40 [opt]
frame #44: 0x00000001041f966c zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::panicking::try::he4034598ffb399ab at panicking.rs:523:19 [opt]
frame #45: 0x00000001041f966c zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::panic::catch_unwind::hb0f852366e4ee9d8 at panic.rs:149:14 [opt]
frame #46: 0x00000001041f966c zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::hcbe8b19d4b6f7f13 at rt.rs:141:48 [opt]
frame #47: 0x00000001041f966c zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::panicking::try::do_call::h0ff9630736d9b7f8 at panicking.rs:559:40 [opt]
frame #48: 0x00000001041f9668 zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::panicking::try::h245d37a65ae1af4e at panicking.rs:523:19 [opt]
frame #49: 0x00000001041f9668 zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c [inlined] std::panic::catch_unwind::h26ddaa60696e0e16 at panic.rs:149:14 [opt]
frame #50: 0x00000001041f9668 zebar`std::rt::lang_start_internal::h53a33f07dfc5ec3c at rt.rs:141:20 [opt]
frame #51: 0x0000000102c0c488 zebar`std::rt::lang_start::hc039a6dd87c13b53(main=(zebar`zebar::main::h672da972f2a23989 at main.rs:38), argc=1, argv=0x000000016d7eb400, sigpipe='\0') at rt.rs:158:17
frame #52: 0x0000000102afdca4 zebar`main + 36
frame #53: 0x00000001834f60e0 dyld`start + 2360
``` | type: bug,platform: macOS,status: needs triage | low | Critical |
2,596,837,853 | angular | Odd page transition when smooth scrolling is enabled | ### Which @angular/* package(s) are the source of the bug?
router
### Is this a regression?
No
### Description
Projects will often use a [CSS property to enable smooth scrolling](https://developer.mozilla.org/en-US/docs/Web/CSS/scroll-behavior). An example in normalize.css:
```
:root {
@media (prefers-reduced-motion: no-preference) {
scroll-behavior: smooth;
}
}
```
When you add this to an Angular project with the `withInMemoryScrolling` feature enabled, users may experience an odd page transition. After navigation, once the new page is rendered, the page slowly scrolls up to the top. This gets more annoying as the scroll distance increases.
When a user navigates to another page, they expect to see the top of the new page without a scroll animation on arrival. Therefore my proposal is to improve the router scroller so that it avoids a smooth scroll animation during navigation.
The native [window.scrollTo](https://developer.mozilla.org/en-US/docs/Web/API/Window/scrollTo) method supports a "behavior" option in which JS can override the behavior defined by CSS.
The changes required (imo) are:
- Improve the @angular/common [BrowserViewPortScroller](https://github.com/angular/angular/blob/main/packages/common/src/viewport_scroller.ts) to support the native `behavior` option.
- Change the @angular/router [RouterScroller](https://github.com/angular/angular/blob/main/packages/router/src/router_scroller.ts#L26) to pass the "instant" behavior when the user navigates to a different page.
Final remarks:
- If navigation occurs between two states on the same page (i.e. pagination), one could argue that the smooth scrolling is desirable. Not sure if this is possible, but `RouterScroller` should set `behavior: instant` only when the new component is different from the previous component.
- This change could be opt-in for backwards compatibility, so configuration options may be added to the inMemoryScrolling provider.
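A minimal sketch of that last remark — `scrollBehaviorForNavigation` is a hypothetical helper, not an existing Angular API. It keeps the CSS-defined behavior when the routed component stays the same (e.g. pagination) and forces an instant jump for a real page transition:

```typescript
// Hypothetical decision helper — not part of @angular/router today.
type RouterScrollBehavior = 'auto' | 'instant' | 'smooth';

function scrollBehaviorForNavigation(
  previousComponent: unknown,
  nextComponent: unknown,
): RouterScrollBehavior {
  // Same routed component (e.g. pagination): pass 'auto' so the CSS
  // `scroll-behavior` property stays in control.
  // Different component: force an instant jump so the new page appears
  // at the top without a visible scroll animation.
  return previousComponent === nextComponent ? 'auto' : 'instant';
}

// The router scroller would then forward this to the native API, e.g.:
// window.scrollTo({ top: 0, left: 0, behavior: scrollBehaviorForNavigation(prev, next) });
```

The `behavior` option of `window.scrollTo` overrides the CSS property, which is what makes this approach possible without touching user stylesheets.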
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-hnp8mh
### Please provide the exception or error you saw
- Scroll down the page
- Click on "Details" button in any of the items in the list
You should now see the detail page appear instantly with a smooth transition scrolling to the top
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.1.2
Node: 18.20.2
Package Manager: yarn 4.1.0
OS: darwin arm64
Angular: 18.1.2
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, language-service, localize, platform-browser
... platform-browser-dynamic, platform-server, router, ssr
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1801.2
@angular-devkit/build-angular 18.1.2
@angular-devkit/core 18.1.2
@angular-devkit/schematics 18.1.2
@schematics/angular 18.1.2
ng-packagr 18.1.0
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.4
### Anything else?
I can make a PR for this, just let me know if the given proposal is acceptable... | area: router | low | Critical |
2,596,857,114 | transformers | GGUF support for BERT architecture | ### Feature request
I want to add the ability to use GGUF BERT models in transformers.
Currently the library does not support this architecture. When I try to load it, I get the error `TypeError: Architecture 'bert' is not supported`.
I have done most of the mapping, but I am having difficulty with some fields.
Can anybody help me and provide comments on this feature?
### Motivation
I ran into the problem that I can't use GGUF models in RASA (RASA uses the standard `from_pretrained`), so I decided to add BERT support.
### Your contribution
Here is my extended `ggml.py` file:
```python
GGUF_TENSOR_MAPPING = {
    "bert": {
        "context_length": "max_position_embeddings",
        "block_count": "num_hidden_layers",
        "feed_forward_length": "intermediate_size",
        "embedding_length": "hidden_size",
        "attention.head_count": "num_attention_heads",
        "attention.layer_norm_rms_epsilon": "rms_norm_eps",
        # "attention.causal": "",
        # "pooling_type": "",
        "vocab_size": "vocab_size",
    }
}

GGUF_CONFIG_MAPPING = {
    "bert": {
        "context_length": "max_position_embeddings",
        "block_count": "num_hidden_layers",
        "feed_forward_length": "intermediate_size",
        "embedding_length": "hidden_size",
        "attention.head_count": "num_attention_heads",
        "attention.layer_norm_rms_epsilon": "rms_norm_eps",
        # "attention.causal": "",
        # "pooling_type": "",
        "vocab_size": "vocab_size",
    }
}

GGUF_TOKENIZER_MAPPING = {
    "tokenizer": {
        # "ggml.token_type_count": "",
        # "ggml.pre": "",
        "ggml.model": "tokenizer_type",
        "ggml.tokens": "all_special_tokens",
        "ggml.token_type": "all_special_ids",
        "ggml.unknown_token_id": "unk_token_id",
        "ggml.seperator_token_id": "sep_token_id",
        "ggml.padding_token_id": "pad_token_id",
        "ggml.cls_token_id": "cls_token_id",
        "ggml.mask_token_id": "mask_token_id",
    },
    "tokenizer_config": {
        "ggml.unknown_token_id": "unk_token_id",
        "ggml.seperator_token_id": "sep_token_id",
        "ggml.padding_token_id": "pad_token_id",
        "ggml.cls_token_id": "cls_token_id",
        "ggml.mask_token_id": "mask_token_id",
    },
}
``` | Feature request | low | Critical |
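For context, here is a simplified sketch (not the actual transformers code path; the helper name and trimmed mapping are mine) of how a config mapping like the one above would be consumed: GGUF metadata keys such as `bert.embedding_length` are stripped of their architecture prefix and renamed to Hugging Face config keys.

```python
# Simplified illustration — the real integration lives in transformers'
# GGUF loading code; this only shows how the mapping dict is applied.
BERT_CONFIG_MAPPING = {
    "context_length": "max_position_embeddings",
    "block_count": "num_hidden_layers",
    "feed_forward_length": "intermediate_size",
    "embedding_length": "hidden_size",
    "attention.head_count": "num_attention_heads",
    "vocab_size": "vocab_size",
}


def gguf_metadata_to_config(metadata: dict, arch: str = "bert") -> dict:
    """Strip the architecture prefix and rename known GGUF keys to HF names."""
    config = {}
    for key, value in metadata.items():
        short_key = key.removeprefix(arch + ".")
        if short_key in BERT_CONFIG_MAPPING:
            config[BERT_CONFIG_MAPPING[short_key]] = value
    return config
```

Unknown keys are silently dropped here; the real loader would likely warn about them instead.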
2,596,879,929 | go | proposal: x/text/encoding: handling encoding errors by replacing visually similar unicode characters in ShiftJIS encoding | ### Proposal Details
# Summary
When encoding Unicode strings to Shift JIS in Go, certain visually similar characters cannot be directly represented in Shift JIS, leading to encoding errors. This causes confusion because the characters appear similar but result in errors during encoding. This proposal suggests introducing a normalization step that replaces these problematic characters with their Shift JIS-compatible equivalents before encoding. We accept that this transformation is one-way and that the original characters cannot be restored, which is acceptable for our use case.
# Background
Shift JIS is a character encoding for the Japanese language but does not support all Unicode characters. Some visually similar characters have different code points and cannot be encoded in Shift JIS, causing encoding errors and confusion.
# Examples:
The Unicode character "〜" (U+301C) looks similar to "~" (U+FF5E).
The Unicode character "−" (U+2212) resembles the standard hyphen "-" (U+002D).
These visually similar characters are often used interchangeably in text but may cause encoding errors when converting to Shift JIS. In our application, it is acceptable that the transformation is not reversible; we prioritize successful encoding over the ability to revert to the original characters.
# Proposal
Introduce a normalization function that replaces visually similar Unicode characters, which cannot be encoded in Shift JIS, with their equivalent characters that can be encoded. This function can be integrated into the encoding process or provided as a utility in the golang.org/x/text/encoding/japanese package.
https://go.dev/play/p/OtEWoZmxDzb
```
package main

import (
	"fmt"

	"golang.org/x/text/encoding/japanese"
	"golang.org/x/text/transform"
)

func main() {
	replacements := map[string]string{
		"〜": "～", // U+301C (Wave Dash) → U+FF5E (Fullwidth Tilde)
		"−": "-",  // U+2212 (Minus Sign) → U+002D (Hyphen-Minus)
		"—": "-",  // U+2014 (Em Dash) → U+002D (Hyphen-Minus)
		"•": "*",  // U+2022 (Bullet) → U+002A (Asterisk)
	}

	encoder := japanese.ShiftJIS.NewEncoder()

	for orig, replacement := range replacements {
		// Check if the original character can be encoded
		_, _, errOrig := transform.String(encoder, orig)
		// Check if the replacement character can be encoded
		_, _, errReplacement := transform.String(encoder, replacement)

		if errOrig == nil {
			fmt.Printf("Mapping may be unnecessary: Original character %q can be encoded.\n", orig)
		} else {
			fmt.Printf("Mapping necessary: Original character %q cannot be encoded: %v\n", orig, errOrig)
		}
		if errReplacement != nil {
			fmt.Printf("Warning: Replacement character %q cannot be encoded: %v\n", replacement, errReplacement)
		}
	}
}
```
Output
```
Mapping necessary: Original character "•" cannot be encoded: encoding: rune not supported by encoding.
Mapping necessary: Original character "〜" cannot be encoded: encoding: rune not supported by encoding.
Mapping necessary: Original character "−" cannot be encoded: encoding: rune not supported by encoding.
Mapping necessary: Original character "—" cannot be encoded: encoding: rune not supported by encoding.
```
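The normalization step itself could be implemented with only the standard library, for example via `strings.NewReplacer`. This is a sketch of the proposed utility (the function name is mine), applying the same replacement table as above before encoding:

```go
package main

import (
	"fmt"
	"strings"
)

// shiftJISReplacer maps visually similar Unicode characters that Shift JIS
// cannot encode to equivalents that it can. The transformation is one-way.
var shiftJISReplacer = strings.NewReplacer(
	"\u301C", "\uFF5E", // Wave Dash → Fullwidth Tilde
	"\u2212", "-", // Minus Sign → Hyphen-Minus
	"\u2014", "-", // Em Dash → Hyphen-Minus
	"\u2022", "*", // Bullet → Asterisk
)

// NormalizeForShiftJIS replaces problematic characters so that a subsequent
// Shift JIS encoding pass does not fail on them.
func NormalizeForShiftJIS(s string) string {
	return shiftJISReplacer.Replace(s)
}

func main() {
	// Prints the input with an ASCII hyphen and a fullwidth tilde.
	fmt.Println(NormalizeForShiftJIS("気温は −5〜10 度です"))
}
```

A function like this could run before `japanese.ShiftJIS.NewEncoder()` in the pipeline, or be exposed as a utility in `golang.org/x/text/encoding/japanese`.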
| Proposal | low | Critical |
2,596,895,366 | neovim | vim.version.parse: more granular "strict" options | ### Problem
I would like to be able to parse versions, such as `1.0.0`, `v1.0.0`, `1.0`, `v1`, which is possible with the default `strict = false` behaviour, but not with `strict = true`.
But I would like it to fail on something that is clearly not a version, such as a git commit sha.
Currently, this is the behaviour:
```
:lua =vim.version.parse("d818fd0624205b34e14888358037fb6f5dc51234")
{
major = 818,
minor = 0,
patch = 0,
<metatable> = {
__eq = <function 1>,
__index = <function 2>,
__le = <function 3>,
__lt = <function 4>,
__newindex = <function 5>,
__tostring = <function 6>
}
}
```
### Expected behavior
I'm not sure if there's a good solution to this, but perhaps some options that allow you to fine-tune the strictness would work.
For example,
```lua
---@class vim.version.parse.Opts
---
--- No coercion attempt if `true`...
---@field strict? boolean | vim.version.parse.StrictnessConfig
---@class vim.version.parse.StrictnessConfig
---
---Allow prefixes, such as `v` or `tmux-`.
---@field prefix? boolean | string[]
---
---Allow suffixes, such as `-rc1`.
---@field suffix? boolean | string[]
---
---1 allows `1`, 2 allows `1.0`, ...
---@field min_component_count? integer
---
---@field max_component_count? integer
``` | enhancement,lua | low | Minor |
2,596,928,862 | deno | deno cli helper feature for providing least privilege access to scripts | When I run a script using deno in the command line for the first time, I get useful prompts to allow or deny access to environment, os, or the network.
After the command has run successfully the first time, I would like Deno to provide me with the exact command, containing all the permissions I granted, so I can run it next time with a least-privilege approach.
For example:
1. Run first time and interactively give permissions

2. At the end, Deno would suggest the least-privilege command to run next time, e.g.:
Run this using the following command next time:
`deno --allow-env --allow-read=~/.aws/config,~/.aws/credentials --allow-net=example.com:80 run npm:aws-cdk synthesize`
| permissions,suggestion | low | Minor |
2,596,996,751 | pytorch | TeLU activation function | ### 🚀 The feature, motivation and pitch
I want to develop and add the TeLU activation function to PyTorch. Please go through the following papers on why TeLU is useful and potentially preferable to GELU:
https://arxiv.org/pdf/2402.02790
https://ieeexplore.ieee.org/document/9301084
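If it helps the discussion, here is a minimal pure-Python sketch of the activation, assuming the definition TeLU(x) = x · tanh(eˣ) as given in the linked paper. A real PyTorch contribution would of course be a tensor op with autograd support, which this is not:

```python
import math


def telu(x: float) -> float:
    """TeLU(x) = x * tanh(exp(x)).

    Smooth everywhere, approximately 0 for large negative x
    and approximately x for large positive x (like a smooth ReLU).
    """
    return x * math.tanh(math.exp(x))
```

An elementwise tensor version would be `x * torch.tanh(torch.exp(x))`; a full contribution would also need a module (a hypothetical `nn.TeLU`), a functional variant, and tests.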
### Alternatives
It is a new potential activation function addition
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Minor |
2,597,044,334 | rust | Failed to build `jemallocator` on `arm64e-apple-darwin` | Failed to build on arm64e-apple-darwin
<details>
<summary>Errors</summary>
```
warning: jemalloc-sys@0.5.4+5.3.0-patched: "Unprefixed `malloc` requested on unsupported platform `arm64e-apple-darwin` => using prefixed `malloc`"
error: failed to run custom build command for `jemalloc-sys v0.5.4+5.3.0-patched`
Caused by:
process didn't exit successfully: `/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/release/build/jemalloc-sys-0513b27edcf6bb21/build-script-build` (exit status: 101)
--- stdout
TARGET=arm64e-apple-darwin
HOST=aarch64-apple-darwin
NUM_JOBS=3
OUT_DIR="/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out"
BUILD_DIR="/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out/build"
SRC_DIR="/Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/jemalloc-sys-0.5.4+5.3.0-patched"
cargo:warning="Unprefixed `malloc` requested on unsupported platform `arm64e-apple-darwin` => using prefixed `malloc`"
cargo:rustc-cfg=prefixed
cargo:rerun-if-env-changed=JEMALLOC_OVERRIDE
OPT_LEVEL = Some(3)
TARGET = Some(arm64e-apple-darwin)
OUT_DIR = Some(/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out)
HOST = Some(aarch64-apple-darwin)
cargo:rerun-if-env-changed=CC_arm64e-apple-darwin
CC_arm64e-apple-darwin = None
cargo:rerun-if-env-changed=CC_arm64e_apple_darwin
CC_arm64e_apple_darwin = Some(sccache cc)
cargo:rerun-if-env-changed=CC_KNOWN_WRAPPER_CUSTOM
CC_KNOWN_WRAPPER_CUSTOM = None
cargo:rerun-if-env-changed=CC_ENABLE_DEBUG_OUTPUT
cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
CRATE_CC_NO_DEFAULTS = None
DEBUG = Some(false)
cargo:rerun-if-env-changed=MACOSX_DEPLOYMENT_TARGET
MACOSX_DEPLOYMENT_TARGET = Some(11)
cargo:rerun-if-env-changed=CFLAGS_arm64e-apple-darwin
CFLAGS_arm64e-apple-darwin = None
cargo:rerun-if-env-changed=CFLAGS_arm64e_apple_darwin
CFLAGS_arm64e_apple_darwin = Some(-ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11)
cargo:rerun-if-env-changed=CC_SHELL_ESCAPED_FLAGS
CC_SHELL_ESCAPED_FLAGS = None
CC="cc"
CFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11"
JEMALLOC_REPO_DIR="jemalloc"
cargo:rerun-if-env-changed=JEMALLOC_SYS_WITH_MALLOC_CONF
cargo:rerun-if-env-changed=JEMALLOC_SYS_WITH_LG_PAGE
cargo:rerun-if-env-changed=JEMALLOC_SYS_WITH_LG_HUGEPAGE
cargo:rerun-if-env-changed=JEMALLOC_SYS_WITH_LG_QUANTUM
cargo:rerun-if-env-changed=JEMALLOC_SYS_WITH_LG_VADDR
--with-jemalloc-prefix=_rjem_
running: cd "/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out/build" && CC="cc" CFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11" CPPFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11" LDFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11" "sh" "/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out/build/configure" "--disable-cxx" "--enable-doc=no" "--enable-shared=no" "--with-jemalloc-prefix=_rjem_" "--with-private-namespace=_rjem_" "--host=arm64e-apple-darwin" "--build=aarch64-apple-darwin" "--prefix=/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out"
checking for xsltproc... /usr/bin/xsltproc
checking for arm64e-apple-darwin-gcc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking whether compiler is cray... no
checking whether compiler supports -std=gnu11... yes
checking whether compiler supports -Werror=unknown-warning-option... yes
checking whether compiler supports -Wall... yes
checking whether compiler supports -Wextra... yes
checking whether compiler supports -Wshorten-64-to-32... yes
checking whether compiler supports -Wsign-compare... yes
checking whether compiler supports -Wundef... yes
checking whether compiler supports -Wno-format-zero-length... yes
checking whether compiler supports -Wpointer-arith... yes
checking whether compiler supports -Wno-missing-braces... yes
checking whether compiler supports -Wno-missing-field-initializers... yes
checking whether compiler supports -Wno-missing-attributes... no
checking whether compiler supports -pipe... yes
checking whether compiler supports -g3... yes
checking how to run the C preprocessor... cc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking whether byte ordering is bigendian... no
checking size of void *... 8
checking size of int... 4
checking size of long... 8
checking size of long long... 8
checking size of intmax_t... 8
checking build system type... aarch64-apple-darwin
checking host system type... running: "tail" "-n" "100" "/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out/build/config.log"
dvidir='${docdir}'
enable_autogen=''
enable_cache_oblivious=''
enable_cxx='0'
enable_debug=''
enable_doc='no'
enable_experimental_smallocx=''
enable_fill=''
enable_initial_exec_tls=''
enable_lazy_lock=''
enable_log=''
enable_opt_safety_checks=''
enable_opt_size_checks=''
enable_prof=''
enable_readlinkat=''
enable_shared='no'
enable_static=''
enable_stats=''
enable_tls=''
enable_uaf_detection=''
enable_utrace=''
enable_xmalloc=''
enable_zone_allocator=''
exe=''
exec_prefix='/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out'
host='arm64e-apple-darwin'
host_alias='arm64e-apple-darwin'
host_cpu=''
host_os=''
host_vendor=''
htmldir='${docdir}'
importlib=''
includedir='${prefix}/include'
infodir='${datarootdir}/info'
install_suffix=''
je_=''
jemalloc_version=''
jemalloc_version_bugfix=''
jemalloc_version_gid=''
jemalloc_version_major=''
jemalloc_version_minor=''
jemalloc_version_nrev=''
libdir='${exec_prefix}/lib'
libdl=''
libexecdir='${exec_prefix}/libexec'
libprefix=''
link_whole_archive=''
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
mandir='${datarootdir}/man'
o=''
objroot=''
oldincludedir='/usr/include'
pdfdir='${docdir}'
prefix='/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out'
private_namespace=''
program_transform_name='s,x,x,'
psdir='${docdir}'
rev='2'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
so=''
srcroot=''
sysconfdir='${prefix}/etc'
target_alias=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define PACKAGE_NAME ""
#define PACKAGE_TARNAME ""
#define PACKAGE_VERSION ""
#define PACKAGE_STRING ""
#define PACKAGE_BUGREPORT ""
#define PACKAGE_URL ""
#define JEMALLOC_HAS_RESTRICT
#define STDC_HEADERS 1
#define HAVE_SYS_TYPES_H 1
#define HAVE_SYS_STAT_H 1
#define HAVE_STDLIB_H 1
#define HAVE_STRING_H 1
#define HAVE_MEMORY_H 1
#define HAVE_STRINGS_H 1
#define HAVE_INTTYPES_H 1
#define HAVE_STDINT_H 1
#define HAVE_UNISTD_H 1
#define SIZEOF_VOID_P 8
#define LG_SIZEOF_PTR 3
#define SIZEOF_INT 4
#define LG_SIZEOF_INT 2
#define SIZEOF_LONG 8
#define LG_SIZEOF_LONG 3
#define SIZEOF_LONG_LONG 8
#define LG_SIZEOF_LONG_LONG 3
#define SIZEOF_INTMAX_T 8
#define LG_SIZEOF_INTMAX_T 3
configure: exit 1
--- stderr
Invalid configuration `arm64e-apple-darwin': machine `arm64e-apple' not recognized
configure: error: /bin/sh build-aux/config.sub arm64e-apple-darwin failed
thread 'main' panicked at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/jemalloc-sys-0.5.4+5.3.0-patched/build.rs:351:9:
command did not execute successfully: cd "/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out/build" && CC="cc" CFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11" CPPFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11" LDFLAGS="-O3 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11 -ffunction-sections -fdata-sections -fPIC --target=arm64e-apple-darwin -mmacosx-version-min=11" "sh" "/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out/build/configure" "--disable-cxx" "--enable-doc=no" "--enable-shared=no" "--with-jemalloc-prefix=_rjem_" "--with-private-namespace=_rjem_" "--host=arm64e-apple-darwin" "--build=aarch64-apple-darwin" "--prefix=/Users/runner/work/rust-compiler-builder/rust-compiler-builder/rust/build/aarch64-apple-darwin/stage1-rustc/arm64e-apple-darwin/release/build/jemalloc-sys-5fc1cef2b6f17384/out"
expected success, got: exit status: 1
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
```
</details>
See https://github.com/tikv/jemallocator/issues/102 | O-Arm,O-apple,C-external-bug | low | Critical |
2,597,159,746 | rust | ICE: Unsize coercion, but `&dyn Foo<'_>` isn't coercible to `&dyn Bar<'_, '_, ()>` | <!--
[31mICE[0m: Rustc ./a.rs '-Zvalidate-mir --crate-type=lib -ooutputfile -Zdump-mir-dir=dir' 'thread 'rustc' panicked at compiler/rustc_mir_transform/src/validate.rs:95:25: 'broken MIR in Item(DefId(0:14 ~ a[431e]::test_correct3)) (after pass CheckPackedRef) at bb0[3]:'', 'thread 'rustc' panicked at compiler/rustc_mir_transform/src/validate.rs:95:25: 'broken MIR in Item(DefId(0:14 ~ a[431e]::test_correct3)) (after pass CheckPackedRef) at bb0[3]:''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
#![feature(trait_upcasting, type_alias_impl_trait)]
type Tait = impl Sized;
trait Foo<'a>: Bar<'a, 'a, Tait> {}
trait Bar<'a, 'b, T> {}
fn test_correct3<'a>(x: &dyn Foo<'a>, _: Tait) {
let _ = x as &dyn Bar<'_, '_, ()>;
}
````
original:
````rust
#![feature(trait_upcasting, type_alias_impl_trait)]
//@ check-pass
type Tait = impl Sized;
trait Foo<'a>: Bar<'a, 'a, Tait> {}
trait Bar<'a, 'b, T> {}
fn test_correct(x: &dyn Foo<'static>) {
let _ = x as &dyn Bar<'static, 'static, Tait>;
}
fn test_correct2<'a>(x: &dyn Foo<'a>) {
let _ = x as &dyn Bar<'_, '_, Tait>;
}
fn test_correct3<'a>(x: &dyn Foo<'a>, _: Tait) {
let _ = x as &dyn Bar<'_, '_, ()>;
}
````
Version information
````
rustc 1.84.0-nightly (db8043bb1 2024-10-18)
binary: rustc
commit-hash: db8043bb199705e72246ca43d4af1e9dbe7d55be
commit-date: 2024-10-18
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zvalidate-mir --crate-type=lib`
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_mir_transform/src/validate.rs:95:25:
broken MIR in Item(DefId(0:11 ~ mvce[0ba5]::test_correct3)) (after pass CheckPackedRef) at bb0[3]:
Unsize coercion, but `&dyn Foo<'_>` isn't coercible to `&dyn Bar<'_, '_, ()>`
stack backtrace:
0: 0x78597e480c7a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h311c30006464a44d
1: 0x78597ec03e0a - core::fmt::write::hefa9af5f84579548
2: 0x78597ffa9291 - std::io::Write::write_fmt::h7b02b91c119616de
3: 0x78597e480ad2 - std::sys::backtrace::BacktraceLock::print::h1bf9540f585b1c0c
4: 0x78597e482fb6 - std::panicking::default_hook::{{closure}}::hdddf3b12f17177ae
5: 0x78597e482e00 - std::panicking::default_hook::h95cfcd1ff904ee1d
6: 0x78597d50280f - std[7ea95b089189ea8]::panicking::update_hook::<alloc[906de809b2d05e52]::boxed::Box<rustc_driver_impl[d25ecc56b0b66360]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x78597e4836c8 - std::panicking::rust_panic_with_hook::h4741b39194e01f78
8: 0x78597e48349a - std::panicking::begin_panic_handler::{{closure}}::h99fdffb7c066d2c5
9: 0x78597e481129 - std::sys::backtrace::__rust_end_short_backtrace::h2eb0b5ad24f0eada
10: 0x78597e48315c - rust_begin_unwind
11: 0x78597aee8c20 - core::panicking::panic_fmt::h9d32603254f24257
12: 0x78597b703213 - <rustc_mir_transform[7bdd74c785fb8b70]::validate::CfgChecker>::fail::<alloc[906de809b2d05e52]::string::String>
13: 0x78597b701615 - <rustc_mir_transform[7bdd74c785fb8b70]::validate::Validator as rustc_mir_transform[7bdd74c785fb8b70]::pass_manager::MirPass>::run_pass
14: 0x78597cde1a10 - rustc_mir_transform[7bdd74c785fb8b70]::pass_manager::validate_body
15: 0x78597ec06d29 - rustc_mir_transform[7bdd74c785fb8b70]::mir_built
16: 0x78597ec06907 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::mir_built::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>
17: 0x78597ef68d7a - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
18: 0x78597ef6888d - rustc_query_impl[32db4d22f0b27131]::query_impl::mir_built::get_query_non_incr::__rust_end_short_backtrace
19: 0x78597ec2c929 - rustc_mir_transform[7bdd74c785fb8b70]::ffi_unwind_calls::has_ffi_unwind_calls
20: 0x78597ec2c2d5 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::has_ffi_unwind_calls::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>
21: 0x78597efbd167 - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
22: 0x78597efbcd41 - rustc_query_impl[32db4d22f0b27131]::query_impl::has_ffi_unwind_calls::get_query_non_incr::__rust_end_short_backtrace
23: 0x78597c0a075f - rustc_mir_transform[7bdd74c785fb8b70]::mir_promoted
24: 0x78597f0e2292 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::mir_promoted::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 16usize]>>
25: 0x78597f0e255a - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
26: 0x78597fc2d9d0 - rustc_query_impl[32db4d22f0b27131]::query_impl::mir_promoted::get_query_non_incr::__rust_end_short_backtrace
27: 0x78597fc2dac3 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>
28: 0x78597ef68d7a - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
29: 0x78597ef687d3 - rustc_query_impl[32db4d22f0b27131]::query_impl::mir_borrowck::get_query_non_incr::__rust_end_short_backtrace
30: 0x78597f5c32f9 - rustc_middle[3555d1292876aeec]::query::plumbing::query_get_at::<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>>
31: 0x78597d6fc2d2 - <rustc_hir_analysis[3f1e84bf7d1c6f50]::collect::type_of::opaque::TaitConstraintLocator>::check
32: 0x78597d69a8b2 - <rustc_hir_analysis[3f1e84bf7d1c6f50]::collect::type_of::opaque::TaitConstraintLocator as rustc_hir[5cb3b32afa60e663]::intravisit::Visitor>::visit_nested_item
33: 0x78597fcca3b7 - rustc_hir_analysis[3f1e84bf7d1c6f50]::collect::type_of::type_of_opaque
34: 0x78597fcc9ce5 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::type_of_opaque::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>
35: 0x78597ec2972a - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::DefIdCache<rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
36: 0x78597fe49bb6 - rustc_query_impl[32db4d22f0b27131]::query_impl::type_of_opaque::get_query_non_incr::__rust_end_short_backtrace
37: 0x78597f3be502 - rustc_middle[3555d1292876aeec]::query::plumbing::query_get_at::<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::DefIdCache<rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>>
38: 0x78597bda2135 - rustc_hir_analysis[3f1e84bf7d1c6f50]::collect::type_of::type_of
39: 0x78597ec2aa30 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::type_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>
40: 0x78597ec2972a - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::DefIdCache<rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
41: 0x78597ec292e3 - rustc_query_impl[32db4d22f0b27131]::query_impl::type_of::get_query_non_incr::__rust_end_short_backtrace
42: 0x78597f3be502 - rustc_middle[3555d1292876aeec]::query::plumbing::query_get_at::<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::DefIdCache<rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 8usize]>>>
43: 0x78597fcd57d9 - rustc_hir_analysis[3f1e84bf7d1c6f50]::check::check::check_item_type
44: 0x78597bba12ed - rustc_hir_analysis[3f1e84bf7d1c6f50]::check::wfcheck::check_well_formed
45: 0x78597efbdac7 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>
46: 0x78597efbd21f - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
47: 0x78597efbce90 - rustc_query_impl[32db4d22f0b27131]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
48: 0x78597efbdb47 - rustc_middle[3555d1292876aeec]::query::plumbing::query_ensure_error_guaranteed::<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::VecCache<rustc_span[1759084578182824]::def_id::LocalDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>, ()>
49: 0x78597efbe11b - rustc_hir_analysis[3f1e84bf7d1c6f50]::check::wfcheck::check_mod_type_wf
50: 0x78597efbdb6f - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>
51: 0x78597f74533b - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::DefaultCache<rustc_span[1759084578182824]::def_id::LocalModDefId, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
52: 0x78597f7450ed - rustc_query_impl[32db4d22f0b27131]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
53: 0x78597ef648bb - rustc_hir_analysis[3f1e84bf7d1c6f50]::check_crate
54: 0x78597f519f57 - rustc_interface[ca54b24fb50a6540]::passes::run_required_analyses
55: 0x78597face5de - rustc_interface[ca54b24fb50a6540]::passes::analysis
56: 0x78597face5b1 - rustc_query_impl[32db4d22f0b27131]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[32db4d22f0b27131]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>
57: 0x78597fb7a56e - rustc_query_system[4eccca7fcaf3aaf0]::query::plumbing::try_execute_query::<rustc_query_impl[32db4d22f0b27131]::DynamicConfig<rustc_query_system[4eccca7fcaf3aaf0]::query::caches::SingleCache<rustc_middle[3555d1292876aeec]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[32db4d22f0b27131]::plumbing::QueryCtxt, false>
58: 0x78597fb7a24f - rustc_query_impl[32db4d22f0b27131]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
59: 0x78597fa0e91e - rustc_interface[ca54b24fb50a6540]::interface::run_compiler::<core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>, rustc_driver_impl[d25ecc56b0b66360]::run_compiler::{closure#0}>::{closure#1}
60: 0x78597faab714 - std[7ea95b089189ea8]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[ca54b24fb50a6540]::util::run_in_thread_with_globals<rustc_interface[ca54b24fb50a6540]::util::run_in_thread_pool_with_globals<rustc_interface[ca54b24fb50a6540]::interface::run_compiler<core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>, rustc_driver_impl[d25ecc56b0b66360]::run_compiler::{closure#0}>::{closure#1}, core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>>::{closure#0}, core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>>
61: 0x78597faabb28 - <<std[7ea95b089189ea8]::thread::Builder>::spawn_unchecked_<rustc_interface[ca54b24fb50a6540]::util::run_in_thread_with_globals<rustc_interface[ca54b24fb50a6540]::util::run_in_thread_pool_with_globals<rustc_interface[ca54b24fb50a6540]::interface::run_compiler<core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>, rustc_driver_impl[d25ecc56b0b66360]::run_compiler::{closure#0}>::{closure#1}, core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>>::{closure#0}, core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[8e2925c99194b988]::result::Result<(), rustc_span[1759084578182824]::ErrorGuaranteed>>::{closure#1} as core[8e2925c99194b988]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
62: 0x78597faac5eb - std::sys::pal::unix::thread::Thread::new::thread_start::hfa6ba0429166ab6d
63: 0x78598132939d - <unknown>
64: 0x7859813ae49c - <unknown>
65: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (db8043bb1 2024-10-18) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z validate-mir --crate-type lib -Z dump-mir-dir=dir
query stack during panic:
#0 [mir_built] building MIR for `test_correct3`
#1 [has_ffi_unwind_calls] checking if `test_correct3` contains FFI-unwind calls
end of query stack
```
</p>
</details>
@rustbot label +F-trait_upcasting +F-type_alias_impl_trait | I-ICE,T-compiler,C-bug,F-type_alias_impl_trait,F-trait_upcasting,-Zvalidate-mir,S-bug-has-test | low | Critical |
2,597,168,618 | deno | `deno run -A npm:create-next-app -e hello-world` does not work | Version: Deno 2.0.2
Error:
```
TypeError: Invalid URL: 'hello-world'
at getSerialization (ext:deno_url/00_url.js:98:11)
at new URL (ext:deno_url/00_url.js:405:27)
at createApp (file:///Users/arnauorriols/Library/Caches/deno/npm/registry.npmjs.org/create-next-app/14.2.15/dist/index.js:30034:29)
at run (file:///Users/arnauorriols/Library/Caches/deno/npm/registry.npmjs.org/create-next-app/14.2.15/dist/index.js:30528:27)
at eventLoopTick (ext:core/01_core.js:175:7)
```
| bug,upstream,node compat | low | Critical |
2,597,266,297 | ui | [bug]: Facing installation issue while installing shadcn in nextjs project in MacOs | ### Describe the bug
I am trying to install shadcn in a Next.js project, but it fails with the error below:
npm error A complete log of this run can be found in: /Users/chetanbhoasle/.npm/_logs/2024-10-18T11_24_44_051Z-debug-0.log
chetanbhoasle@chetans-MacBook-Air manager-lead % sudo npx shadcn@latest init
node:internal/modules/cjs/loader:1140
const err = new Error(message);
^
Error: Cannot find module '@babel/types'
Require stack:
- /Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-member-expression-to-functions/lib/index.js
- /Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-replace-supers/lib/index.js
- /Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-create-class-features-plugin/lib/decorators.js
- /Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-create-class-features-plugin/lib/index.js
- /Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/plugin-transform-typescript/lib/index.js
at Module._resolveFilename (node:internal/modules/cjs/loader:1140:15)
at Module._load (node:internal/modules/cjs/loader:981:27)
at Module.require (node:internal/modules/cjs/loader:1231:19)
at require (node:internal/modules/helpers:177:18)
at Object.<anonymous> (/Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-member-expression-to-functions/lib/index.js:5:10)
at Module._compile (node:internal/modules/cjs/loader:1364:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1422:10)
at Module.load (node:internal/modules/cjs/loader:1203:32)
at Module._load (node:internal/modules/cjs/loader:1019:12)
at Module.require (node:internal/modules/cjs/loader:1231:19) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-member-expression-to-functions/lib/index.js',
'/Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-replace-supers/lib/index.js',
'/Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-create-class-features-plugin/lib/decorators.js',
'/Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/helper-create-class-features-plugin/lib/index.js',
'/Users/chetanbhoasle/.npm/_npx/d66c5096c7023bfb/node_modules/@babel/plugin-transform-typescript/lib/index.js'
]
}
Node.js v18.20.4
### Affected component/components
application
### How to reproduce
create a new next js project
install shadcn
### Codesandbox/StackBlitz link
bug
### Logs
_No response_
### System Info
```bash
Mac Os M1 8GB 256SSD
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,597,277,123 | godot | Parser error when using a class_name on a ResourceFormatSaver / ResourceFormatLoader if the class references an autoload | ### Tested versions
Reproducible in:
- 4.2.2 stable
- 4.3 stable
- master (04692d83cb8f61002f18ea1d954df8c558ee84f7)
### System information
Tested on Linux & Windows - any renderer
### Issue description
When making a `ResourceFormatSaver` or `ResourceFormatLoader`, the documentation says they need a `class_name` to be registered as valid loaders / savers.
If any of these scripts or their dependencies references an autoload, when launching the project, this error appears: `Parser Error: Identifier not found: Autoload`
Removing the class_name removes this error.
The only workaround is to manually call `ResourceSaver.add_resource_format_saver(saver)` at runtime after all the autoloads are parsed and loaded.
If this cannot be fixed for technical reasons, it would be great if the error message referenced the root of the issue, because nothing in the stack trace leads back to the format savers.
### Steps to reproduce
- Create a custom class that calls an Autoload at any point (in its script, or in any of its dependencies)
- Create a `ResourceFormatSaver` (or loader) referencing that custom class
- Add a `class_name` to the ResourceFormatSaver
- Start the project
### Minimal reproduction project (MRP)
[MRP resource saver](https://github.com/user-attachments/files/17434661/mrp_resource_saver.zip)
| bug,topic:editor,topic:plugin,needs testing | low | Critical |
2,597,294,853 | flutter | Text widget with stroke style renders incorrectly at large font sizes | ### Steps to reproduce
We have discovered a rendering issue with Text widgets under specific conditions in Flutter. This problem affects both Android and iOS platforms, including iOS with Impeller enabled and disabled.
Conditions to reproduce the issue:
1. Set the Text widget's foreground property to `Paint()..style = PaintingStyle.stroke`
2. Set `fontWeight` to `FontWeight.bold`
3. Increase the font size beyond a certain threshold
When these conditions are met, the text rendering becomes distorted at a specific font size threshold. This threshold varies between devices, suggesting a possible dependency on screen size or resolution.
Steps to reproduce:
1. Create a new Flutter project.
2. Add the google_fonts package to your pubspec.yaml.
3. Replace the main.dart content with the provided sample code.
4. Run the app on both Android and iOS devices or simulators.
5. Observe the text rendering as you increase the font size.
6. Note the point at which the text begins to distort or appear cut off.
### Expected results
The Text widget should render correctly at all font sizes, maintaining its shape and not being cut off, regardless of the stroke style and font weight settings.
### Actual results
- The issue is present in many fonts, but some fonts and characters are less affected or unaffected.
- Android and iOS exhibit slightly different behaviors:
- On Android: Text shape becomes distorted.
- On iOS: Text shape distorts and appears partially cut off.
- On iOS, setting `fontWeight` to `FontWeight.normal` resolves the shape distortion but the cut-off issue persists.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:google_fonts/google_fonts.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: const MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text("Font Size 127"),
Text(
'ABC',
style: GoogleFonts.craftyGirls(
fontSize: 127,
fontWeight: FontWeight.bold,
foreground: Paint()
..style = PaintingStyle.stroke
..strokeWidth = 20
),
),
const SizedBox(height: 20),
Text("Font Size 128"),
Text(
'ABC',
style: GoogleFonts.craftyGirls(
fontSize: 128,
fontWeight: FontWeight.bold,
foreground: Paint()
..style = PaintingStyle.stroke
..strokeWidth = 20
),
),
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
| Pixel 6 (Android 14, physical device) | Pixel Tablet (Android 12, simulator) |
| --------------- | --------------- |
| <img width="400" src="https://github.com/user-attachments/assets/7119be21-68d6-43ac-ae65-d2644f0c414a"> | <img width="400" src="https://github.com/user-attachments/assets/40cc1efc-e9f0-4347-8113-34c8f93c36c0"> |
| iPhone 15 (iOS 17.5, enabled impeller) | iPhone 15 (iOS 17.5, disabled impeller) |
| --------------- | --------------- |
| <img width="400" src="https://github.com/user-attachments/assets/dab24892-96bd-40dc-b6a8-7415b01b0f6f"> | <img width="400" src="https://github.com/user-attachments/assets/441cbe80-3208-43da-8968-2cc073151bfb"> |
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale ja-JP)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.3)
[✓] Android Studio (version 2022.3)
[✓] Android Studio (version 2024.1)
[✓] Android Studio (version 2023.2)
[✓] VS Code (version 1.92.2)
[✓] Connected device (6 available)
```
</details>
| engine,a: typography,c: rendering,has reproducible steps,P2,team-engine,triaged-engine,found in release: 3.24,found in release: 3.27 | low | Minor |
2,597,297,955 | tensorflow | Duplicate Logs After tf.saved_model.save with Custom Logging Configurations | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
Fedora Linux 40
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When using tf.saved_model.save, log messages are duplicated in my application. This happens because TensorFlow adds its own handlers, which interfere with my custom logging setup.
TensorFlow should respect existing logging configurations and avoid adding unnecessary handlers. Some documentation to help align TensorFlow’s logging with user-defined settings would be great.
Setting logger.propagate = False works but requires extra setup.
My suggestion is to check for existing log handlers before adding new ones.
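The duplication mechanism can be sketched with the standard `logging` module alone (no TensorFlow involved): a record emitted on a named logger is handled once by that logger's own handler and then again by any handler attached to the root logger, unless propagation is disabled. This is a hypothetical minimal sketch of that behavior, not TensorFlow's actual code:

```python
import io
import logging


def count_emissions(propagate: bool) -> int:
    """Return how many times a single log record reaches our handlers."""
    stream = io.StringIO()

    # Simulate a library (e.g. TensorFlow) attaching a handler to the root logger.
    root = logging.getLogger()
    root_handler = logging.StreamHandler(stream)
    root.addHandler(root_handler)
    root.setLevel(logging.INFO)

    # The application's own named logger with its own handler, as in the repro above.
    app = logging.getLogger("my_app")
    app_handler = logging.StreamHandler(stream)
    app.addHandler(app_handler)
    app.propagate = propagate

    app.info("hello")

    # Clean up so repeated calls start fresh.
    root.removeHandler(root_handler)
    app.removeHandler(app_handler)
    return stream.getvalue().count("hello")


print(count_emissions(propagate=True))   # -> 2 (duplicated: app handler + root handler)
print(count_emissions(propagate=False))  # -> 1 (app handler only)
```

Setting `propagate = False` on the application logger stops the record from climbing to the root logger's handlers, which is why that workaround removes the duplicates.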
Thanks for looking into this.
### Standalone code to reproduce the issue
Here is the minimal code to reproduce the issue. You can check the code and results in this [colab notebook](https://colab.research.google.com/drive/1IB3IM81FCX9LJ0Ae8uW9p5OCvvJ986fm?usp=sharing) as well.
```python
import logging
import logging.config
import sys
from typing import Tuple
import tensorflow as tf
import numpy as np
def create_keras_sequential_model() -> tf.keras.Model:
return tf.keras.Sequential(
[
tf.keras.layers.Input(shape=(10,)),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dense(1),
]
)
def generate_random_data(num_samples: int = 1000) -> Tuple[np.ndarray, np.ndarray]:
x = np.random.random((num_samples, 10))
y = np.random.random((num_samples, 1))
return x, y
def compile_model(model: tf.keras.Model) -> None:
model.compile(optimizer="sgd", loss="mean_absolute_error", metrics=["mae"])
def fit_model(
model: tf.keras.Model, x: np.ndarray, y: np.ndarray, epochs: int = 10
) -> None:
model.fit(x, y, epochs=epochs, verbose=0)
def create_compile_return_model() -> None:
model = create_keras_sequential_model()
compile_model(model)
x, y = generate_random_data()
fit_model(model, x, y)
return model
def get_custom_formatter_logger() -> logging.Logger:
log_format_with_time = "%(asctime)s - %(levelname)s %(message)s"
logHandler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter(log_format_with_time)
logHandler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logHandler)
# logger.propagate = False #If you add this, then logs are not duplicated
return logger
if __name__ == "__main__":
logger = get_custom_formatter_logger()
logger.info("TF version: %s", tf.__version__)
logger.info("Started")
keras_model = create_compile_return_model()
logger.info("Model created, compiled and trained. Now saving the model!")
tf.saved_model.save(keras_model, "my_fake_model")
logger.info("Model saved!")
logger.info("Logs are now duplicated!")
logger.info("Finished")
```
### Relevant log output
```
2024-10-18 13:23:10,159 - INFO TF version: 2.17.0
2024-10-18 13:23:10,160 - INFO Started
2024-10-18 13:23:11,400 - INFO Model created, compiled and trained. Now saving the model!
2024-10-18 13:23:11,832 - INFO Model saved!
INFO:__main__:Model saved!
2024-10-18 13:23:11,832 - INFO Logs are now duplicated!
INFO:__main__:Logs are now duplicated!
2024-10-18 13:23:11,832 - INFO Finished
INFO:__main__:Finished
```
| type:others,comp:apis,2.17 | medium | Critical |
2,597,306,216 | rust | projecting to assoc type of supertrait that is implemented differently for trait object goes wrong |
I tried this code: [playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=06b34ccd6cabc944d63ff50b2fdcc472)
```rust
#![feature(ptr_metadata)]
use std::ptr::Pointee;
trait Trait: Pointee {
fn meta(&self) -> Self::Metadata;
}
impl Trait for () {
fn meta(&self) {}
}
fn main() {
let d: &dyn Trait<Metadata = ()> = &();
let m = d.meta();
dbg!(m);
}
```
I expected to see this happen: The type of `m` is `()` (or `Trait` is dyn incompatible).
Instead, this happened: The type of `m` is `DynMetadata<dyn Trait<Metadata = ()>>`.
Presumably this affects all traits with `#[rustc_deny_explicit_impl(implement_via_object = false)]` and associated types.
Miri does report UB for this code:
```
error: Undefined Behavior: calling a function with return type () passing return place of type std::ptr::DynMetadata<dyn Trait<Metadata = ()>>
--> src/main.rs:15:13
|
15 | let m = d.meta();
| ^^^^^^^^ calling a function with return type () passing return place of type std::ptr::DynMetadata<dyn Trait<Metadata = ()>>
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= help: this means these two types are not *guaranteed* to be ABI-compatible across all targets
= help: if you think this code should be accepted anyway, please report an issue with Miri
= note: BACKTRACE:
= note: inside `main` at src/main.rs:15:13: 15:21
```
### Meta
playground nightly
```
Build using the Nightly version: 1.84.0-nightly
(2024-10-17 3ed6e3cc69857129c1d3)
```
@rustbot label A-trait-objects A-associated-items T-types I-unsound requires-nightly -needs-triage | A-associated-items,I-unsound,C-bug,requires-nightly,T-types,A-trait-objects | low | Critical |
2,597,447,181 | rust | Volatile reads and writes on aarch64 sometimes generate instructions not suitable for MMIO in protected VMs | `core::ptr::write_volatile` and `core::ptr::read_volatile` are documented as being intended to act on I/O memory, i.e. for MMIO. These are indeed widely used by many crates providing drivers for MMIO devices across the ecosystem.
When running in a virtual machine, MMIO must be emulated by the hypervisor. This is done (on aarch64 at least) by having the MMIO region unmapped in the stage 2 page table, which results in a data abort to the hypervisor when the VM attempts to read or write the MMIO region. The hypervisor then decodes the exception syndrome register (`esr_el2`) and uses the fault address register (`far_el2`) to determine which MMIO address is being accessed and perform the appropriate operation in response.
Unfortunately, rustc sometimes compiles `core::ptr::write_volatile` on aarch64 to something like `str w9, [x0], #4`. We've seen this happen particularly since Rust 1.78, but it may be possible with earlier Rust versions too. The problem with this is that this post-addressing mode is performing register writeback (in this case, incrementing `x0` by 4), and so [doesn't set the exception syndrome register](https://developer.arm.com/documentation/ddi0595/2020-12/AArch64-Registers/ESR-EL1--Exception-Syndrome-Register--EL1-?lang=en#fieldset_0-24_0_15-24_24). This prevents the hypervisor from emulating the MMIO access, as it has no way of decoding the instruction syndrome or finding the faulting address.
In an unprotected VM (e.g. regular KVM), it is possible for the VMM to work around this by reading the guest VM's memory to find the relevant instruction, decoding the instruction manually, and finding the MMIO address that way. This has a performance overhead and adds extra complexity. In the case of a protected VM where the host doesn't have access to the guest VM's memory (e.g. protected KVM), this is not possible as the VMM is not able to read the guest VM's memory and so cannot do instruction decoding. There is thus no way to emulate these attempted MMIO accesses in a protected VM on aarch64.
The net result of this is that instructions which perform register writeback (e.g. post-increment addressing modes) are not suitable for MMIO in aarch64 VMs. This is arguably a flaw in the aarch64 architecture, but as that's not feasible to fix at this point it must be fixed in the compiler instead. rustc should therefore avoid generating such instructions for `volatile_read` and `volatile_write` calls.
The only alternative I can see to fixing this in rustc is for every crate which performs MMIO to use inline assembly rather than `volatile_read` / `volatile_write`, but that is not a very feasible or scalable solution. | A-LLVM,A-codegen,T-compiler,C-bug,S-has-mcve,O-AArch64 | low | Major |
2,597,515,612 | PowerToys | Mini window overlayed on ALL APPLICATIONS | ### Microsoft PowerToys version
0.85.1
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
When you start PowerToys, a small but annoying window appears in the top-left corner, no matter how many times you reopen the app.
It overlays on top of whatever application is in the foreground. [PowerToysReport_2024-10-18-07-22-43.zip](https://github.com/user-attachments/files/17436116/PowerToysReport_2024-10-18-07-22-43.zip)
attached image

### ✔️ Expected Behavior
This is how it looks once I closed the application

### ❌ Actual Behavior
it overlays on top of any application

### Other Software
every application installed on my computer has this window overlaid on top of it | Issue-Bug,Area-Runner,Needs-Repro | low | Minor |
2,597,548,071 | langchain | Problem parsing a dictionary of lists | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I want to get "a" as a key in ppp, but the code below (using Dict) fails:
```
import os
from typing import Dict, List

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini-2024-07-18", temperature=0.0)

class A(BaseModel):
    a_1: str
    a_2: str
    r: str

class B(BaseModel):
    b_1: str
    b_2: str
    r: str

class C(BaseModel):
    ccc: List[A]
    ppp: Dict[str, List[B]]

structured_llm = model.with_structured_output(C)
response = structured_llm.invoke(prompt)  # `prompt` is defined elsewhere
```
### Error Message and Stack Trace (if applicable)
ValidationError: 1 validation error for C
ppp
Field required [type=missing, input_value={'ccc': [{'a_1': 'Price',...tant to Battery Life'}]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
### Description
I have code that works:
```
import os
from typing import List

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini-2024-07-18", temperature=0.0)

class A(BaseModel):
    a_1: str
    a_2: str
    r: str

class B(BaseModel):
    a: str
    b_1: str
    b_2: str
    r: str

class C(BaseModel):
    ccc: List[A]
    ppp: List[B]

structured_llm = model.with_structured_output(C)
response = structured_llm.invoke(prompt)  # `prompt` is defined elsewhere
```
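For what it's worth, pydantic itself has no trouble validating `Dict[str, List[B]]` when the key is present, which suggests the failure is in how the model fills (or the tool-call schema represents) the dictionary field, not in the annotation. A minimal sketch (`B` reduced to one field for brevity):

```python
from typing import Dict, List

from pydantic import BaseModel


class B(BaseModel):
    b_1: str


class C(BaseModel):
    ppp: Dict[str, List[B]]


# Validating a payload that *does* contain the "a" key works fine.
c = C.model_validate({"ppp": {"a": [{"b_1": "x"}]}})
print(list(c.ppp.keys()))
```

So the ValidationError above means the model's tool-call output simply omitted `ppp`, not that the type annotation is invalid.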
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.17
> langsmith: 0.1.136
> langchain_openai: 0.1.21
> langchain_text_splitters: 0.2.4
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.52.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug,stale | low | Critical |
2,597,582,200 | go | x/website: mod tidy -diff flag missing from https://go.dev/ref/mod#go-mod-tidy | ### Go version
go version go1.23.2 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
N/A
```
### What did you do?
I happened to know that `go mod tidy -diff` was [added in 1.23](https://github.com/golang/go/issues/27005) 🎉 , but a quick web search for `go mod tidy` returned https://go.dev/ref/mod#go-mod-tidy, which does not list the new `-diff` flag. It is correctly shown in `cmd/go` help text:
```console
$ go mod tidy -help
usage: go mod tidy [-e] [-v] [-x] [-diff] [-go=version] [-compat=version]
Run 'go help mod tidy' for details.
$ go help mod tidy
usage: go mod tidy [-e] [-v] [-x] [-diff] [-go=version] [-compat=version]
Tidy makes sure go.mod matches the source code in the module.
It adds any missing modules necessary to build the current module's
packages and dependencies, and it removes unused modules that
don't provide any relevant packages. It also adds any missing entries
to go.sum and removes any unnecessary ones.
The -v flag causes tidy to print information about removed modules
to standard error.
The -e flag causes tidy to attempt to proceed despite errors
encountered while loading packages.
The -diff flag causes tidy not to modify go.mod or go.sum but
instead print the necessary changes as a unified diff. It exits
with a non-zero code if the diff is not empty.
The -go flag causes tidy to update the 'go' directive in the go.mod
file to the given version, which may change which module dependencies
are retained as explicit requirements in the go.mod file.
(Go versions 1.17 and higher retain more requirements in order to
support lazy module loading.)
The -compat flag preserves any additional checksums needed for the
'go' command from the indicated major Go release to successfully load
the module graph, and causes tidy to error out if that version of the
'go' command would load any imported package from a different module
version. By default, tidy acts as if the -compat flag were set to the
version prior to the one indicated by the 'go' directive in the go.mod
file.
The -x flag causes tidy to print the commands download executes.
See https://golang.org/ref/mod#go-mod-tidy for more about 'go mod tidy'.
```
### What did you see happen?
The command help also points to the website, which doesn't list the flag.
### What did you expect to see?
`-diff` flag documentation added on the website. 😄 Apologies if this isn't the correct place to file this sort of issue (was just following the directions at the bottom of https://github.com/golang/website/blob/master/README.md#report-issues--send-patches). | help wanted,NeedsFix,website | low | Critical |
2,597,582,674 | flutter | `CircularProgressIndicator` stretches when given tight constraints with `FlexFit.tight` | ### Steps to reproduce
Place a `CircularProgressIndicator` inside a `Flex` with `FlexFit.tight` (or any other widget that gives it tight constraints, like a `SizedBox`).
### Expected results
The indicator should either scale keeping the aspect ratio or not scale at all.
### Actual results
The indicator is stretched to fill the constraints on one of the axes.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: Scaffold(
body: Align(
alignment: Alignment.center,
child: Column(
crossAxisAlignment: CrossAxisAlignment.center,
children: [
Flexible(
fit: FlexFit.tight,
child: CircularProgressIndicator(),
),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/5d9fff1e-8cea-4292-a5f3-b7c762472e0a
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale en-GB)
• Flutter version 3.24.3 on channel stable at /Users/alex/workspace/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (5 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
• Android SDK at /Users/alex/Library/Android/sdk
• Platform android-34, build-tools 33.0.1
• ANDROID_HOME = /Users/alex/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✗] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.94.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (3 available)
• Alex’s iPhone (mobile) • 00008130-001A310C0491401C • ios • iOS 18.0.1 22A3370
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64
[✓] Network resources
• All expected network resources are available.
```
</details>
| framework,f: material design,d: api docs,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Minor |
2,597,932,158 | langchain | Pydantic validation error on langchain_community.chat_models.ChatLiteLLMRouter | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
ChatLiteLLMRouter
```python
from langchain_core.prompts import ChatPromptTemplate
from litellm.router import Router
from langchain_community.chat_models import ChatLiteLLMRouter
_prompt = ChatPromptTemplate.from_messages(
[
("human", "You are an assistant. {input}"),
]
)
model_list = [
{
"model_name": "claude_3_haiku",
"litellm_params": {
"model": "bedrock/anthropic.claude-3-haiku-20240307-v1:0",
},
},
{
"model_name": "claude_3_sonnet",
"litellm_params": {
"model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
},
},
]
litellm_router = Router(model_list=model_list)
_modellm=ChatLiteLLMRouter(router=litellm_router)
```
### Error Message and Stack Trace (if applicable)
pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatLiteLLMRouter
router
Field required [type=missing, input_value={'name': None, 'cache': N...ogether_ai_api_key': ''}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
### Description
I'm trying to create an instance of ChatLiteLLMRouter, passing the required `router` parameter as:
```python
_modellm=ChatLiteLLMRouter(router=litellm_router)
```
expected: the ChatLiteLLMRouter object is created
actual: exception raised because do not detect router parameter being passed:
```python
pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatLiteLLMRouter
router
Field required [type=missing, input_value={'name': None, 'cache': N...ogether_ai_api_key': ''}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Mar 2 00:30:59 UTC 2022
> Python Version: 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.136
> langchain_aws: 0.2.2
> langchain_cli: 0.0.31
> langchain_text_splitters: 0.3.0
> langserve: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> boto3: 1.35.43
> dataclasses-json: 0.6.7
> fastapi: 0.115.2
> gitpython: 3.1.43
> gritql: 0.1.5
> httpx: 0.27.2
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> tomlkit: 0.12.5
> typer[all]: Installed. No version info available.
> typing-extensions: 4.12.2
> uvicorn: 0.23.2 | 🤖:bug | low | Critical |
2,597,635,032 | pytorch | DTensor RNG state for non CUDA backends | ### 🐛 Describe the bug
DTensor random numbers provide an offset based RNG state tracker OffsetBasedRNGTracker for CUDA. However, for CPU, this offset based RNG state tracker is not available, and it produces a warning -
```
$ !cat dtensor_test.py
import torch
import os
import torch
from torch.distributed._tensor import init_device_mesh, Shard, distribute_tensor
mesh = init_device_mesh("cpu", (int(os.environ["WORLD_SIZE"]),))
t = torch.randn(100, 8)
dt = distribute_tensor(t, mesh, [Shard(dim=0)])
dt.uniform_(0, 1)
```
The warning comes as -
```
$ !torchrun --nnodes=1 --nproc-per-node=4 dtensor_test.py
...
warnings.warn(
/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/_random.py:45: UserWarning: DTensor random operators may not have complete support on cpu device mesh
```
Random ops mesh support for a backend is checked with is_rng_supported_mesh(), which checks for the presence of hasattr(device_handle, "set_rng_state").
If a backend uses the CPU RNG state and has set_rng_state() implemented, is_rng_supported_mesh() returns True and the DTensor random mechanism tries to use OffsetBasedRNGTracker. The seed- and offset-based APIs assume a CUDA-like, offset-based RNG state. The CPU RNG state doesn't fit this layout and fails as shown below -
```
[rank0]: File ".../.local/lib/python3.10/site-packages/torch/distributed/_tensor/random.py", line 176, in _distribute_region
[rank0]: old_offset = self.get_offset("parallel-rng")
[rank0]: File ".../.local/lib/python3.10/site-packages/torch/distributed/_tensor/random.py", line 195, in get_offset
[rank0]: return int(offset_tensor.item())
[rank0]: RuntimeError: a Tensor with 631 elements cannot be converted to Scalar
```
It would be better if is_rng_supported_mesh() required explicit support from the backend RNG, maybe something like hasattr(device_handle, "rng_mesh_supported"). That way, backends that use the CPU RNG state and have set_rng_state() implemented would not try to use the OffsetBasedRNGTracker.
Also, is there a plan to better support DTensor random ops on CPU? It would not be an offset-based RNG tracker, but the seed could be advanced correctly for the CPU RNG state after each DTensor random op.
### Versions
!python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0.dev20240923+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.3.0
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.6.0.dev20240923+cu121
[pip3] torchaudio==2.5.0.dev20240923+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0.dev20240923+cu121
[conda] Could not collect
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,597,753,427 | rust | regression: ArArchiveBuilder can no longer handle output to temporary files managed by Python |
### Code
I tried this code:
```python
x = ['rustc.EXE', '--print=native-static-libs', '--target', 'x86_64-pc-windows-msvc', '--crate-type', 'staticlib', 'nul', '-o']
from tempfile import TemporaryFile
y = TemporaryFile()
x += [y.name]
import subprocess
subprocess.run(x, check=True, text=True)
```
I expected to see this happen:
```
note: Link against the following native artifacts when linking against this static library. The order and any duplication can be significant on some platforms.
note: native-static-libs: kernel32.lib advapi32.lib ntdll.lib userenv.lib ws2_32.lib dbghelp.lib /defaultlib:msvcrt
```
Instead, this happened:
```
error: failed to build archive at `C:\Users\Amalia\AppData\Local\Temp\tmpk51p4sru`: failed to rename archive file: Acceso denegado. (os error 5)
error: aborting due to 1 previous error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python310\lib\subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['rustc.EXE', '--print=native-static-libs', '--target', 'x86_64-pc-windows-msvc', '--crate-type', 'staticlib', 'nul', '-o', 'C:\\Users\\Amalia\\AppData\\Local\\Temp\\tmpk51p4sru']' returned non-zero exit status 1.
```
This breaks librsvg building on MSVC, because I use this technique to [query the `native-static-libs` for the Meson dependency setup](https://gitlab.gnome.org/GNOME/librsvg/-/blob/main/meson/query-rustc.py?ref_type=heads#L67).
I traced the issue to the following pull request: https://github.com/rust-lang/rust/pull/122723
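A possible workaround on the Python side (an assumption on my part: the rename fails because `TemporaryFile` keeps an open handle on the destination path) is to pass rustc a path inside a temporary directory instead of an already-open temporary file:

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    # No Python-held handle is open on this path, so rustc's
    # rename-into-place step should not be denied access.
    out = os.path.join(d, "query.lib")
    cmd = [
        "rustc", "--print=native-static-libs",
        "--target", "x86_64-pc-windows-msvc",
        "--crate-type", "staticlib", "nul", "-o", out,
    ]
    # subprocess.run(cmd, check=True, text=True)  # needs an MSVC rustc
print(cmd[-2], os.path.basename(cmd[-1]))
```

This sidesteps the failure, but the behavior change relative to 1.77 still seems like a regression worth addressing.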
### Version it worked on
It most recently worked on: 1.77.2
### Version with regression
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-pc-windows-msvc
release: 1.82.0
LLVM version: 19.1.1
```
### Backtrace
N/A
@rustbot modify labels: +regression-from-stable-to-stable -regression-untriaged
cc @sdroege @nirbheek @federicomenaquintero | C-discussion | low | Critical |
2,597,862,194 | angular | Using a track function directly in the new @for loop is not supported nor compiled | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
The following code
```
@for (a of b; track method) {}
```
and having
```
method = (index, item) => { return index }
```
The method is actually not called.
It comes as a surprise because the language service links it properly and everything, but the tracking method is never actually run.
Someone coming from an `*ngFor` will blindly switch `trackBy` to `track`, and things will start failing at runtime.
If this is the intended behavior, then the documentation should reflect that you have to call `method($index, a)`, but this is not obvious.
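For clarity, the template has to invoke the function itself, e.g. (using the names from the snippets above):

```
@for (a of b; track method($index, a)) {}
```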
### Please provide a link to a minimal reproduction of the bug
https://angular-e967ee.stackblitz.io
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 17.3.2
Node: 20.11.1
Package Manager: npm 8.5.5
OS: win32 x64
Angular: 17.3.2
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, google-maps, language-service, localize, material
... platform-browser, platform-browser-dynamic, platform-server
... router, ssr
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1703.2
@angular-devkit/build-angular 17.3.2
@angular-devkit/core 17.3.2
@angular-devkit/schematics 17.3.2
@schematics/angular 17.3.2
rxjs 7.8.1
typescript 5.2.2
zone.js 0.14.4
### Anything else?
_No response_ | area: core,P3,bug,core: control flow | low | Critical |
2,597,932,158 | langchain | include_raw=True disable streaming | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hello, there is no way to retrieve the tokens used (usage) with Azure when using streaming and structured_output.
```python
# `llm` and `Joke` as defined in the linked structured-output docs
structured_llm = llm.with_structured_output(Joke, include_raw=True)
for chunk in structured_llm.stream("Tell me a joke about cats"):
    print(chunk)
```
https://python.langchain.com/docs/how_to/structured_output/
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This code disables streaming. We have to wait for the complete generation to get a response.
Expected behavior: We should be able to use streaming and retrieve the usage metadata at the end.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Oct 2 16:16:55 UTC 2
> Python Version: 3.11.10 | packaged by conda-forge | (main, Sep 22 2024, 14:10:38) [GCC 13.3.0]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.3
> langsmith: 0.1.128
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: Installed. No version info available.
> httpx: 0.27.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.52.0
> orjson: 3.10.5
> packaging: 24.1
> pydantic: 2.7.4
> PyYAML: 6.0.1
> requests: 2.32.3
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,597,953,873 | ollama | Support directly running GGUF files without importing | In llama.cpp we can directly run models with `llama-cli -m your_model.gguf ` without having to import the model, It would be great if we can do the same with ollama. | feature request | low | Minor |
2,597,955,756 | pytorch | quantile does not work with float16: throws RuntimeError: quantile() input tensor must be either float or double dtype | ### 🐛 Describe the bug
```python
import numpy as np
import torch

data = torch.tensor(np.array([[1, 2], [3, 4], [5, 6]], dtype=np.float32))
q1 = torch.quantile(data, 0.25, dim=0)  # works

data = torch.tensor(np.array([[1, 2], [3, 4], [5, 6]], dtype=np.float16))
q1 = torch.quantile(data, 0.25, dim=0)  # throws RuntimeError: quantile() input tensor must be either float or double dtype
```
float16 is a valid float type, so the above call to quantile with float16 should succeed.
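Until float16 is supported directly, a workaround is to cast up to float32 for the computation and cast the result back (a sketch; the round-trip costs an extra copy but gives the expected quantiles for these values):

```python
import numpy as np
import torch

data = torch.tensor(np.array([[1, 2], [3, 4], [5, 6]], dtype=np.float16))
# quantile() rejects float16, so compute in float32 and cast the result back
q1 = torch.quantile(data.float(), 0.25, dim=0).to(torch.float16)
print(q1)
```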
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1062-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 2 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.40
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.3.1+cu121
[pip3] torchaudio==2.3.1+cu121
[pip3] torchvision==0.18.1+cu121
[pip3] triton==2.3.1
[conda] No relevant packages
cc @ptrblck @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: cuda,module: cpu,triaged,module: half | low | Critical |
2,597,982,623 | deno | VSCode diagnostics for auto-detected JS do not disappear when explicitly correcting the language mode to "JSON" | Version: Deno 2.0.0
In VSCode, I hit Ctrl+N to open a new unsaved buffer, then paste some JSON. VSCode autodetects the language as Javascript, so Deno generates diagnostics about missing semicolons, etc. Then I manually set language mode to JSON to fix syntax highlighting, yet the Deno diagnostics do not disappear.
Not sure if this is a VSCode bug (still asking Deno for diagnostics when it shouldn't) or a Deno bug (not removing the old diagnostics once VSCode changes the language mode).
The images below show my editor buffer, my problems pane, and the language mode, plus you can see the tooltip showing that VSCode auto-detected the language as Javascript.



Language server init w/version #
```
Starting Deno language server...
version: 2.0.0 (release, x86_64-pc-windows-msvc)
executable: C:\Users\abradley\AppData\Local\Microsoft\WinGet\Links\deno.EXE
Connected to "Visual Studio Code" 1.94.2
```
| needs investigation,lsp | low | Critical |
2,597,986,525 | flutter | rrect paths draw rects when their width or height is zero | ## reproduction
```c++
TEST_P(AiksTest, NoDimplesInRRectPath) {
Scalar width = 200.f;
Scalar height = 60.f;
Scalar corner = 1.f;
auto callback = [&]() -> sk_sp<DisplayList> {
if (AiksTest::ImGuiBegin("Controls", nullptr,
ImGuiWindowFlags_AlwaysAutoResize)) {
ImGui::SliderFloat("width", &width, 0, 200);
ImGui::SliderFloat("height", &height, 0, 200);
ImGui::SliderFloat("corner", &corner, 0, 1);
ImGui::End();
}
DisplayListBuilder builder;
builder.Scale(GetContentScale().x, GetContentScale().y);
DlPaint background_paint;
background_paint.setColor(DlColor(1, 0.1, 0.1, 0.1, DlColorSpace::kSRGB));
builder.DrawPaint(background_paint);
std::vector<DlColor> colors = {DlColor::kRed(), DlColor::kBlue()};
std::vector<Scalar> stops = {0.0, 1.0};
DlPaint paint;
auto gradient = DlColorSource::MakeLinear(
{0, 0}, {200, 200}, 2, colors.data(), stops.data(), DlTileMode::kClamp);
paint.setColorSource(gradient);
paint.setColor(DlColor::kWhite());
paint.setDrawStyle(DlDrawStyle::kStroke);
paint.setStrokeWidth(20);
builder.Save();
builder.Translate(100, 100);
Scalar corner_x = ((1 - corner) * 50) + 50;
Scalar corner_y = corner * 50 + 50;
SkRRect rrect = SkRRect::MakeRectXY(SkRect::MakeXYWH(0, 0, width, height),
corner_x, corner_y);
builder.DrawRRect(rrect, paint);
builder.Restore();
return builder.Build();
};
ASSERT_TRUE(OpenPlaygroundHere(callback));
}
```
## seen
<img width="721" alt="Screenshot 2024-10-18 at 9 51 44 AM" src="https://github.com/user-attachments/assets/536f18c5-3b4b-4dd1-b0ee-905cfa78a67f">
## expected
<img width="732" alt="Screenshot 2024-10-18 at 9 51 53 AM" src="https://github.com/user-attachments/assets/8e297758-c114-44e3-89b6-d5bb72ad2be6">
| P2,e: impeller,team-engine,triaged-engine | low | Major |
2,598,000,176 | godot | SceneTreeTimer breaks when playing a video in loop | ### Tested versions
v4.3 stable
### System information
Windows 10, Ryzen 3 3300X, 16 GB RAM at 2666 MHz, GTX 1660 6 GB.
### Issue description
If the video is looping, the timer breaks for some reason and never emits the timeout signal.
```gdscript
@onready var _video: VideoStreamPlayer = $video

var timer : SceneTreeTimer

func _ready() -> void:
    print('Starting')
    await _execute_and_stop()
    print('\n\nFinished!\n\n')

func _execute_and_stop() -> void:
    _video.play()
    timer = get_tree().create_timer(10, true) # Error.
    #timer = get_tree().create_timer(10, true, true) # ok.
    await timer.timeout
    _video.stop()
```
Playing a 6.9166667 s video (166 frames at 24 fps) with a 10 s timer, this is the log I get when printing 'time_left' on the SceneTreeTimer:
```
3.30371609090929
3.29765548484868
3.29159487878808
3.28553427272747
3.27947366666686
3.27341306060626
3.26735245454565
3.26129184848505
3.25523124242444
3.24917063636383
3.24311003030323
3.23704942424262
3.23098881818202
3.22492821212141
3.2188676060608
3.2128070000002
3.20674639393959
3.20068578787899
3.19462518181838
3.18856457575778
3.18250396969717
3.17644336363656
3.17038275757596
3.16432215151535
3.15826154545475
3.15220093939414
3.14614033333353
3.14007972727293
3.13401912121232
3.12795851515172
3.12189790909111
3.1158373030305
3.1097766969699
3.10371609090929
3.09765548484869
3.09159487878808
3.08553427272748
3.07947366666687
3.07341306060626
3.06735245454566
3.06129184848505
3.05523124242445
14745887137786335000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
14745887137786335000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
14745887137786335000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
14745887137786335000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
14745887137786335000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
```
In the MRP, there are 2 videos. For reasons I don't understand, the timer works fine when updated on the physics frame, but not on the process frame. Also, video1 works fine, but video2 breaks the timer...
### Steps to reproduce
As shown in the script above.
### Minimal reproduction project (MRP)
[demo_timer.zip](https://github.com/user-attachments/files/17438564/demo_timer.zip)
| bug,topic:core,needs testing | low | Critical |
2,598,039,347 | ollama | Last character being truncated by stop sequence | ### What is the issue?
When running inference in raw mode with '\n\n' as a stop sequence, punctuation immediately before the stop sequence seems to be removed along with it. I assume this is caused by a bug in how partial stop-sequence matches are handled.
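For illustration only (this is not Ollama's actual implementation), a hypothetical off-by-one in partial stop-sequence trimming reproduces the symptom: the character immediately before a trailing partial match of the stop sequence gets dropped along with it:

```python
def trim_partial_stop_buggy(text: str, stop: str) -> str:
    # Find the longest suffix of `text` that is a proper prefix of `stop`
    # and trim it -- but with an off-by-one that also drops the character
    # just before the partial match.
    for k in range(len(stop) - 1, 0, -1):
        if text.endswith(stop[:k]):
            return text[: -(k + 1)]
    return text

def trim_partial_stop_fixed(text: str, stop: str) -> str:
    # Trim exactly the held-back partial match, nothing more.
    for k in range(len(stop) - 1, 0, -1):
        if text.endswith(stop[:k]):
            return text[:-k]
    return text

print(trim_partial_stop_buggy("All done.\n", "\n\n"))  # 'All done'  (period lost)
print(trim_partial_stop_fixed("All done.\n", "\n\n"))  # 'All done.'
```

If the real code does something along these lines, the fix is to trim only the characters that actually belong to the partial stop-sequence match.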
### OS
Linux
### GPU
Other
### CPU
AMD
### Ollama version
0.3.11 | bug,needs more info | low | Critical |
2,598,045,173 | ollama | Return Triggered Stop Sequence | It would be extremely useful if the API response contained *which* stop sequence was triggered if there are multiple listed. For example, if you have the model's default stop sequence and a custom one which you want to trigger an action, you currently need to carefully choose the text leading up to it so you can determine if it was the custom stop sequence or not. It would be way easier to do this sort of thing if the api simply told you which stop sequence was triggered instead of `"done_reason": "stop"`. | feature request,api | low | Minor |
2,598,045,686 | Python | there is no recursion folder | ### Feature description
There are no recursion-related problems, and there is no separate folder for them. | enhancement | medium | Minor |
2,598,052,005 | flutter | DropdownMenu, Menu, and ComboBox widget requires roles, ARIA properties, and keyboard support (Accessibility) | ### Use case
The DropdownMenu widget requires roles, ARIA properties, and expected keyboard support to meet accessibility compliance requirements outlined in the Web Content Accessibility Guidelines (WCAG).
### Proposal
### WAI-ARIA Roles, States, and Properties
**Combobox element**
- Use `role="combobox"` to define the container element, and `tabindex="0"` to include the combobox in the Tab sequence.
- Either use `aria-label` or `aria-labelledby="[IDREF]"` on the combobox element, with [IDREF] referencing the unique ID of the label container
- Use `aria-expanded="true"` on the container element when the dropdown panel is expanded, and `aria-expanded="false"` when collapsed.
- Use `aria-controls="[IDREF]"` on the container element, with [IDREF] referencing the unique ID of the list container that displays the options.
- Use `aria-haspopup="listbox"` on the container to signify that the combobox has a pop-up list.
- Use `aria-activedescendant="[IDREF]"` on the container, with [IDREF] referencing the unique ID of the option in the listbox that has keyboard focus (when applicable)
- When a combobox element is in a disabled state, use `aria-disabled="true"` on the overall combobox control so screen reader users are still allowed to access the combobox, announcing that it is disabled. Use the `disabled` attribute on the individual options.
**Listbox popup**
- Use `role="listbox"` for the element that contains all of the options.
- Use `role="option"` for each option contained within the listbox.
- Use `aria-selected="true"` on an option in the listbox when it is visually highlighted as selected.
- Use `aria-multiselectable="true"` on the listbox element if the listbox supports selection of more than one option.
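For reference, a minimal HTML sketch of the combobox/listbox semantics listed above (element names, IDs, and labels are illustrative, not part of any Flutter output):

```html
<label id="fruit-label">Fruit</label>
<div role="combobox"
     tabindex="0"
     aria-labelledby="fruit-label"
     aria-expanded="true"
     aria-controls="fruit-listbox"
     aria-haspopup="listbox"
     aria-activedescendant="opt-apple">
</div>
<ul role="listbox" id="fruit-listbox">
  <li role="option" id="opt-apple" aria-selected="true">Apple</li>
  <li role="option" id="opt-pear">Pear</li>
</ul>
```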
### Combobox Keyboard Interaction
When focus is in the combobox:
- **Down Arrow**: Places focus on the first focusable element in the popup.
- **Up Arrow**: If the popup is available, places focus on the last focusable element in the popup.
- **Escape**: Dismisses the popup if it is visible. Optionally, if the popup is hidden before Escape is pressed, clears the combobox.
- **Enter**: If the combobox is editable and an autocomplete suggestion is selected in the popup, accepts the suggestion either by placing the input cursor at the end of the accepted value in the combobox or by performing a default action on the value. For example, in a messaging application, the default action may be to add the accepted value to a list of message recipients and then clear the combobox so the user can add another recipient.
- **Printable Characters**: If the combobox is not editable, moves focus to a value that starts with the typed characters.
### Listbox Popup Keyboard Interaction
When focus is in a listbox popup:
- **Enter**: Accepts the focused option in the listbox by closing the popup, placing the accepted value in the combobox, and if the combobox is editable, placing the input cursor at the end of the value.
- **Escape**: Closes the popup and returns focus to the combobox. Optionally, if the combobox is editable, clears the contents of the combobox.
- **Down Arrow**: Moves focus to and selects the next option. If focus is on the last option, either returns focus to the combobox or does nothing.
- **Up Arrow**: Moves focus to and selects the previous option. If focus is on the first option, either returns focus to the combobox or does nothing.
- **Right Arrow**: If the combobox is editable, returns focus to the combobox without closing the popup and moves the input cursor one character to the right. If the input cursor is on the right-most character, the cursor does not move.
- **Left Arrow**: If the combobox is editable, returns focus to the combobox without closing the popup and moves the input cursor one character to the left. If the input cursor is on the left-most character, the cursor does not move.
- **Home**: Either moves focus to and selects the first option or, if the combobox is editable, returns focus to the combobox and places the cursor on the first character.
- **End**: Either moves focus to the last option or, if the combobox is editable, returns focus to the combobox and places the cursor after the last character.
- **Any printable character**: Moves focus to the next option with a name that starts with the characters typed. | c: new feature,framework,f: material design,a: accessibility,platform-web,c: proposal,P1,customer: castaway,team-accessibility,triaged-accessibility | medium | Major |
2,598,061,070 | electron | calling setJumpList/setUserTasks silently fails AFTER window creation unless at least one valid call was made BEFORE window creation | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.0.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 Pro Version 10.0.22631 Build 22631
### What arch are you using?
x64
### Last Known Working Electron version
N/A
### Expected Behavior
`setUserTasks`/`setJumpList` functions can update the Windows jump list with a single call after the BrowserWindow is instantiated
### Actual Behavior
`setUserTasks`/`setJumpList` functions must be called at least once before the BrowserWindow is instantiated, then called a second time afterwards.
### Testcase Gist URL
https://gist.github.com/slyboots/5b3a46ff41f0894e13bac4187b068a90
### Additional Information
There are a few other closed issues concerning similar behavior (setJumpList/setUserTasks not working); however, this issue is specifically about the inconsistent behavior of those functions depending on the timing and number of calls made earlier in the application lifecycle.
### Edit:
additional note: the reproduction gist was made using Electron Fiddle, to make reproducing on your end easier. | platform/windows,bug :beetle:,has-repro-gist,stale,31-x-y | low | Critical |
2,598,061,684 | pytorch | `torch.package` bug | ### 🐛 Describe the bug
Triggered the assert trying to `package` a model.
https://github.com/pytorch/pytorch/blob/666572d819945ccdb66da7f73e44c012b667b182/torch/package/package_exporter.py#L984
Specifically
```python
def _write(self, filename, str_or_bytes):
    if filename in self._written_files:
        raise AssertionError(
            f"Tried to write file '{filename}', but it already exists in this archive. "
            "Please file a bug."
        )
    self._written_files.add(filename)
```
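For context, the guard can be exercised without torch; a minimal stand-in (not PyTorch code) shows that emitting the same archive path twice is exactly what trips it:

```python
class MiniExporter:
    """Stand-in for the duplicate-write guard in PackageExporter."""

    def __init__(self):
        self._written_files = set()

    def _write(self, filename, str_or_bytes):
        # Refuse to write the same archive path twice.
        if filename in self._written_files:
            raise AssertionError(
                f"Tried to write file '{filename}', but it already exists in this archive."
            )
        self._written_files.add(filename)

exporter = MiniExporter()
exporter._write("pkg/model.py", b"...")
try:
    exporter._write("pkg/model.py", b"...")  # same path twice -> AssertionError
except AssertionError as err:
    print(err)
```

So the question is which part of the export is emitting the same archive path twice for this model.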
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A1000 6GB Laptop GPU
Nvidia driver version: 537.77
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.0
[pip3] torchmetrics==1.4.1
[pip3] triton==3.1.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
``` | oncall: package/deploy | low | Critical |
2,598,077,909 | deno | Issue with node:readline and/or incompatibility between Deno 2 and Inquirer | Hi, sorry to open a vague issue (but there's somewhat of a code repro.) With Deno 2 announcement, devs expect packages targeting Node to work inside Deno. So I received this bug report on Inquirer's repo https://github.com/SBoudrias/Inquirer.js/issues/1574
The repro code is roughly the following (minus the import, which I'm not sure how it would translate for Deno):
```ts
import { select, confirm } from '@inquirer/prompts';
const answer = await select<string>({
message: 'Which type of project?',
choices: ['Docker', 'PHP', 'Laravel', 'Laravel Sail', 'Java', 'Python', 'React', 'Flutter', 'Custom'],
});
console.log(answer); // Docker
const shouldProceed = await confirm({ message: "Do you want to proceed?" }); // Here it asks for Enter twice before accepting the confirm answer
console.log("Should proceed?", shouldProceed);
```
And the dev included that screencast
[Screencast from 2024-10-05 23-49-18.webm](https://github.com/user-attachments/assets/6388aff2-1a6d-4da8-ae3b-674842782fd9)
So, to me it looks like a potential issue when readline instances run one after the other 🤷🏻 Happy to provide further help, but I understand it's hard to troubleshoot an underspecified problem (I have no idea which version of Deno this ran into). I figured it was still worth a shoutout here in case you're unaware of some remaining Node compatibility issues. | bug,node compat | low | Critical |
2,598,079,456 | deno | [nodemon] Internal watch failed: Cannot convert a BigInt value to a number | ## Version
```bash
deno --version
deno 2.0.2 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.13-rusty
typescript 5.6.2
```
## My Issue
OK, I installed Deno today, opened one of my projects, and ran `deno task dev`,
but I ran into this error from `nodemon`:
`[nodemon] Internal watch failed: Cannot convert a BigInt value to a number`
## Screenshot
I also tried running `bun dev` and it worked, but it failed with `deno task dev`:

## package.json
```json
// ...
"scripts": {
"start": "node app.js",
"dev": "nodemon app.js",
"build": "node build.js"
},
// ...
"devDependencies": {
"nodemon": "^2.0.15"
},
"engines": {
"node": "18.7.0",
"npm": "8.15.0"
}
```
## nodemon.json
```json
{
"verbose": true,
"ignore": ["*.test.js", "node_modules"],
"watch": ["src"],
"ext": "js json md pug"
}
``` | bug,node compat | low | Critical |
2,598,095,584 | react | [Compiler Bug]: Memoizing a debounced function complains about ref access | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgBMEAzAQygBsCSoA7OXASwjvwFkBPAQU0wAoAlPmAAdNvjiswBQhAC2AUUoAlUvgC8+NWUYA6KGARqSQ8fknSClJjIDyUXAhgnN2hLtwGjJsxPOWdDL4APpgcDAQlJQAKhAAQhC4uApuhggAwmTRAEa6ANb8QpoAfCIBFnJKqqR6cLAwCHRe4ZHRcfxiEhYWKZjI+Db2js4mdQ1NLRFRlAASCEwA5gAWuAA0FT05CMtkAG4sMAOi4PIQScsnG934AL6CANwBt2v4ANoAuo8BUkEErTM4olkqktOl2AgzvxNsUNGViDloAwEPwwtN2gkkil5K8AIwABnxgmuPXeaLasUxIPkHxJ+G+dHEAUauFgbAAPIQmHsSvNohB8AB1HCUQjsgD0XJ5Tzot3EIFuQA
### Repro steps
Attempting to memoize a debounced function that access a ref gives me this react compiler lint violation:
> Ref values (the `current` property) may not be accessed during render.
However, the underlying function isn't called during render; it's only referenced, and it's called only when the memoized, debounced function runs. In my case, that happens in an event handler, which is safe.
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1 | Type: Bug,Component: Optimizing Compiler | low | Critical |
2,598,097,364 | pytorch | [export] Torch custom class export issue | ### 🐛 Describe the bug
We are trying to implement the functionality to export custom classes within Torch-TensorRT. TensorRT engines are expressed as custom classes (https://github.com/pytorch/TensorRT/blob/re-export/core/runtime/register_jit_hooks.cpp#L77-L83)
Following https://docs.google.com/document/d/18fBMPuOJ0fY5ZQ6YyrHUppw9FA332CpNtgB6SOIgyuA/edit
1) I implement the flatten call here : https://github.com/pytorch/TensorRT/blob/re-export/core/runtime/TRTEngine.cpp#L321-L336
2) I register a fake class with the unflatten call : https://github.com/pytorch/TensorRT/blob/re-export/py/torch_tensorrt/dynamo/runtime/register_fake_class.py#L16-L21
When I re-export the graph with these compiled TRT engines, I face an error, as shown in this [reexport.log](https://github.com/user-attachments/files/17438975/reexport.log).
Error msg:
```py
Traceback (most recent call last):
File "/home/dperi/Downloads/TensorRT/test.py", line 29, in <module>
torch_tensorrt.save(trt_gm, "./trt.ep", inputs=inputs, retrace=True)
File "/home/dperi/Downloads/TensorRT/py/torch_tensorrt/_compile.py", line 534, in save
exp_program = torch.export.export(
^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/__init__.py", line 366, in export
return _export(
^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 1013, in wrapper
raise e
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 986, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/exported_program.py", line 116, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 1954, in _export
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 1744, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 642, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 1674, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1262, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1497, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 524, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 625, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 194, in inner
flat_f_outs = f(*flat_f_args)
^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 863, in functional_call
out = mod(*args[params_len:], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/_trace.py", line 1657, in forward
tree_out = torch.fx.Interpreter(self._export_root).run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/interpreter.py", line 320, in call_module
return submod(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/Downloads/TensorRT/py/torch_tensorrt/_features.py", line 56, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/dperi/Downloads/TensorRT/py/torch_tensorrt/dynamo/runtime/_TorchTensorRTModule.py", line 275, in forward
outputs: List[torch.Tensor] = torch.ops.tensorrt.execute_engine(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_ops.py", line 1122, in __call__
return _call_overload_packet_from_python(self, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_ops.py", line 1134, in _call_overload_packet_from_python
torch_function_called, ret = torch._C._maybe_call_torch_function_for_op_packet(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/non_strict_utils.py", line 551, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_ops.py", line 1122, in __call__
return _call_overload_packet_from_python(self, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_ops.py", line 1167, in _call_overload_packet_from_python
raise RuntimeError(err_msg)
RuntimeError: Fail to match any TorchBindOverload of tensorrt.execute_engine with following exceptions:
Overload name default:
tensorrt::execute_engine() Expected a value of type '__torch__.torch.classes.tensorrt.Engine (of Python compilation unit at: 0)' for argument '_1' but instead found type 'FakeScriptObject'.
Position: 1
Value: Torch-TensorRT TensorRT Engine:
Name: _run_on_acc_0_engine
Inputs: [
id: 0
name: x
shape: [1, 10]
dtype: Float
]
Outputs: [
id: 0
name: output0
shape: [1, 40]
dtype: Float
]
Device: Device(ID: 0, Name: NVIDIA GeForce RTX 3080 Ti, SM Capability: 8.6, Type: GPU)
Hardware Compatibility: Disabled
Target Platform: linux_x86_64
Declaration: tensorrt::execute_engine(Tensor[] _0, __torch__.torch.classes.tensorrt.Engine _1) -> Tensor[] _0
Cast error details: _1 is expected to be a FakeScriptObject of __torch__.torch.classes.tensorrt.Engine
```
During the re-export, this forward function gets called https://github.com/pytorch/TensorRT/blob/re-export/py/torch_tensorrt/dynamo/runtime/_TorchTensorRTModule.py#L275 and the self.engine object here is a FakeScriptObject
```py
(Pdb) self.engine
<torch._library.fake_class_registry.FakeScriptObject object at 0x7557289bf010>
(Pdb) self.engine.real_obj
Torch-TensorRT TensorRT Engine:
Name: _run_on_acc_0_engine
Inputs: [
id: 0
name: x
shape: [1, 10]
dtype: Float
]
Outputs: [
id: 0
name: output0
shape: [1, 40]
dtype: Float
]
Device: Device(ID: 0, Name: NVIDIA GeForce RTX 3080 Ti, SM Capability: 8.6, Type: GPU)
Hardware Compatibility: Disabled
Target Platform: linux_x86_64
(Pdb) input_tensors
[FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 10)),
device='cuda:0'))]
```
Any suggestions on how to resolve the issue in the log ?
cc: @angelayi
### Versions
[pip3] pytorch-lightning==2.0.7
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] pytorch-triton-rocm==2.2.0
[pip3] pytorchvideo==0.1.5
[pip3] torch==2.6.0.dev20241011+cu124
[pip3] torch_tensorrt==2.6.0.dev0+cb03ca14a
[pip3] torchmetrics==1.4.0.post0
[pip3] torchprofile==0.0.4
[pip3] torchsurgeon==0.1.2
[pip3] torchvision==0.20.0.dev20241011+cu124
[pip3] triton==3.0.0
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,598,116,810 | kubernetes | [FG:InPlacePodVerticalScaling] Improve allocated resources checkpointing | There is some low-hanging fruit for improving the checkpointing of allocated resources:
1. When the pod allocation is updated, the Kubelet always calls set allocation at the pod level, but the status manager sets it for each container, calling through to the checkpoint state. This causes the checkpoint file to be written in quick succession for each container in the pod. It would be better to just update the checkpoint once per-pod.
- set pod allocation for each container https://github.com/kubernetes/kubernetes/blob/e8b59afec65762ec33c683f376d5a1e5959476c2/pkg/kubelet/status/status_manager.go#L254
- write the checkpoint for each container: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/status/state/state_checkpoint.go#L138
2. Failure to write the checkpoint should be logged, but not block the resize from proceeding with the in-memory cache. The checkpoint is only used on kubelet restart, and the failure state for that looks the same whether or not syncing the pod proceeds. Better to treat the checkpointing as best effort.
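Both points can be sketched language-agnostically (Python here for brevity; the names are illustrative, not kubelet APIs): update the in-memory allocation for every container, write the checkpoint once per pod, and treat checkpoint write failures as best-effort:

```python
class AllocationState:
    """Illustrative sketch: per-pod checkpointing with best-effort writes."""

    def __init__(self, write_checkpoint):
        self._alloc = {}                 # (pod_uid, container) -> resources
        self._write_checkpoint = write_checkpoint
        self.checkpoint_writes = 0

    def set_pod_allocation(self, pod_uid, container_resources):
        # Update the in-memory cache for every container in the pod.
        for container, resources in container_resources.items():
            self._alloc[(pod_uid, container)] = resources
        try:
            # One checkpoint write per pod, not one per container.
            self._write_checkpoint(dict(self._alloc))
            self.checkpoint_writes += 1
        except OSError as err:
            # Best-effort: the checkpoint is only read on kubelet restart,
            # so log and keep going with the in-memory cache.
            print(f"failed to checkpoint allocation (continuing): {err}")

state = AllocationState(write_checkpoint=lambda snapshot: None)
state.set_pod_allocation("pod-1", {"app": "500m", "sidecar": "100m"})
print(state.checkpoint_writes)  # 1 write for a two-container pod
```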
/kind feature
/sig node
/priority important-longterm | sig/node,kind/feature,priority/important-longterm,triage/accepted | low | Critical |
2,598,135,446 | TypeScript | TypeError: Cannot read properties of undefined (reading 'length') at elaborateDidYouMeanToCallOrConstruct | ### 🔎 Search Terms
undefined length elaborateDidYouMeanToCallOrConstruct
### 🕗 Version & Regression Information
- This changed between versions 5.5.4 and 5.6.2
- I was unable to test this on the latest version because `npm i typescript@next` never completed, spamming the terminal with the following error for about 20 minutes before I killed it:
```
npm warn ERESOLVE overriding peer dependency
npm warn While resolving: coracle@0.4.14
npm warn Found: typescript@5.7.0-dev.20241018
npm warn node_modules/typescript
npm warn typescript@"5.7.0-dev.20241018" from the root project
npm warn
npm warn Could not resolve dependency:
npm warn peerOptional typescript@">=5.0.0" from nostr-tools@2.8.1
npm warn node_modules/nostr-tools
npm warn nostr-tools@"^2.8.1" from the root project
npm warn 4 more (@welshman/content, @welshman/dvm, @welshman/signer, @welshman/util)
```
### ⏯ Playground Link
_No response_
### 💻 Code
I'm not able to isolate the problem because the compiler is crashing without any information about my source files. You can clone the project from https://github.com/coracle-social/coracle and reproduce simply by running `git checkout 5e70d4ed499b6721f41318644b6250713085c923; npm i; npx tsc`.
### 🙁 Actual behavior
Running `npx tsc` in my project gives me:
```
/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:120987
throw e;
^
TypeError: Cannot read properties of undefined (reading 'length')
at elaborateDidYouMeanToCallOrConstruct (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:62406:62)
at elaborateError (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:62360:10)
at checkTypeRelatedToAndOptionallyElaborate (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:62344:24)
at getSignatureApplicabilityError (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:74383:14)
at resolveCall (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:74779:25)
at resolveCallExpression (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:75194:12)
at resolveSignature (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:75587:16)
at getResolvedSignature (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:75613:18)
at checkCallExpression (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:75724:23)
at checkExpressionWorker (/Users/me/nostr/coracle/node_modules/typescript/lib/tsc.js:79133:16)
Node.js v20.18.0
```
On my Debian 11 VPS, I get a similar error:
```
/home/me/node_modules/typescript/lib/tsc.js:120986
throw e;
^
TypeError: Cannot read properties of undefined (reading 'length')
at elaborateDidYouMeanToCallOrConstruct (/home/me/node_modules/typescript/lib/tsc.js:62405:62)
at elaborateError (/home/me/node_modules/typescript/lib/tsc.js:62360:10)
at checkTypeRelatedToAndOptionallyElaborate (/home/me/node_modules/typescript/lib/tsc.js:62344:24)
at getSignatureApplicabilityError (/home/me/node_modules/typescript/lib/tsc.js:74382:14)
at resolveCall (/home/me/node_modules/typescript/lib/tsc.js:74778:25)
at resolveCallExpression (/home/me/node_modules/typescript/lib/tsc.js:75193:12)
at resolveSignature (/home/me/node_modules/typescript/lib/tsc.js:75586:16)
at getResolvedSignature (/home/me/node_modules/typescript/lib/tsc.js:75612:18)
at checkCallExpression (/home/me/node_modules/typescript/lib/tsc.js:75723:23)
at checkExpressionWorker (/home/me/node_modules/typescript/lib/tsc.js:79132:16)
Node.js v18.20.4
```
### 🙂 Expected behavior
I expect typescript to not crash.
### Additional information about the issue
I completely re-installed nvm/npm/node, removed all node_modules folders on my system, removed the project directory and re-cloned, just to make sure my environment wasn't dirty somehow. | Needs More Info | low | Critical |
2,598,139,450 | pytorch | xpu: clarify which Intel GPUs are supported by PyTorch 2.5 | Can it, please, be clarified which Intel GPUs are supported by XPU backend in PyTorch 2.5? I am reviewing 2 dedicated articles:
* https://pytorch.org/docs/2.5/notes/get_start_xpu.html on PyTorch side has a note that "Intel® Data Center GPU Max Series" is supported for Linux and "Intel Client GPUs" are supported on Linux/Windows. The first one is clear and matches [Intel® Data Center GPU Max Series](https://ark.intel.com/content/www/us/en/ark/products/series/232874/intel-data-center-gpu-max-series.html). The second one (Intel Client GPUs) is ambiguous and might mean a few different generations of Intel GPUs. So, further clarification is needed.
* https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-5.html - this document is on Intel side and provides further clarification on prerequisite setup instructions and supported hardware. Here, support of Max Series is also clear. However, for Intel Client GPUs it raises some questions.
I will list my questions on the 2nd document below. But first I will just post my takeaway from this document on supported Intel GPUs for PyTorch 2.5 - **please, clarify whether this is correct or not (and whether the document needs polishing)**?
| GPU | OS | Notes |
| --- | --- | --- |
| [Intel® Data Center GPU Max Series](https://ark.intel.com/content/www/us/en/ark/products/series/232874/intel-data-center-gpu-max-series.html) | Linux (OSes as described on dgpu-docs.intel.com) | Discrete Server GPU card |
| [Products formerly Alchemist](https://ark.intel.com/content/www/us/en/ark/products/codename/226095/products-formerly-alchemist.html) | Ubuntu 22.04, Windows, WSL2 | Discrete Client GPU cards, [Intel® Arc™ A-Series Graphics](https://ark.intel.com/content/www/us/en/ark/products/series/227957/intel-arc-a-series-graphics.html) |
| [Products formerly Meteor Lake](https://ark.intel.com/content/www/us/en/ark/products/codename/90353/products-formerly-meteor-lake.html) | Ubuntu 22.04, Windows, WSL2 | Integrated graphics |
| [Products formerly Lunar Lake](https://ark.intel.com/content/www/us/en/ark/products/codename/213792/products-formerly-lunar-lake.html) | **not supported** | Integrated graphics |
| [Products formerly Arrow Lake](https://ark.intel.com/content/www/us/en/ark/products/codename/225837/products-formerly-arrow-lake.html) | **not supported** | Integrated graphics |
Here are questions/concerns I have on the document:
* The document was updated a few times during the last few days without access to version history. Frequent updates are understandable since it's being prepared for PyTorch 2.5, but I am pretty sure that in one of the versions it did talk about support for Lunar Lake, but now I don't see this GPU mentioned.
* Document does not have links to Intel GPU descriptions, for example on ark.intel.com. It also uses names for the GPUs which are ambiguous and don't match descriptions on ark.intel.com and other Intel registry materials. It also uses lower-level names such as DG2 which is typically used in driver sources and might be known only to a limited group of people. Here is a snapshot from the current version of the document:
| OS Version | Supported Client GPUs |
| -- | -- |
| Ubuntu 22.04, WSL2 (Ubuntu 22.04) | Intel® Arc™ Graphics family (Codename DG2) |
| Ubuntu 24.04, WSL2 (Ubuntu 24.04) | Intel® Core™ Ultra processor family with Intel® Graphics (Codename Meteor Lake)<br>Intel® Arc™ Graphics family (Codename DG2) |
| Windows 11 Family, Windows 10 (22H2) | Intel® Core™ Ultra processor family with Intel® Graphics (Codename Meteor Lake)<br>Intel® Arc™ Graphics family (Codename DG2) |
* There are also questions about how supported platforms are programmed to build SYCL eager mode kernels:
* [Here](https://github.com/intel/torch-xpu-ops/blob/7e3d00acea9f0d3728048a5b2743de20d55c64ba/cmake/BuildFlags.cmake#L122) for Linux: `pvc,xe-lpg,ats-m150`
* [Here](https://github.com/intel/torch-xpu-ops/blob/7e3d00acea9f0d3728048a5b2743de20d55c64ba/cmake/BuildFlags.cmake#L120) for Windows `ats-m150,lnl-m,mtl-u,mtl-h`
* `ats-m150` means the 170 variant of [Intel® Data Center GPU Flex Series](https://ark.intel.com/content/www/us/en/ark/products/series/230021/intel-data-center-gpu-flex-series.html) - it's not on the supported list. And DG2 is not on the prebuilt kernels list. Likely due to architecture similarity `ats-m150` will allow DG2 to work, but are there any consequences?
* `xe-lpg` means some GPU family. Likely Meteor Lake is in it, but are there any other GPUs included? If yes - which?
* `lnl-m` suggests that Lunar Lake might be supported at least on Windows. Not sure about Linux - see prev. question, what's `xe-lpg`?
* Arrow Lake does not seem to be on the supported list unless also included in `xe-lpg`
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @vlad-penkin
cc @svekars @brycebortree @sekyondaMeta @gujinghui @EikanWang @fengyuan14 @guangyey | module: docs,triaged,module: xpu | low | Major |
2,598,144,671 | transformers | Add support for Janus model from DeepSeek AI | ### Model description
Janus is an autoregressive framework that unifies multimodal understanding and generation. Unlike previous approaches that use a single visual encoder for both tasks, Janus decouples visual encoding into separate pathways while utilizing a unified transformer architecture for processing. This decoupling addresses the conflict between visual encoder roles in understanding and generation, enhancing flexibility and performance.
Key features:
- Unified framework for multimodal understanding and generation
- Decoupled visual encoding pathways
- Single, unified transformer architecture for processing
- Improved performance in multimodal understanding tasks
- Flexibility to select optimal encoding methods for each component
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The Janus model is developed by DeepSeek AI. Here are the relevant links for implementation:
Paper: [Janus: Bridging the Gap Between Multimodal Understanding and Generation](https://arxiv.org/pdf/2410.13848)
GitHub repository: [deepseek-ai/Janus](https://github.com/deepseek-ai/Janus) | New model | low | Major |
2,598,144,759 | flutter | Audit GN flags that are not used in any checked-in CI configuration | Forked from https://github.com/flutter/flutter/issues/156909.
Today, we have _many_ GN flags across the project. Some are load-bearing, some are debug only, some are likely unused.
We should audit what flags are invoked by CI configurations checked into [`engine/ci/builders`](https://github.com/flutter/engine/tree/main/ci/builders). @zanderso mentioned that [`engine_build_configs/bin/check.dart`](https://github.com/flutter/engine/blob/main/tools/pkg/engine_build_configs/bin/check.dart) might be a starting point (not where the code would live, but as an example) for building such a diagnostic tool.
I'd also be supportive of adding it as a (hidden by default) `et audit` command FWIW. | P3,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Critical |
2,598,165,978 | godot | Node that isn't a tool but was created by tool in editor runs _Process until restart | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
dotnet --info
```
.NET SDK:
Version: 8.0.402
Commit: 70aa751718
Workload version: 8.0.400-manifests.b6724b7a
MSBuild version: 17.11.4+37eb419ad
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22631
OS Platform: Windows
RID: win-x64
Base Path: c:\program files\dotnet\sdk\8.0.402\
.NET workloads installed:
Configured to use loose manifests when installing new manifests.
There are no installed workloads to display.
Host:
Version: 8.0.8
Architecture: x64
Commit: 08338fcaa5
.NET SDKs installed:
8.0.402 [c:\program files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 8.0.8 [c:\program files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.15 [c:\program files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.11 [c:\program files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.2 [c:\program files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.8 [c:\program files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.15 [c:\program files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 6.0.11 [c:\program files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.2 [c:\program files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.8 [c:\program files\dotnet\shared\Microsoft.WindowsDesktop.App]
Other architectures found:
x86 [C:\Program Files (x86)\dotnet]
registered at [HKLM\SOFTWARE\dotnet\Setup\InstalledVersions\x86\InstallLocation]
Environment variables:
Not set
global.json file:
Not found
Learn more:
https://aka.ms/dotnet/info
Download .NET:
https://aka.ms/dotnet/download
```
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6094) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description
When creating a node from a C# tool script, even if the created node is not a tool, it runs its `_Process` function as if it were a tool until the editor is restarted.
I may be just misunderstanding how tools work, but I thought that nodes that are not tools should never be able to run code in the editor.
### Steps to reproduce
1. Create blank scene
2. Create Tool:
```cs
using Godot;
using System;
[GlobalClass, Tool]
public partial class ThisIsATool : Node3D
{
[Export] public bool Toggle {
get => _toggle;
set {
_toggle = value;
CreateBrokenNode();
}
}
private bool _toggle = false;
// Called when the node enters the scene tree for the first time.
public override void _Ready()
{
}
private void CreateBrokenNode()
{
var root = EditorInterface.Singleton.GetEditedSceneRoot();
var node = new ThisIsNotATool();
AddSibling(node);
node.SetOwner(root);
}
// Called every frame. 'delta' is the elapsed time since the previous frame.
public override void _Process(double delta)
{
}
}
```
3. Create a type that isn't a tool
```cs
using Godot;
using System;
[GlobalClass]
public partial class ThisIsNotATool : Node
{
// Called when the node enters the scene tree for the first time.
public override void _Ready()
{
}
// Called every frame. 'delta' is the elapsed time since the previous frame.
public override void _Process(double delta)
{
GD.Print("This is not a tool");
}
}
```
4. Hitting the toggle will cause a node to be created and "This is not a tool" to be spammed in the editor.
5. Restart editor, problem goes away.
### Minimal reproduction project (MRP)
[brokentool.zip](https://github.com/user-attachments/files/17439481/brokentool.zip)
| bug,topic:editor,topic:dotnet | low | Critical |
2,598,184,503 | pytorch | Empty Dimensions returned in `aten::_trilinear` | ### 🐛 Describe the bug
I'm working on a lowering for `aten::_trilinear` in the torch-mlir repo, and believe that I have found a discrepancy in the output when a dimension lies in the triple intersection between `expand1`, `expand2`, and `expand3` operands.
As specified in the comment of `aten::_trilinear` at `aten/src/ATen/native/Linear.cpp`:
```text
// _trilinear computes a trilinear einstein sum with an unrolled dimension
// the result is `(i1.unsqueeze(expand1)*i2.unsqueeze(expand2)*i2.unsqueeze(expand3)).sum(sumdim)`
// the computation is unrolled in the unroll_dim dimension
// its main purpose is to unify the computations in bilinear and bilinear_backward
```
In most cases, `torch.ops.aten._trilinear` corresponds one-to-one with the output of `torch.einsum`, which has the same definition of `(i1.unsqueeze(expand1)*i2.unsqueeze(expand2)*i2.unsqueeze(expand3)).sum(sumdim)`, when given proper operands:
For example:
```python
>>> a, b, c = [torch.rand(2, 3, 4) for _ in range(3)]
>>> torch._trilinear(a, b, c, [], [], [], []) == torch.einsum('abc,abc,abc->abc', a, b, c)
tensor([[[True, True, True, True],
[True, True, True, True],
[True, True, True, True]],
[[True, True, True, True],
[True, True, True, True],
[True, True, True, True]]])
>>> torch._trilinear(a, b, c, [], [], [], [0, 2]) == torch.einsum('abc,abc,abc->b', a, b, c)
tensor([True, True, True])
```
However, in cases where there is a dimension shared between all `expand` lists that is not included in `sumdim`, _trilinear appears to output an empty tensor, with the intersecting dimension set to zero, while einsum outputs a tensor with that dimension set to 1:
```python
# Trilinear
>>> result_trilinear = torch._trilinear(a, b, c, [0], [0], [0], [2])
>>> result_trilinear.shape
torch.Size([0, 2, 4])
# Equivalent Einsum
>>> a_unsqueezed, b_unsqueezed, c_unsqueezed = a.unsqueeze(0), b.unsqueeze(0), c.unsqueeze(0)
>>> result_einsum = torch.einsum('abcd,abcd,abcd->abd', a_unsqueezed, b_unsqueezed, c_unsqueezed)
>>> result_einsum.shape
torch.Size([1, 2, 4])
```
This occurs anytime that a dimension is included in expand1, expand2, and expand3, but not included in sumdim:
```python
>>> result = torch._trilinear(a, b, c, [0, 1], [0, 1], [0, 1], [2])
>>> result.shape
torch.Size([0, 0, 3, 4])
```
## Expected Result
I expected that torch._trilinear would have returned a tensor with those dimensions set to 1 instead of zero, the same way that torch.einsum returns a 1 for those dimensions.
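As a shape-level sanity check, the reference definition quoted above can be emulated with NumPy (a sketch, not torch's implementation; `trilinear_ref` is a hypothetical helper name, and the third operand is written as `i3` here):

```python
import numpy as np

def trilinear_ref(i1, i2, i3, expand1, expand2, expand3, sumdim):
    # (i1.unsqueeze(expand1) * i2.unsqueeze(expand2) * i3.unsqueeze(expand3)).sum(sumdim)
    for d in expand1:
        i1 = np.expand_dims(i1, d)
    for d in expand2:
        i2 = np.expand_dims(i2, d)
    for d in expand3:
        i3 = np.expand_dims(i3, d)
    out = i1 * i2 * i3
    return out.sum(axis=tuple(sumdim)) if sumdim else out

a = b = c = np.ones((2, 3, 4))
print(trilinear_ref(a, b, c, [0], [0], [0], [2]).shape)  # (1, 2, 4)
```

Under this reading, the unrolled dimension that sits in all three `expand` lists keeps size 1 (broadcasting), matching einsum's `(1, 2, 4)` rather than `_trilinear`'s `(0, 2, 4)`.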
### Versions
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] onnx==1.16.1
[pip3] torch==2.6.0.dev20240916+cpu
[pip3] torchvision==0.20.0.dev20240916+cpu
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,module: linear algebra | low | Critical |
2,598,184,659 | TypeScript | Cannot reassign undefined to the parameter of a function parameter destructuring with default value | ### 🔎 Search Terms
"function parameter destructuring", "destructured parameter", "named parameter", "undefined"
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about "Why Method Bivariance?"
### ⏯ Playground Link
https://www.typescriptlang.org/play/?target=0&ts=5.7.0-dev.20241018#code/GYVwdgxgLglg9mABGOAnAtgQwDYAVOqboCmUxqAYuNPGABQEDmAXIgEZxzbGZIA+icABNiwGGGJDEAXkRRUIYgEpEAbwBQiLYiaI+fWcNHjJAbnUBfdetCRYCRCIDO8kNBCpJ+QiTKVq9vSqOqiMMnIKxIgWrMFMrBxcPPyCYCJiElIWKhoAkLr6hmnGmeZWNgG0jsQuCu6eQgDyAA6BON5EpORUdrR0caHhrlExaiGMAPwJnNy80Tnq+YOFqekmQmXWtjQOzq71XgSdfj07YACM-eNDkTpO7DPJeqslktGx49NJcwJGGW-ZNSLAoGF7-DaWLaVXY1fZQDyHHxdfy9BAAJiuulGAxYD2+KT+62i4RxrGG8yBSzCK0JpUhQA
### 💻 Code
```typescript
function normalParameterFunction(arg: boolean | undefined = true) {
arg ||= undefined;
}
function destructuredParameterFunction({ arg = true }: { arg: boolean | undefined }) {
arg ||= undefined;
}
function destructuredOptionalParameterFunction({ arg = true }: { arg?: boolean }) {
arg ||= undefined;
}
```
### 🙁 Actual behavior
The `normalParameterFunction` works fine, but in `destructuredParameterFunction` and `destructuredOptionalParameterFunction`, I cannot assign `undefined` to `arg`.
> Type 'undefined' is not assignable to type 'boolean'.
### 🙂 Expected behavior
I know that if I specify the default value to the `arg`, the `arg` will no longer be `undefined`, but maybe I can reassign it to the `undefined`.
I guess it should have the same logic as the following:
```typescript
var arg: boolean | undefined = true;
arg ||= undefined;
```
### Additional information about the issue
* I don't know if it's really a bug or if it's a bad habit to change the value of a function parameter, but the function without parameter destructuring works as expected. Also, I hate recreating a new variable and coming up with a new name in order to change the value of a function parameter, even though assigning the parameter itself directly doesn't modify the variable outside the function.
* The above code is a real-world use case that aims to set `arg` to `undefined` when it is a falsy value. In fact, `arg ||= undefined` is not needed, just `arg = undefined` will trigger the issue, but in this case, the original value of `arg` is ignored and has no practical meaning.
* You can also pass the type check in disguise with the following code:
```typescript
function destructuredParameterFunction1({ arg = true as boolean | undefined }: { arg: boolean | undefined }) {
arg ||= undefined;
}
function destructuredParameterFunction2({ arg }: { arg: boolean | undefined } = { arg: true }) {
arg ||= undefined;
}
``` | Suggestion,Awaiting More Feedback | low | Critical |
2,598,194,485 | pytorch | tlparse's fx_graph_runnable should include custom op info | From talking with @eellison and @ezyang separately.
## Motivation
The fx_graph_runnable contains the graph but may have custom operators that require us to load libraries. It would be nice if we:
1) put the torch.ops.load_library(blah) calls into the fx_graph_runnable
2) also outputted a build target that includes the dependencies and made it easy to run the fx_graph_runnable.
## Pitch
For (1): torch.ops.load_library can record which custom ops got loaded, so we know which torch.ops.load_library calls to ultimately generate.
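A minimal sketch of the recording idea in (1), with hypothetical names (the real call is `torch.ops.load_library`; this is not tlparse's or PyTorch's actual implementation):

```python
loaded_libraries = []

def load_library(path):
    # Hypothetical recording wrapper; real loading would also happen here.
    loaded_libraries.append(path)

def fx_graph_runnable_preamble():
    # Emit the load calls to paste at the top of fx_graph_runnable.
    return "\n".join(
        f"torch.ops.load_library({path!r})" for path in sorted(set(loaded_libraries))
    )

load_library("/data/my_custom_ops.so")
load_library("/data/my_custom_ops.so")  # duplicate loads collapse to one line
print(fx_graph_runnable_preamble())
# torch.ops.load_library('/data/my_custom_ops.so')
```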
For (2): We need some help from the environment to give us a build target we can use?
cc @ezyang @chauhang @penguinwu @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,oncall: fx,module: pt2-dispatcher | low | Minor |
2,598,202,906 | rust | rustdoc: Footnotes don't work well when used on multiple doc comments that share page. | ```rust
pub struct Foo;
impl Foo {
/// Link 1 [^1]
///
/// [^1]: Hiya
pub fn l1(){}
/// Link 2 [^2]
///
/// [^2]: Biya
pub fn l2() {}
}
```

Not only is this ugly, but both footnotes use the same link/number, despite being separate in the source code.
I think the right thing to do here is to show all footnotes for a page in one place (at the bottom). This may have some subtle interactions with when markdown content appears on multiple pages (eg summaries, trait methods), but I've not looked into the details yet.
Originally inspired by this output:
[](https://docs.rs/rustdoc-types/0.32.0/rustdoc_types/struct.Trait.html#structfield.is_dyn_compatible) | T-rustdoc,C-enhancement,A-rustdoc-ui,T-rustdoc-frontend | low | Minor |
2,598,213,444 | flutter | [camera] Failing integration tests for `camera_android_camerax` due to `video_player` | A couple of tests for `camera_android_camerax` are failing with this error:
```
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: Playback error
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: androidx.media3.exoplayer.ExoPlaybackException: Source error
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at androidx.media3.exoplayer.ExoPlayerImplInternal.handleIoException(ExoPlayerImplInternal.java:736)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at androidx.media3.exoplayer.ExoPlayerImplInternal.handleMessage(ExoPlayerImplInternal.java:706)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at android.os.Handler.dispatchMessage(Handler.java:102)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at android.os.Looper.loopOnce(Looper.java:201)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at android.os.Looper.loop(Looper.java:288)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at android.os.HandlerThread.run(HandlerThread.java:67)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: Caused by: androidx.media3.exoplayer.source.UnrecognizedInputFormatException: None of the available extractors (FlvExtractor, FlacExtractor, WavExtractor, FragmentedMp4Extractor, Mp4Extractor, AmrExtractor, PsExtractor, OggExtractor, TsExtractor, MatroskaExtractor, AdtsExtractor, Ac3Extractor, Ac4Extractor, Mp3Extractor, AviExtractor, JpegExtractor, PngExtractor, WebpExtractor, BmpExtractor, HeifExtractor, AvifExtractor) could read the stream.{contentIsMalformed=false, dataType=1}
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at androidx.media3.exoplayer.source.BundledExtractorsAdapter.init(BundledExtractorsAdapter.java:108)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at androidx.media3.exoplayer.source.ProgressiveMediaPeriod$ExtractingLoadable.load(ProgressiveMediaPeriod.java:1060)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at androidx.media3.exoplayer.upstream.Loader$LoadTask.run(Loader.java:421)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:637)
10-17 13:57:17.100 25044 30854 E ExoPlayerImplInternal: at java.lang.Thread.run(Thread.java:1012)
10-17 13:57:17.110 25044 25044 I flutter : ══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
10-17 13:57:17.110 25044 25044 I flutter : The following PlatformException was thrown running a test:
10-17 13:57:17.110 25044 25044 I flutter : PlatformException(VideoError, Video player had error androidx.media3.exoplayer.ExoPlaybackException:
10-17 13:57:17.110 25044 25044 I flutter : Source error, null, null)
10-17 13:57:17.110 25044 25044 I flutter :
10-17 13:57:17.110 25044 25044 I flutter : When the exception was thrown, this was the stack
10-17 13:57:17.110 25044 25044 I flutter :
10-17 13:57:17.110 25044 25044 I flutter : The test description was:
10-17 13:57:17.110 25044 25044 I flutter : Pause and resume video recording
10-17 13:57:17.110 25044 25044 I flutter : ════════════════════════════════════════════════════════════════════════════════════════════════════
10-17 13:57:17.110 25044 25044 I flutter : 00:18 +5 -1: Pause and resume video recording [E]
10-17 13:57:17.111 25044 25044 I flutter : Test failed. See exception logs above.
10-17 13:57:17.111 25044 25044 I flutter : The test description was: Pause and resume video recording
10-17 13:57:17.111 25044 25044 I flutter :
10-17 13:57:17.111 25044 25044 I flutter : 00:18 +5 -2: (tearDownAll)
```
Failing Roll: https://github.com/flutter/packages/pull/7888
[Full Logs](https://ff68c23fd57da5880ae7632c3bde42c51e126492342cc226f260403-apidata.googleusercontent.com/download/storage/v1/b/flutter_firebase_testlab_staging/o/plugins_android_test%2Fpackages%2Fcamera%2Fcamera_android_camerax%2Funknown_build%2Fcf0cfa00-3d32-42eb-8cfe-d24673e344ce%2Fexample%2F0%2Fpanther-33-en-portrait%2Flogcat?jk=AdvOr8v5Xa_9MQIWP25I-tWyfHLUnGBTH-QRxEOHnLySFnL4ixIyVvv4jgHeQMWhJpiLL3li8gSRls9BgAEcL1l9jElw_-koY7zx3tHhNI_r47_oyFd0itVarxeUhny8PhOJQSJoJzWXG90SaUrQczcSAyDA3jlzjSXk7d3A8PKHV6MH5BZ8FuV6jtJyeoRyrQNVs9BNhPRFo54N33nD86uKVZPMOhaUO0nYRDv5AyqnPwrsk2aYQeY5ZvtCBo0cvIVakcrMQs7_aLhna06qTgOjm79TjGEK-sC5tMxsD1zJKkgiS7WQOixG_OPRjC-fOSdf0HXSTq9YqHfEEucMzn3L2Ibd9kaLmOEQBEqvJyh6joRRAFKM23lpdQLRp-R1KfRkLgcY-WK8olqvHttd53V_HcgR_MAWXVBtZmTCPU3btd2QV4VR8nckoA23iUGZww0ApMXleJ19AlUmpByyHpahexu5e8l8yH96jgsjr2x4buezqRa-yAx-qLrmygzYid0B0IkQLxO2AW1GRi0H5m2JMEPnxrctCxZTpC00oaR1_12J4aRLQfujj4B5WHRCndKJzj33kz1bnOzl7YdRZVkBWmI7K6sMn91-CWxe22B8qspMfqIEv-LNUStjwb0k0PeUEh9Z5HDOrR6mny3yA6bcmSULA-JdUvxU_degv8KI-g_oa28cH704PDy6kyld7el-tFBl0Xmt2bUfQ0e2hn7xnWQ7vpnDryFYpHS-fofROMZDT6gZCmypg1wJxFNO7bvdhTrY8Lt97TU12ZPeF4w0TA0ReFMZxNRu92S6grB0Ck2InRIyQQHCsie_Y39Bki15pTR2dqlJNLuKukF_dp7Dqdc8sBa43dmfbNUIyHYLGzjiJh6JLAbq_dB_xFe20yupwXSzlBwQ6uo-g72lz4oZEOoK6ns9AKToS4Ci_BZwKkc-inHMvIahfkOwG1QHixe6SX5UEgr24UmvRw6BHyyZti4BaK7N-DGd9cFCRy_hPJz39FC2Ct34O9JfC4kLNI-zBqtuAqA1qvglmQQUUJFeP5SAWdv6wAzbUTmWtcj7M2-gaNUyt06HatmLOs_QYZByYKqiBg5_nXUxUv5Jo8wUYKkgzBImbQ9cMOdUk6K_52BpZaP7nca-nZUaRQ1kBHsuMYyxa5LrsahfZ_ozvtbu48QvpxncZNt-Rsf5LWaaGJQzMUThjXRbQSsdx3BWglSqGMytAMuKEqLW9p4P_uqB4wh3nEIJkKEYxtZIzGcvvdFU_ImXJ_pKja5S_Hc-XTFQ6B6R1AveJdlgNzrDyzQqBtZyr6QpJ7PMGqkq4MvSfHKTyvXvqWaoAkdyuNGxoeVfNGYfDmVbyMrkG4VpPsfFGq8_&isca=1) | platform-android,p: camera,p: video_player,package,team-android,fyi-ecosystem | low | Critical |
2,598,220,850 | ollama | Migrate off centos 7 for intermediate build layers in container image builds | # What
[Centos is dead](https://endoflife.date/centos), long live [centos stream (9)](https://endoflife.date/centos-stream)
Ollama should probably not be using centos 7 now that it is unsupported and at EOL.
# Why
AMD and Nvidia are no longer publishing updates to their centos 7 flavor of dependencies.
See also https://rocm.docs.amd.com/en/docs-6.2.0/about/release-notes.html
> ROCm 6.2.0 marks the end of support (EoS) for:
> ...
> CentOS 7.9
See also https://docs.nvidia.com/cuda/cuda-installation-guide-linux/ not listing centos anywhere.
See also: the last image Nvidia published for centos 7 was ~6 months ago: https://hub.docker.com/r/nvidia/cuda/tags?name=centos
# More info
Currently there are several intermediate build layers in the container image build which utilize centos 7:
- [cuda-11-build-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L15)
- [cuda-12-build-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L32)
- [rocm-build-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L85)
- [cpu-builder-amd64](https://github.com/ollama/ollama/blob/bf4018b9ecd56a5deff0c22ca2fba242a8f0101b/Dockerfile#L101)
- this one also has transitive layers which depend on it, `container-build-amd64`, `cpu-build-amd64`, `cpu_avx-build-amd64`, and `cpu_avx2-build-amd64`)
Looking at the various Nvidia and AMD docs, it seems like both support the latest EL 9 version, so I would probably try to migrate to EL9 (rockylinux 9 ) to get the latest compatible versions of core dependencies like gcc and also avoid needing to update for a long time (EL 9 EOL is several years away still).
As a quick POC, I was able to get the rocm build migrated to rocky8 with very low effort. This build performance tested the same as the current HEAD of ollama, though I did not run it through the full suite of unit tests.
| feature request,build | low | Major |
2,598,232,704 | pytorch | [ROCm] PyTorch Profiler Seg Fault on PyTorch Nightly | ### 🐛 Describe the bug
After running the following script, the trace profiler seg-faulted on torch nightly:
```
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity
model = models.resnet18().cuda()
inputs = torch.randn(2000, 3, 224, 224).cuda()
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
with record_function("model_inference"):
model(inputs)
prof.export_chrome_trace("resnet18_profile.json")
```
I ran a trace using the following commands:
```
curl -LO https://get.perfetto.dev/trace_processor
chmod +x ./trace_processor
./trace_processor --httpd PATH_TO_TRACE.json
```
Error Message:
```
[778.440] processor_shell.cc:1699 Trace loaded: 458.28 MB in 15.58s (29.4 MB/s)
[778.441] httpd.cc:102 [HTTP] Starting RPC server on localhost:9001
[778.441] httpd.cc:107 [HTTP] This server can be used by reloading https://ui.perfetto.dev and clicking on YES on the "Trace Processor native acceleration" dialog or through the Python API (see https://perfetto.dev/docs/analysis/trace-processor#python-api).
[788.706] http_server.cc:83 [HTTP] New connection
[788.706] http_server.cc:83 [HTTP] New connection
[788.707] http_server.cc:231 [HTTP] GET / [body=0B, origin=""]
[788.870] http_server.cc:231 [HTTP] GET /favicon.ico [body=0B, origin=""]
[788.870] http_server.cc:90 [HTTP] Client disconnected
[825.672] http_server.cc:83 [HTTP] New connection
[825.672] http_server.cc:231 [HTTP] POST /status [body=0B, origin="https://ui.perfetto.dev"]
[831.085] http_server.cc:231 [HTTP] OPTIONS /status [body=0B, origin="https://ui.perfetto.dev"]
[831.086] http_server.cc:90 [HTTP] Client disconnected
[831.332] http_server.cc:83 [HTTP] New connection
[831.333] http_server.cc:231 [HTTP] POST /status [body=0B, origin="https://ui.perfetto.dev"]
[831.661] http_server.cc:83 [HTTP] New connection
[831.661] http_server.cc:231 [HTTP] GET /websocket [body=0B, origin="https://ui.perfetto.dev"]
Segmentation fault (core dumped)
```
cc: @hliuca
### Versions
Torch version:
pytorch-triton-rocm 3.1.0+cf34004b8a
torch 2.6.0.dev20241018+rocm6.2
torchaudio 2.5.0.dev20241018+rocm6.2
torchvision 0.20.0.dev20241018+rocm6.2
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | module: rocm,triaged,oncall: profiler | low | Critical |
2,598,236,330 | ant-design | Modal contains a focusable element that has no interactive content | ### Reproduction link
[](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-hy2spp)
### Steps to reproduce
1. Open any modal dialog, either the basic one or the ones created with Modal.method()
2. Open the browser console and type `document.activeElement`
3. The active element is empty and not interactive
<div tabindex="0" style="width: 0px; height: 0px;… hidden; outline: none;" aria-hidden="true"></div>
### What is expected?
From MDN:
Focusable elements should have interactive semantics
If an element can be focused using the keyboard, then it should be interactive; that is, the user should be able to do something to it and produce a change of some kind (for example, activating a link or changing an option).
The modal should automatically focus on the close button, or the Ok button, and navigation with Tab or Shift Tab should send focus on interactive elements, not on empty and non-interactive ones. (Description re-used from old issue linked below)
### What is actually happening?
An empty and non-interactive element is being focused by default and/or by navigating with Tab or Shift+Tab when the modal is open
Environment Info
antd 5.1.6
React 18
System any system
Browser any browser
| Environment | Info |
| --- | --- |
| antd | 5.21.4 |
| React | 18 |
| System | any system |
| Browser | any browser |
---
This issue was reported before here: https://github.com/ant-design/ant-design/issues/40380 and closed by this fix here: https://github.com/react-component/dialog/pull/393. But the issue still persists in the latest antd version
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Minor |
2,598,240,107 | deno | Deno compile --icon not taken into account | Version: Deno 2.0.0
Platform: Windows
When specifying an icon (.ico 128x128px) for the exe using :
> deno compile --icon LogoName.ico main.ts
=> The build succeeds, but the default Windows application icon is used.
I managed to reproduce the issue with a blank "Hello World" project.
Repro :
- deno init my_project
- deno compile --icon LogoName.ico main.ts
| bug,compile | low | Major |
2,598,248,040 | langchain | make sure that @tool decorator can be applied to a method (regression?) | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
```python
import requests
from langchain_core.tools import tool

class MyAPI:
    def __init__(self):
        self.base_url = 'https://en.wikipedia.org/wiki/'

    @tool
    def lookup(self, page) -> str:
        """Look up a page on wikipedia. Include just the name of the page. Rest of the URL will be filled in."""
        url = self.base_url + page
        return requests.get(url).text

api = MyAPI()
api.lookup.invoke('hello')
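# Why this fails today: decorators run at class-definition time, so `tool`
# wraps the plain function and the resulting object is stored as a class
# attribute. Attribute access on an instance never binds `self`.
# Minimal stand-in below (FakeTool is hypothetical, not langchain's Tool):
class FakeTool:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, arg):
        return self.fn(arg)  # `self` of the owning instance is never supplied


class Demo:
    @FakeTool
    def lookup(self, page):
        return page


try:
    Demo().lookup.invoke('hello')
except TypeError as err:
    # 'hello' gets bound to `self`, leaving `page` missing
    print('unbound method:', err)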
``` | Ɑ: core | medium | Major |
2,598,253,465 | ui | [bug]: shadcn cli - diff command not working | ### Describe the bug
Run: `npx shadcn@latest diff`
Produce: `No updates found.`
Note: I have changes in the calendar component
### Affected component/components
All
### How to reproduce
Run: `npx shadcn@latest init`
```bash
Framework: Vite
Style: default
Base Color: Slate
CSS variables: No
```
Run: `npx shadcn@latest add calendar`
Change the calendar component
Run: `npx shadcn@latest diff`
Result: `No updates found.`
### System Info
```bash
Node: v22.10.0
npx: v10.8.2
shadcn: v2.1.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,598,256,414 | flutter | Add `et test --output=...` | Today, `et test` just runs the underlying test executable, and reports a binary pass (exit code 0) or fail (otherwise).
A failed test also has a link to a log file on disk in a `/tmp` directory.
This is a good initial start, but is not yet at the level where we can encourage broad use (or use it on CI).
Let's add 2 modes initially:
- `errors` (**default**, write failure logs to stderr, and full logs to a file in `FLUTTER_TEST_DIR` or `/tmp` locally)
- `streamed` (write all logs to stderr _only_)
As per discussions, we'll bypass doing any sort of machine parsing of test output at this time - this will let us more quickly ramp up moving test invocations from `run_tests.py` (https://github.com/flutter/flutter/issues/156243) to `et test`, and add additional test hosts (iOS, Android, etc).
See also: <https://bazel.build/reference/command-line-reference#flag--test_output> for future suggestions. | P2,team-engine,triaged-engine,e: engine-tool | low | Critical |
2,598,265,074 | yt-dlp | Add support for palestinefilminstitute.org. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
_No response_
### Example URLs
- Single video: https://www.palestinefilminstitute.org/en/aftermath
- Playlist: https://www.palestinefilminstitute.org/en/provoked-narratives
### Provide a description that is worded well enough to be understood
[Embed link](https://cdn.palestinefilminstitute.org/watch/8a53509aefaf4b34a0f82b15463390cd/)
A downloadable m3u8 URL can be extracted with [_The Stream Detector_](https://addons.mozilla.org/firefox/addon/hls-stream-detector):
https://cdn.palestinefilminstitute.org/share/hls.m3u8?token=8a53509aefaf4b34a0f82b15463390cd
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.palestinefilminstitute.org/en/aftermath']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds [fbc66e3ab] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-8.1-6.3.9600-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.3, ffprobe 2022-07-24-git-39a538f430-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.palestinefilminstitute.org/en/aftermath
[generic] aftermath: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] aftermath: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.palestinefilminstitute.org/en/aftermath
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1761, in __extract_info
File "yt_dlp\extractor\common.py", line 741, in extract
File "yt_dlp\extractor\generic.py", line 2533, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.palestinefilminstitute.org/en/aftermath
# Using the extracted m3u8 URL
[debug] Command-line config: ['-vU', '--ffmpeg-location', 'C:\\Program Files\\ffmpeg-master-latest-win64-gpl\\bin', 'https://cdn.palestinefilminstitute.org/share/hls.m3u8?token=8a53509aefaf4b34a0f82b15463390cd']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds [fbc66e3ab] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-8.1-6.3.9600-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-115155-ge4da96c6b2-20240509 (setts), ffprobe N-115155-ge4da96c6b2-20240509
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.10.16.232911 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://cdn.palestinefilminstitute.org/share/hls.m3u8?token=8a53509aefaf4b34a0f82b15463390cd
[generic] hls: Downloading webpage
[debug] Identified a direct video link
[generic] hls: Downloading m3u8 information
[generic] hls: Checking m3u8 live status
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] hls: Downloading 1 format(s): 4580
[debug] Invoking hlsnative downloader on "https://cdn2.palestinefilminstitute.org/share/hls-1080p.m3u8?token=8a53509aefaf4b34a0f82b15463390cd"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 438
[download] Destination: hls [hls].mp4
[debug] File locking is not supported. Proceeding without locking
[download] 100% of 736.03MiB in 00:06:29 at 1.89MiB/s
[debug] ffprobe command line: "C:\Program Files\ffmpeg-master-latest-win64-gpl\bin\ffprobe" -hide_banner -show_format -show_streams -print_format json "file:hls [hls].mp4"
[debug] ffmpeg command line: "C:\Program Files\ffmpeg-master-latest-win64-gpl\bin\ffprobe" -show_streams "file:hls [hls].mp4"
[FixupM3u8] Fixing MPEG-TS in MP4 container of "hls [hls].mp4"
[debug] ffmpeg command line: "C:\Program Files\ffmpeg-master-latest-win64-gpl\bin\ffmpeg" -y -loglevel repeat+info -i "file:hls [hls].mp4" -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart "file:hls [hls].temp.mp4"
```
| site-request | low | Critical |
2,598,279,765 | flutter | [pigeon] Make the inputs to pigeon non-flag arguments | I tried to generate multiple inputs at once with `--input pigeons/*.dart` without thinking about it, and it silently generated the first one but not the second, which is pretty bad. That shouldn't actually work as written, but it should have errored.
But really what we should do is make the input file(s) positional (non-flag) arguments, since they are often the only input, they are required, and that's an *extremely* common pattern for command-line tools. And it allows shell globbing to Just Work.
We should keep `--input` as a hidden argument that passes through, so we don't break everyone's muscle memory. But we should update all the READMEs to not use `--input`. | package,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Critical |
2,598,286,525 | godot | Project-wide-search keyboard shortcut doesn't work if no script is open | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Mac OS 10.14.8
### Issue description
See title.
### Steps to reproduce
1. Open a project.
2. Press Command+Shift+F.
3. Observe that nothing happens.
4. Open the script tab and do the same, still nothing happens.
5. Open a script, and now it works.
### Minimal reproduction project (MRP)
N/A | discussion,topic:editor,usability | low | Minor |
2,598,292,335 | godot | ImageTexture.is_pixel_opaque documented but nonexistent | ### Tested versions
- Reproducible in 4.2 and 4.3 stable
### System information
Godot v4.3.stable - Debian GNU/Linux 10 (buster) 10 - X11 - GLES3 (Compatibility) - Radeon RX 580 Series (POLARIS10, DRM 3.27.0, 4.19.0-27-amd64, LLVM 7.0.1) - AMD Ryzen 5 3600X 6-Core Processor (12 Threads)
### Issue description
`Texture2D` has a documented function, `_is_pixel_opaque` that is nonexistent in GDScript when called on an `ImageTexture`. It should probably actually be `is_pixel_opaque` which is also nonexistent in GDScript. This appears to be a binding error. Reasoning follows.
The C++ `ImageTexture` source code implements `ImageTexture::is_pixel_opaque`, overriding virtual `Texture2D::is_pixel_opaque`, but the method doesn't seem to be bound in `::_bind_methods` for `Texture2D` nor `ImageTexture`. Both `_is_pixel_opaque` and `is_pixel_opaque` are "nonexistent" in base `ImageTexture` and cause errors when invoked from GDScript.
`Texture2D` has a virtual binding for `_is_pixel_opaque` in `Texture2D::_bind_methods` (scene/resources/texture.cpp). `ImageTexture` does not override this, but does implement `is_pixel_opaque` with no leading underscore. This function. `is_pixel_opaque`, is the declared virtual in `Texture2D` (scene/resources/texture.h).
`Sprite2D` calls `Texture2D::is_pixel_opaque` directly, bypassing the GDScript bindings.
My understanding of the GDScript bindings is limited, but it seems to me that `Texture2D` should have a virtual binding for `is_pixel_opaque` (with no leading underscore) and that will expose the function `ImageTexture::is_pixel_opaque` in GDScript. The virtual binding of `Texture2D::_is_pixel_opaque` should be removed/replaced, solving the issue. Based on the header files and use of the function elsewhere, this seems the intended configuration.
There is documentation impact in that the documentation has a leading underscore that seems incorrect based on the internal function usage. Probably no leading underscore is intended based on the actual implementation. I don't know if changing the binding will automagically update the documentation.
### Steps to reproduce
In GDScript, instantiate an `ImageTexture` in any way you like, perhaps by loading one from a file, then try call `_is_pixel_opaque` or `is_pixel_opaque` on it. Either one will fail with the message, "Invalid call. Nonexistent function 'is_pixel_opaque' in base 'ImageTexture'.
```
var image: Image = Image.load_from_file("filename.png")
var texture: ImageTexture = ImageTexture.create_from_image(image)
print("opaque at [5, 5]? ", texture._is_pixel_opaque(5, 5))
```
### Minimal reproduction project (MRP)
[pixel-opacity-test.zip](https://github.com/user-attachments/files/17440150/pixel-opacity-test.zip)
| discussion,topic:core | low | Critical |
2,598,294,315 | flutter | Refactor `WorkerPool`/`WorkerTask` system in `et` | There is an initial worker task/pool/reporter system implemented in `engine_tool`:
- https://github.com/flutter/engine/blob/main/tools/engine_tool/lib/src/worker_pool.dart
- https://github.com/flutter/engine/blob/main/tools/engine_tool/lib/src/proc_utils.dart
It is used by both `et test` and `et lint` to run executables and report on their progress:
- https://github.com/flutter/engine/blob/main/tools/engine_tool/lib/src/commands/test_command.dart
- https://github.com/flutter/engine/blob/main/tools/engine_tool/lib/src/commands/lint_command.dart
In theory, it could also power `et format` but does not:
- https://github.com/flutter/engine/blob/main/tools/engine_tool/lib/src/commands/format_command.dart
The overall design is good, but some tweaks will be needed to support more unified output, i.e. https://github.com/flutter/flutter/issues/157182.
Some notes:
- We'd want at least the option to do streaming reporting of task progress
- Let's separate `ProcessArtifacts` from the act of a task, and make it the responsibility of the runner/reporter
- Let's make sure this is CI compatible, i.e. friendly towards the `FLUTTER_LOGS_DIR` design. | P3,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Minor |
2,598,319,118 | go | syscall: special case `cmd.exe /c <command>` in StartProcess | ## Background
It is well known that [os/exec](https://pkg.go.dev/os/exec) doesn't correctly escape or quote `*.bat`, `*.cmd`, `cmd /c *` arguments. This is because in all three cases the escape/quote rules that apply are the ones defined in [cmd.exe docs](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/cmd), which are slightly different from the ones `os/exec` and [syscall.StartProcess](https://pkg.go.dev/syscall?GOOS=windows#StartProcess) follow, defined in [parsing-c-command-line-arguments](https://learn.microsoft.com/en-us/cpp/c-language/parsing-c-command-line-arguments?view=msvc-170). This discrepancy causes bugs like #1849, #17149, #68313, and can also lead to security vulnerabilities.
The only workaround that exists today to reliably execute `*.bat`, `*.cmd`, `cmd /c *` command using `os/exec` is to manually escape/quote the arguments at caller site and pass the resulting string to [syscall.SysProcAttr.CmdLine](https://pkg.go.dev/syscall?GOOS=windows#SysProcAttr). The problem is that having a robust implementation is complicated, so projects tend to have half-backed solutions, if any.
## Proposal
Special case `%COMSPEC% /c <command>` (%COMSPEC% usually points to `cmd.exe`) by applying the cmd.exe escape/quote rules to `<command>`. The exact rules are left for the implementer, as they are well documented.
Some considerations to take into account:
- `cmd.exe` has two types of quotation rules, let's call them default and special. We should follow the special rule (search for `/s` in the docs), as it is 100% predictable in comparison with the default rule, which has many limitations and can easily fall back to the special rule.
The special rule is simple: if `<command>` starts with a `"`, then the leading and trailing quotes are stripped. This means that we should always surround `<command>` with quotes and pass `/s` before `/c`.
- `cmd.exe` allows passing multiple cmd-specific parameters before `/c` appears. The command is always what goes after `/c`. Therefore, `cmd.exe /d /c <command>` is valid and we should special-case it.
- `cmd.exe` also executes commands passed after the `/k` parameter. That is used to keep the command processor running after the command is executed, so it doesn't really fit well with the one-shot approach of `syscall.StartProcess`. We can ignore it.
## Why not special case also bat/cmd files
This proposal doesn't attempt to solve issues related to directly executing bat/cmd scripts for the following reasons:
- Windows [CreateProcess](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessw) API explicitly disallows passing bat/cmd scripts in the application name and recommends using `cmd /c` instead.
- It is not documented how to reliably detect bat/cmd files. Just using the extension seems brittle. In part this is why `CreateProcess` recommends using `cmd /c`.
- Rust has been trying to reliably support bat/cmd scripts for more than three years, and the Rust library team recently tried to remove that support due to it being difficult to implement correctly: https://github.com/rust-lang/rust/issues/123728.
- `os/exec` will now have a good workaround to execute bat files: `exec.Command("cmd.exe", "/c", "foo.bat", "arg 1")`
Note that I'm not putting this proposal in the proposal process because it is not adding new API nor breaking existing behavior. It is more as an umbrella issue to discuss the design and the implementation.
@golang/security @golang/windows | OS-Windows,NeedsDecision,compiler/runtime | low | Critical |
2,598,327,292 | terminal | When running Python code, user's input will appear in the incorrect place | ### Windows Terminal version
1.23.2913.0
### Windows build number
10.0.27729.1000
### Other Software
Visual Studio 2022 /w Python 3.9
### Steps to reproduce
1: Create a python script that accepts user input using Visual Studio 2022
2: Run the script and begin typing
### Expected Behavior
User input will appear in the correct place, such as the end of a line
### Actual Behavior
User input appears at the top left of the terminal window | Product-Conpty,Area-Input,Issue-Bug | low | Minor |
2,598,341,870 | rust | llvm noalias data gets lost when passing large structs | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
https://godbolt.org/z/b8as1TxdP
```rust
#[repr(C)]
pub struct Big<'a>(&'a mut u64, &'a mut u64, usize);
#[inline(never)]
pub unsafe fn bad(a: Big) -> u64 {
*a.0 = 0;
*a.1 = 1;
*a.0
}
#[repr(C)]
pub struct Good<'a>(&'a mut u64, &'a mut u64);
#[inline(never)]
pub unsafe fn good(a: Good) -> u64 {
*a.0 = 0;
*a.1 = 1;
*a.0
}
```
```asm
example::bad::h15eeefb1851d2a2e:
mov rax, qword ptr [rdi]
mov qword ptr [rax], 0
mov rcx, qword ptr [rdi + 8]
mov qword ptr [rcx], 1
mov rax, qword ptr [rax]
ret
example::good::h034657cbabb42f00:
mov qword ptr [rdi], 0
mov qword ptr [rsi], 1
xor eax, eax
ret
```
I expected the functions to emit similar codegen, but the bad one doesn't get `noalias` on the references themselves, because LLVM thinks they're passed by pointer since the struct is large.
maybe [alias.scope](https://llvm.org/docs/LangRef.html#id2005) can help with this?
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (3ed6e3cc6 2024-10-17)
binary: rustc
commit-hash: 3ed6e3cc69857129c1d314daec00119ff47986ed
commit-date: 2024-10-17
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
Compiler returned: 0
``` | T-compiler,A-ABI,C-optimization | low | Critical |
2,598,344,095 | storybook | [Bug]: @storybook/addon-actions v7.6.* is missing @storybook/preview-api from dependencies | ### Describe the bug
It's only listed as a devDependency, but is required at runtime.
https://github.com/storybookjs/storybook/blob/d43ced4f6d1efc7d352f4dc9528d29dfe72dd336/code/addons/actions/package.json#L78
- https://github.com/storybookjs/storybook/blob/ac09301a97648ef72a21fbfeb6b5e2b2a8da81ef/code/addons/actions/src/runtime/action.ts#L2
- https://github.com/storybookjs/storybook/blob/ac09301a97648ef72a21fbfeb6b5e2b2a8da81ef/code/addons/actions/src/decorator.ts#L1
### Reproduction link
https://github.com/search?q=repo%3Astorybookjs%2Fstorybook+path%3A%2F%5Ecode%5C%2Faddons%5C%2Factions%5C%2Fsrc%5C%2F%2F+preview-api+language%3ATypeScript&type=code&l=TypeScript
### Reproduction steps
_No response_
### System
Not relevant
### Additional context
_No response_ | bug,core,won't fix | low | Critical |
2,598,389,865 | godot | SpriteFrames.get_frame_texture() return the whole spritesheet instead of one frame texture | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8], v4.2.2.stable.official [15073afe3], v4.2.1.stable.official [b09f793f5]
### System information
Windows 10 - Godot v4.2.2.stable.official [15073afe3] | Forward+
### Issue description
I used the "Add frames from sprite sheet" option in a SpriteFrames to add 4 frames, like this:

I expect `sp.get_frame_texture("default", 2)` to return only the frame 2 (the green "3") but it returns instead the whole spritesheet.

### Steps to reproduce
- Create a SpriteFrame
- Use a spritesheet to create frames
- Create a debug MeshInstance3D pointed by a camera
- Use a script to set the debug mesh texture to the result of `get_frame_texture()` at game start
- Check that the displayed texture is not a tile texture, but the whole spritesheet texture
### Minimal reproduction project (MRP)
[Proof that SpriteFrames is busted.zip](https://github.com/user-attachments/files/17440689/Proof.that.SpriteFrames.is.busted.zip)
| discussion,topic:rendering,documentation | low | Critical |
2,598,403,058 | godot | Detail mask only applies UV1 even when set to UV2 | - *Production edit: Related to https://github.com/godotengine/godot/issues/54644.*
### Tested versions
v4.3.stable.steam [77dcf97d8]
### System information
Windows 10
### Issue description
Just trying to get the hang of material importing in Godot, and I think I found a really annoying bug. I was making a road texture with the road lines as a detail texture on top. The asphalt and the roadlines have 2 separate UVs that both imported properly
I used the detail section of the material to put the roadlines on top and set it to UV2. The roadlines need a mask to mask out just the roadlines.
Everything works fine without the mask:

But when I add the (obviously needed) mask to the detail section, it reverts to UV1 even though it's still set to UV2

### Steps to reproduce
Take a mesh with two UV maps.
enable Detail in material, Set Detail to UV2, add Detail Mask
### Minimal reproduction project (MRP)
[detailmask_bugreport.zip](https://github.com/user-attachments/files/17440750/detailmask_bugreport.zip)
all you have to do is go into the surface 0 material on the included model and under Detail/Mask add the Roadlines_mask.png
before you add that mask you can see UV2 is working properly. After you add the mask you can see it's still using UV1 | bug,topic:rendering,topic:3d | low | Critical |
2,598,432,190 | excalidraw | Option to Disable AI? | I work for a school district that restricts the use of AI in instructional technology tools online. Is there an option to disable the AI functionality of Excalidraw? If there is not an option, is that something that could be added to the service? | enhancement,ai | low | Major |
2,598,467,286 | PowerToys | Text Extractor Shortcut is not working properly. | ### Microsoft PowerToys version
v0.85.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
TextExtractor
### Steps to reproduce
Text Extractor Shortcut (```Win + Shift + T```) is not working properly.
I tried different custom shortcut options. Unfortunately, the result is the same (not working).
### ✔️ Expected Behavior
Pressing the ```WIN + SHIFT + T``` opens up a prompt for selecting some text from an image.
### ❌ Actual Behavior
Nothing happens.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,598,481,909 | ui | [feat]: Document use of aliases 'components' and 'utils' | ### Feature description
First of all, thanks a lot for your work on `shadcn/ui`!
The [documentation for `components.json` aliases](https://ui.shadcn.com/docs/components-json#aliases) doesn't explain the purpose of the `components` and `utils` aliases. I would particularly like to know how they differ from the `ui` and `lib` aliases.
To figure out how these paths are used, I ran the command `npx shadcn@latest init` (to create a Next.js project on an empty folder), followed by `npx shadcn@latest add --all` to add all components.
Files were created in `@/lib`, `@/hooks` and `@ui`, but not for the `components` and `utils` aliases. This makes me wonder if those other aliases are used at all currently.
I think any clarification here would help users better understand the project structure. Thanks for your time!
**Note:** A clue about the `components` vs `ui` aliases might be this phrase found in the changelog:
> The add command is now much more capable. You can now add UI components but also import more complex components (coming soon). | area: request | low | Minor |
2,598,482,304 | godot | SubViewportContainer is rendered only once the first frame and never updates the drawing on Safari | ### Tested versions
IOS 18.0.1
Chrome:
<img width="682" alt="image" src="https://github.com/user-attachments/assets/e56c9bad-b976-499f-bcde-89f4b257d785">
Safari:
<img width="282" alt="image" src="https://github.com/user-attachments/assets/04b9578d-830d-44c9-9d59-315620a9f02f">
- Reproducible in 4.2, 4.2.2, 4.3, 4.3.dev1
### System information
Godot v4.3.stable - macOS 15.0.1 - GLES3 (Compatibility) - Apple M1 - Apple M1 (8 Threads)
### Issue description
All of my Godot games hosted in the cloud crashed on my iPhone and Mac after I upgraded both. I found the cause after a week of investigation. My Linux (20.04), Windows (8, 10, 11), and other Macs (on the older 14.3 version) don't have this issue. The Mac I'm using right now, where I'm typing this, is already upgraded to the latest version, 15.0.1. I can't see my project due to a SubViewportContainer issue. There are no error logs in the HTML5 export. I was able to see it in Chrome but not in Safari. My iPhone couldn't display it either. See the video:
https://drive.google.com/file/d/1B40qT4eeABohT4miUnMhjiavpCuOUWfe/view?usp=sharing
The right browser is Chrome, and the left browser is Safari.
There are two lines that I can reproduce, and I have this in a zip file as well.
```gdscript
var white_rectangle = $SubViewportContainer/SubViewport/ColorRect # BUG HERE
# var white_rectangle = $ColorRect # WORKING
```
If you comment out the first line and uncomment the second, you will see it working in Safari. However, when you comment out the second line and uncomment the first, you will see the bug without any error logs.
The first line references the ColorRect inside the SubViewport node; the second line references the ColorRect outside of it.
### Steps to reproduce
1) run HTML tester in safari on 15.0.1 Mac OS only
2) comment the first line out and uncomment the second line and then re-run html5 export.
### Minimal reproduction project (MRP)
[rendering_bug.zip](https://github.com/user-attachments/files/17441266/rendering_bug.zip)
| bug,platform:ios,topic:rendering,confirmed | low | Critical |
2,598,483,128 | deno | `deno run -A npm:create-hono@latest` doesn't exit after creating | Version: Deno 2.0.2
I ran `deno run --log-level=debug -A npm:create-hono@latest`; while the template is created successfully, the application never exits. Here is the debug output:
```
$ deno run --log-level=debug -A npm:create-hono@latest
DEBUG RS - deno::args:623 - No .npmrc file found
DEBUG RS - deno::args:930 - Finished config loading.
DEBUG RS - deno::cache::cache_db:168 - Opening cache /home/akshay/.cache/deno/dep_analysis_cache_v2...
DEBUG RS - deno::cache::cache_db:168 - Opening cache /home/akshay/.cache/deno/node_analysis_cache_v2...
DEBUG RS - deno::cache::cache_db:168 - Opening cache /home/akshay/.cache/deno/v8_code_cache_v2...
DEBUG RS - deno::npm::managed::resolution:282 - Running npm resolution.
DEBUG RS - deno_npm::resolution::graph:932 - <package-req> - Resolved create-hono@latest to create-hono@0.14.1
DEBUG RS - deno::npm::managed:324 - Resolved package folder of create-hono@0.14.1 to /home/akshay/.cache/deno/npm/registry.npmjs.org/create-hono/0.14.1
DEBUG RS - deno::js:10 - Deno isolate init with snapshots.
DEBUG RS - deno::worker:186 - main_module file:///home/akshay/.cache/deno/npm/registry.npmjs.org/create-hono/0.14.1/bin
DEBUG RS - deno_runtime::worker:529 - Updating V8 code cache for script: file:///home/akshay/.cache/deno/npm/registry.npmjs.org/create-hono/0.14.1/bin, [16424317911244797316]
create-hono version 0.14.1
? Target directory my-app
? Which template do you want to use? deno
⠋ Cloning the templateDEBUG RS - hyper_util::client::legacy::connect::dns:122 - resolving host="api.github.com"
DEBUG RS - hyper_util::client::legacy::connect::http:643 - connecting to 140.82.114.5:443
⠴ Cloning the templateDEBUG RS - hyper_util::client::legacy::connect::http:646 - connected to 140.82.114.5:443
⠙ Cloning the templateDEBUG RS - h2::client:1281 - binding client connection
DEBUG RS - h2::client:1286 - client connection bound
DEBUG RS - h2::codec::framed_write:213 - send frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384, max_header_list_size: 16384 }
DEBUG RS - h2::proto::connection:138 - Connection; peer=Client
DEBUG RS - hyper_util::client::legacy::pool:396 - pooling idle connection for ("https", api.github.com)
DEBUG RS - h2::codec::framed_write:213 - send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
DEBUG RS - h2::codec::framed_write:213 - send frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
⠸ Cloning the templateDEBUG RS - h2::codec::framed_read:405 - received frame=Settings { flags: (0x0), max_concurrent_streams: 100, initial_window_size: 67108864, max_frame_size: 68608, enable_connect_protocol: 1 }
DEBUG RS - h2::codec::framed_write:213 - send frame=Settings { flags: (0x1: ACK) }
DEBUG RS - h2::codec::framed_read:405 - received frame=Settings { flags: (0x1: ACK) }
DEBUG RS - h2::proto::settings:56 - received settings ACK; applying Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384, max_header_list_size: 16384 }
DEBUG RS - h2::codec::framed_read:405 - received frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
DEBUG RS - hyper_util::client::legacy::connect::dns:122 - resolving host="codeload.github.com"
DEBUG RS - hyper_util::client::legacy::connect::http:643 - connecting to 4.237.22.35:443
⠼ Cloning the templateDEBUG RS - hyper_util::client::legacy::connect::http:646 - connected to 4.237.22.35:443
DEBUG RS - h2::client:1281 - binding client connection
DEBUG RS - h2::client:1286 - client connection bound
DEBUG RS - h2::codec::framed_write:213 - send frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384, max_header_list_size: 16384 }
DEBUG RS - h2::proto::connection:138 - Connection; peer=Client
DEBUG RS - hyper_util::client::legacy::pool:396 - pooling idle connection for ("https", codeload.github.com)
DEBUG RS - h2::codec::framed_write:213 - send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
DEBUG RS - h2::codec::framed_write:213 - send frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
⠦ Cloning the templateDEBUG RS - h2::codec::framed_read:405 - received frame=Settings { flags: (0x0), max_concurrent_streams: 100, initial_window_size: 67108864, max_frame_size: 68608, enable_connect_protocol: 1 }
DEBUG RS - h2::codec::framed_write:213 - send frame=Settings { flags: (0x1: ACK) }
DEBUG RS - h2::codec::framed_read:405 - received frame=Settings { flags: (0x1: ACK) }
DEBUG RS - h2::proto::settings:56 - received settings ACK; applying Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384, max_header_list_size: 16384 }
⠙ Cloning the templateDEBUG RS - h2::codec::framed_read:405 - received frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
✔ Cloning the template
DEBUG RS - h2::codec::framed_read:405 - received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(1) }
DEBUG RS - h2::codec::framed_read:405 - received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(1) }
```
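Not part of the original report, but a plausible explanation: a Deno (or Node) process only exits once its event loop is empty, so any leftover handle from the cloning step (a timer, socket, or child process that is never closed) would produce exactly this hang. A minimal sketch of the pattern, with hypothetical names:

```typescript
// Hedged sketch (an assumption, not traced to create-hono internals):
// a live timer handle keeps the event loop busy, so the runtime never exits.
const handle = setInterval(() => {}, 1_000);

// While `handle` is live, `deno run` (and `node`) will not exit on its own.
// Releasing it lets the runtime drain the event loop and terminate normally.
clearInterval(handle);

const msg = "event loop drained, process can exit";
console.log(msg);
```

If the hang is caused by such a leaked handle inside the CLI, it would explain why the same package exits cleanly under npm only if npm's wrapper terminates differently.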
It seems to work fine with npm:
```
$ npm create hono@latest
Need to install the following packages:
create-hono@0.14.1
Ok to proceed? (y)
> npx
> create-hono
create-hono version 0.14.1
? Target directory my-app
? Which template do you want to use? deno
✔ Cloning the template
npm notice
npm notice New minor version of npm available! 10.8.0 -> 10.9.0
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.9.0
npm notice To update run: npm install -g npm@10.9.0
npm notice
``` | bug,node compat | low | Critical |
2,598,491,028 | ui | [feat]: Copy "style" info from changelog to `components.json` docs. | ### Feature description
The most detailed information about the "style" configuration is currently available only on the [changelog page](https://ui.shadcn.com/docs/changelog#styles). For better discoverability, I think it would be useful to copy that into the [`components.json` docs, into the "style" section](https://ui.shadcn.com/docs/components-json#style).
Based on [this discussion](https://github.com/shadcn-ui/ui/discussions/2930), I think some people were confused about "style", and would have benefited from seeing the information which is currently on the changelog.
Thanks!
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |