| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,677,603,659 | PowerToys | Problems with Taskbar and selecting with the Mouse | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
I've been using PowerToys in its various iterations in Windows for more than a decade. I realize the recent incarnation is new and unrelated to versions that ran on earlier versions of Windows. Kudos to all those who work on these wonderful toys.
I cannot give specifics, nor do I want to duplicate the steps that led to the problem, but coincident with updating PowerToys last week, I've had problems with selecting items in the Taskbar (what we used to call the System Tray). I attempted to disable various toys related to the mouse to correct the issue, but with no luck. I ended up uninstalling PowerToys, rebooting, and then reinstalling directly from the Microsoft PowerToys site. So far, all looks good.
### ✔️ Expected Behavior
I expect to be able to select items in the System Tray and interact with them.
### ❌ Actual Behavior
The mouse worked everywhere except in the System Tray.
### Other Software
I uninstalled PowerToys, rebooted, and reinstalled. I do not know the previous version, but I updated last week, so it too was likely 0.86.0. | Issue-Bug,Needs-Triage | low | Minor |
2,677,608,751 | go | proposal: testing: add additional data to testing.T.Context | ### Proposal Details
The accepted proposal https://github.com/golang/go/issues/36532 added a Context method to `testing.T`, which is great, but I can think of two straightforward ways to improve its usefulness further:
1. Expose `T.Deadline()` as the deadline for the context. There are (hopefully rare) cases where something needs to propagate deadline information to something that isn't Go, e.g. a subprocess, where simple cancelation may not be sufficient. Even in cases where the final consumer of the context is Go, it's helpful for e.g. logging to be able to distinguish between a context cancelled due to a timeout vs. cancelled for some other reason. This should be as simple as swapping out `context.WithCancel` for `context.WithDeadline` where appropriate.
2. Create a `trace.Task` named for the test. When running a test or benchmark with `-trace`, it can be handy to be able to track regions back to specific tests. This, too, is fairly low-cost given that the `testing` package already depends on `runtime/trace`. | Proposal | low | Minor |
2,677,611,111 | deno | Wasm module importing specifier that can't be resolved is missing referrer in error message | ```wat
;; toolkit.wat
(module
(import "env" "get_time_in_seconds" (func $get_time (result i32)))
(func (export "getValue") (result i32)
call $get_time
)
)
```
```js
// env.ts
function getTimeInSeconds() {
return Date.now() / 1000;
}
export { getTimeInSeconds as get_time_in_seconds };
```
```js
// main.ts
import { getValue } from "./toolkit.wasm";
console.log(getValue());
```
Actual:
```shellsession
> wat2wasm toolkit.wat
> deno run main.ts
error: Relative import path "env" not prefixed with / or ./ or ../
```
Expected:
```shellsession
> wat2wasm toolkit.wat
> deno run main.ts
error: Relative import path "env" not prefixed with / or ./ or ../
at file:///.../toolkit.wasm
``` | bug,wasm,dx | low | Critical |
2,677,631,079 | deno | Import map unfurl Wasm module import specifiers on publish | Publishing a module to JSR should unfurl the import specifiers inside a Wasm module. | bug,publish | low | Minor |
2,677,644,738 | next.js | Catch-all route also intercepts and catches static assets and js chunks | ### Link to the code that reproduces this issue
https://github.com/r6203/next-catch-all-bug
### To Reproduce
1. Create a catch-all route
2. run `next dev`
3. go to any static route or js chunk or view server logs to see chunks intercepted
### Current vs. Expected behavior
Having a catch-all route at the root of the app shouldn't intercept Next-generated assets or static assets.
Expected: a working app where each non-asset request resolves to the catch-all route, while requests to Next-generated assets (`_next/**/*`) resolve to the corresponding files.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #48-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 14:04:52 UTC 2024
Available memory (MB): 60075
Available CPU cores: 16
Binaries:
Node: 22.2.0
npm: 10.7.0
Yarn: N/A
pnpm: 9.9.0
Relevant Packages:
next: 14.2.12 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.4.5
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
For an in-production example of this issue with a workaround see https://github.com/session-foundation/websites/tree/main/apps/foundation
Note this is reopening this issue https://github.com/vercel/next.js/issues/67806 | bug,Navigation,Runtime | low | Critical |
2,677,660,393 | godot | [4.3] Using color picker in Project Settings doesn't stick | ### Tested versions
Reproducible in 4.3.stable
Not reproducible in 4.2.2.stable, 4.4.dev3
### System information
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org Vulkan 1.3.250 - Forward+ - Using Device #0: Intel - Intel(R) Iris(R) Xe Graphics
### Issue description
Changing the Default Clear Color in Project Settings by using the color picker outside of the Godot window fails to take effect when the program is run. The color preview for the Default Clear Color updates appropriately in Project Settings, and even the editor preview changes; however, when the program is run it uses the previous Default Clear Color value. If Project Settings is reopened it still shows the picked color. However, if the project is reloaded, the Default Clear Color reverts to the previous value in Project Settings, in the editor preview, and when running the program.
I confirmed the same behavior when changing the Debug Shape Color and I assume it is the same for all the others.
I tested for similar behavior on modulate in the inspector however it worked correctly.
(Note: I am not using Single Window Mode - the color is picking fine just not sticking)
PS - First bug report, let me know if there is anything I missed!
### Steps to reproduce
1. Add a Node2D node and save the program.
2. Go to Project Settings
3. Change Default Clear Color using color picker outside of Godot window
4. Run program
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing | low | Critical |
2,677,689,858 | vscode | Find & Replace in Files => Very Slow on Mac |
Type: <b>Bug</b>
After a recent upgrade it became very slow. I executed a "Find and Replace" action across about 200 files (about 400 replacements); it created many "rg" processes on macOS, which slowed the Mac down completely for about 5-10 minutes. It reports that the files were processed, but I see the "rg" processes for a very long time afterwards and the machine feels hung; I cannot run any Terminal commands, for example. This has happened a few times already.
VS Code version: Code 1.95.3 (Universal) (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|854, 236, 512|
|Memory (System)|64.00GB (1.14GB free)|
|Process Argv|--crash-reporter-id 176211f7-0493-4455-86c6-1e05a06f6d84|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (45)</summary>
Extension|Author (truncated)|Version
---|---|---
project-manager|ale|12.8.0
calva|bet|2.0.481
calva-spritz|bet|1.0.5
vscode-tailwindcss|bra|0.12.14
dart-code|Dar|3.100.0
flutter|Dar|3.100.0
code-runner|for|0.12.2
get-snippets|get|4.3.0
go|gol|0.42.1
arb-editor|Goo|0.2.1
firebase-snippets|has|0.0.1
vscode-drawio|hed|1.6.6
Sbt|itr|0.1.7
flutter-tree|mar|1.0.0
git-graph|mhu|1.30.0
xml-format|mik|1.1.3
inline-fold|moa|0.2.6
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.2
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.388.0
vscode-yaml-sort|Pas|6.6.0
vscode-thunder-client|ran|2.30.0
java|red|1.36.0
LiveServer|rit|5.7.9
flutter-riverpod-snippets|rob|1.2.2
scala|sca|0.5.8
markdown-preview-enhanced|shd|0.8.15
iconfont-preview|stx|0.0.5
even-better-toml|tam|0.19.2
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
markdown-all-in-one|yzh|3.6.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
da93g388:31013173
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | bug,search,perf | low | Critical |
2,677,689,873 | kubernetes | Add support for environment variable resolution in `httpGet` probe paths | Currently, Kubernetes does not support resolving environment variables in `httpGet` probe paths, which limits configuration flexibility.
Expected behavior:
- Allow environment variable interpolation like `path: "$(CONTEXT_PATH)/health/"`
- ~~Support standard shell-like variable substitution syntax~~
Sample configuration:
```yaml
readinessProbe:
httpGet:
path: "$(CONTEXT_PATH)/actuator/health/readiness"
port: 8080
scheme: HTTP
```
| sig/node,kind/feature,needs-triage | medium | Major |
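Kubernetes already performs this style of `$(VAR)` substitution for container `command` and `args`; the feature request is to apply the same expansion to probe paths. A sketch of what that expansion looks like, under stated assumptions (the function name `expandProbePath` and the lookup wiring are illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// varRef matches Kubernetes-style $(VAR_NAME) references.
var varRef = regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)

// expandProbePath substitutes $(VAR) references in an httpGet probe path
// using the supplied lookup function. Unresolved references are left
// untouched, mirroring how command/args expansion behaves.
func expandProbePath(path string, lookup func(string) (string, bool)) string {
	return varRef.ReplaceAllStringFunc(path, func(m string) string {
		name := varRef.FindStringSubmatch(m)[1]
		if v, ok := lookup(name); ok {
			return v
		}
		return m
	})
}

func main() {
	os.Setenv("CONTEXT_PATH", "/myapp")
	fmt.Println(expandProbePath("$(CONTEXT_PATH)/actuator/health/readiness", os.LookupEnv))
	// → /myapp/actuator/health/readiness
}
```

Leaving unresolved references intact (rather than erroring or substituting an empty string) matches the behavior users already see with `$(VAR)` in `command`/`args`.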
2,677,715,118 | kubernetes | After the kube-scheduler schedules a pod, the kubelet only starts ADD after a half-hour interval | ### What happened?
When creating a pod, after the kube-scheduler has bound it to a node, the kubelet only starts the SyncLoop ADD after a half-hour interval.
kube-scheduler.log:
`I1115 04:12:00.190703 10 schedule_one.go:286] "Successfully bound pod to node" pod="kube-system/registry-6cc84d4599-d7v2z" node="master1" evaluatedNodes=1 feasibleNodes=1`
kubelet.log:
`I1115 04:47:49.244673 504031 kubelet.go:2430] "SyncLoop ADD" source="api" pods=["kube-system/registry-6cc84d4599-d7v2z"]
`
There are a large number of suspicious logs in kubelet:
```
I1115 04:47:45.496346 504031 trace.go:236] Trace[836595825]: "container_status" containerId:c6c4174db0f85f98c4fae710d3f3a81807db30503dd7a75cc480506faba33879 (15-Nov-2024 04:47:44.439) (total time: 1056ms):
Trace[836595825]: [1.056580909s] [1.056580909s] END
I1115 04:47:45.498660 504031 trace.go:236] Trace[1152628921]: "container_status" containerId:f0020fb44ea49684f6b341b6fe58eae00cb6add89eaa217d46f1c0068bf53719 (15-Nov-2024 04:47:44.444) (total time: 1054ms):
Trace[1152628921]: [1.054541648s] [1.054541648s] END
I1115 04:47:45.500464 504031 trace.go:236] Trace[1799469360]: "container_status" containerId:efe80b266c59d35549d0e52033f80d7eae57733907adaf5b7b045bbd7df2bdb0 (15-Nov-2024 04:47:44.437) (total time: 1062ms):
Trace[1799469360]: [1.062659482s] [1.062659482s] END
I1115 04:47:45.500778 504031 trace.go:236] Trace[2101472811]: "container_status" containerId:f00a1f977b47b0948077d16d3e45e923d96dfa3789728af042e18e9976953377 (15-Nov-2024 04:47:44.447) (total time: 1053ms):
Trace[2101472811]: [1.053375625s] [1.053375625s] END
I1115 04:47:45.648080 504031 manager.go:229] "Device plugin connected" resourceName="resource.sop.huawei.com/fuse"
I1115 04:47:45.728218 504031 trace.go:236] Trace[1018875239]: "Calculate volume metrics of sidecar for pod sop/invmgrservice-77bc956598-s2hw7" (15-Nov-2024 04:47:43.386) (total time: 2341ms):
Trace[1018875239]: [2.341411012s] [2.341411012s] END
I1115 04:47:45.728671 504031 trace.go:236] Trace[2068757664]: "Calculate volume metrics of weave-version for pod manager/weave-0" (15-Nov-2024 04:47:42.023) (total time: 3704ms):
Trace[2068757664]: [3.704755612s] [3.704755612s] END
I1115 04:47:45.729295 504031 trace.go:236] Trace[1909652043]: "Calculate volume metrics of pkg for pod sop/odaepipelinemgrservice-cf46cd978-cfbh6" (15-Nov-2024 04:47:43.725) (total time: 2003ms):
Trace[1909652043]: [2.003296282s] [2.003296282s] END
I1115 04:47:45.808842 504031 trace.go:236] Trace[1375719882]: "Calculate volume metrics of log for pod sop/litecaservice-5676c5776f-shg86" (15-Nov-2024 04:47:44.090) (total time: 1718ms):
Trace[1375719882]: [1.718048823s] [1.718048823s] END
I1115 04:47:45.917400 504031 trace.go:236] Trace[700771020]: "Calculate volume metrics of mypkg for pod sop/cloudsopfmdds-0" (15-Nov-2024 04:47:44.447) (total time: 1469ms):
Trace[700771020]: [1.469530583s] [1.469530583s] END
I1115 04:47:45.917850 504031 trace.go:236] Trace[1173662521]: "Calculate volume metrics of pkg for pod manager/commonfeaturetestservice-c66fdfb5-2s7vt" (15-Nov-2024 04:47:44.448) (total time: 1469ms):
Trace[1173662521]: [1.469245066s] [1.469245066s] END
I1115 04:47:45.939300 504031 trace.go:236] Trace[1820919694]: "image_status" image:registry.caas.local/default/sop_base_image-aarch64:25.550.1660 (15-Nov-2024 04:47:44.736) (total time: 1202ms):
Trace[1820919694]: [1.202447499s] [1.202447499s] END
I1115 04:47:46.009009 504031 trace.go:236] Trace[1413003384]: "podsandbox_status" podSandboxId:3e610ebb799daf9f056931b5138e8fd9956efad36b808b9d0a64dde4af2f35cd (15-Nov-2024 04:47:44.441) (total time: 1567ms):
Trace[1413003384]: [1.567586292s] [1.567586292s] END
I1115 04:47:46.022472 504031 trace.go:236] Trace[853699448]: "container_status" containerId:44d27b1d406fb41a25f89d7e78915e5a1b8e1fdba687263907edb772a7dec413 (15-Nov-2024 04:47:44.446) (total time: 1576ms):
Trace[853699448]: [1.576327467s] [1.576327467s] END
I1115 04:47:46.024482 504031 trace.go:236] Trace[1051222000]: "container_status" containerId:ec6907e308677afe48fdf462890686bbb3aeea7ac545b401d7b55a34ebb1c0ed (15-Nov-2024 04:47:44.447) (total time: 1576ms):
Trace[1051222000]: [1.57649245s] [1.57649245s] END
I1115 04:47:46.024723 504031 trace.go:236] Trace[729411205]: "container_status" containerId:a375d2e8092d921fee7974faf450b2aa87fecda0a04b84a975ba209dc8e35842 (15-Nov-2024 04:47:44.447) (total time: 1577ms):
Trace[729411205]: [1.577505491s] [1.577505491s] END
I1115 04:47:46.024943 504031 trace.go:236] Trace[2047128525]: "container_status" containerId:1a4c4aa97ace171ce07842f077bf2215bb5d5275a2f3d5a327d0179b7dbec140 (15-Nov-2024 04:47:44.448) (total time: 1576ms):
Trace[2047128525]: [1.576844697s] [1.576844697s] END
I1115 04:47:46.045702 504031 trace.go:236] Trace[1650587259]: "podsandbox_status" podSandboxId:d5112ea26fbcc0c290f47378e8c5a9821926c48013bafa279afbe890bc921307 (15-Nov-2024 04:47:44.446) (total time: 1598ms):
Trace[1650587259]: [1.598921141s] [1.598921141s] END
I1115 04:47:46.046134 504031 trace.go:236] Trace[512891868]: "container_status" containerId:fc2660fd77348f40b2ac2d1ce733e49350851a2d9cd71d11a0af26286b079dd9 (15-Nov-2024 04:47:44.446) (total time: 1599ms):
Trace[512891868]: [1.599562164s] [1.599562164s] END
I1115 04:47:46.046385 504031 trace.go:236] Trace[125258435]: "list_containers" filter.id:,filter.podSandboxId: (15-Nov-2024 04:47:44.441) (total time: 1605ms):
Trace[125258435]: [1.60535456s] [1.60535456s] END
I1115 04:47:46.046553 504031 trace.go:236] Trace[465435416]: "list_containers" filter.id:,filter.podSandboxId: (15-Nov-2024 04:47:44.447) (total time: 1599ms):
Trace[465435416]: [1.599515223s] [1.599515223s] END
I1115 04:47:46.046716 504031 trace.go:236] Trace[1559500248]: "list_containers" filter.id:,filter.podSandboxId: (15-Nov-2024 04:47:44.445) (total time: 1600ms):
Trace[1559500248]: [1.600790269s] [1.600790269s] END
I1115 04:47:46.046845 504031 trace.go:236] Trace[436961003]: "list_containers" filter.id:,filter.podSandboxId: (15-Nov-2024 04:47:44.447) (total time: 1599ms):
Trace[436961003]: [1.599288828s] [1.599288828s] END
I1115 04:47:46.046969 504031 trace.go:236] Trace[1952747957]: "exec_sync" containerId:26f9bd6bd9764d585e0f492827ae16f28f3c3ed668fb514305501eacfffad827,cmd:[curl --unix-socket /tmp/health.sock http://unix/healthz],timeou
t:10s (15-Nov-2024 04:47:43.311) (total time: 2735ms):
Trace[1952747957]: [2.735205761s] [2.735205761s] END
I1115 04:47:46.092320 504031 prober.go:107] "Probe failed" probeType="Readiness" pod="manager/test3th1kafkaservice-kafka-0" podUID="7d9f9635-db0a-4e03-b59d-4e9e4ee3cee2" containerName="test3th1kafkaservice-kafka" probeR
esult="failure" output="dial tcp 172.18.138.16:8002: connect: connection refused"
I1115 04:47:46.229808 504031 prober.go:107] "Probe failed" probeType="Liveness" pod="manager/cronserviceinfra-849db64866-7cmgm" podUID="30bbc033-fd40-455b-a1f4-2c49790110c9" containerName="cronserviceinfra" probeResult=
"failure" output="dial tcp 172.18.138.0:8001: i/o timeout"
```
### What did you expect to happen?
After scheduling, the kubelet should pick up the pod and run SyncLoop ADD promptly.
### How can we reproduce it (as minimally and precisely as possible)?
Unknown. What factors cause the delay described above, and how can it be resolved?
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
v1.28.1
</details>
### Cloud provider
<details>
na
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| sig/scheduling,kind/support,needs-triage | low | Critical |
2,677,727,807 | pytorch | torch.compile + Huggingface GenerationMixin | ### 🐛 Describe the bug
error: https://gist.github.com/xmfan/7374fab55bdf73ba2501de15dd9de709
```
ValueError: The following `model_kwargs` are not used by the model: ['bos_token_id', 'pad_token_id', 'eos_token_id', 'max_length', 'do_sample', 'top_p', 'top_k', 'temperature', 'num_return_sequences', 'num_beams', 'length_penalty', 'repetition_penalty']
```
`GenerationMixin.generate` contains the implementation of most HF generative models' `forward` pass. During `GenerationMixin.generate`, `GenerationMixin._validate_model_kwargs` is called, and it raises an exception if not all model kwargs passed are used by the model: https://github.com/huggingface/transformers/blob/40821a247823b35d7ff10ba490d0d930fe8f5afa/src/transformers/generation/utils.py#L1380-L1384. This error only appears if we do a top-level compile (it works if we directly wrap the HF model class with torch.compile), see repro below.
Monkey-patching in some prints, the difference seems to come from different `model_kwargs`:
```python
# eager
model_kwargs={
'attention_mask': ...,
'd_vector': None,
'input_tokens': None,
'voice_dirs': None}
model_args={'attention_mask',
'encoder_attention_mask',
'encoder_hidden_states',
'head_mask',
'input_ids',
'inputs_embeds',
'kwargs',
'labels',
'output_attentions',
'output_hidden_states',
'past_key_values',
'position_ids',
'return_dict',
'token_type_ids',
'use_cache'}
# compile
model_kwargs={
'attention_mask': ...,
'bos_token_id': 1024,
'd_vector': None,
'do_sample': True,
'eos_token_id': 1025,
'input_tokens': None,
'length_penalty': 1.0,
'max_length': 650,
'num_beams': 1,
'num_return_sequences': 1,
'output_attentions': False,
'pad_token_id': 1025,
'repetition_penalty': 5.0,
'temperature': 0.75,
'top_k': 50,
'top_p': 0.85,
'voice_dirs': None}
model_args={
'attention_mask',
'encoder_attention_mask',
'encoder_hidden_states',
'head_mask',
'input_ids',
'inputs_embeds',
'kwargs',
'labels',
'output_attentions',
'output_hidden_states',
'past_key_values',
'position_ids',
'return_dict',
'token_type_ids',
'use_cache'}
```
Repro
install repo via frozen_requirements.txt: https://github.com/xmfan/coqui-ai-TTS/tree/empathy
```python
import torch
from TTS.api import TTS
import time
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)
# Run TTS
def fn():
tts.tts(text="Hello from XTTS2. I am being tested for the torch.compile User Empathy Day on Nov 20th 2024.", speaker_wav="en_sample.wav", language="en")
@torch.compile(backend="eager")
def warmup(its=5):
for i in range(its):
start = time.time()
fn()
duration = time.time() - start
print(f"warm up i={i} took {duration}s")
warmup()
```
### Versions
install the repo using `frozen_requirements.txt`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | high priority,triaged,oncall: pt2,module: dynamo | low | Critical |
2,677,727,954 | rust | ICE: `converted TraitPredicate` / ` invalid predicate filter for 'remap_gat_vars_and_recurse_into_nested_projections'` | <!--
[31mICE[0m: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/item_bounds.rs:129:25: 'internal error: entered unreachable code: invalid predicate filter for `remap_gat_vars_and_recurse_into_nested_projections`'', 'thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/item_bounds.rs:129:25: 'internal error: entered unreachable code: invalid predicate filter for `remap_gat_vars_and_recurse_into_nested_projections`''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
#[const_trait]
trait Foo3<T>
where
Self::Bar: Clone,
Self::Baz: Clone,
{
type Baz = T;
}
````
original:
````rust
//@ check-pass
//@ compile-flags: -Znext-solver
#![feature(const_trait_impl)]
#[const_trait]
pub trait Owo<X = <IntEnum as Uwu>::T> {}
#[const_trait]
trait Foo3<T>
where
Self::Bar: Clone,
Self::Baz: Clone,
{
type Bar = Vec<Self::Baz>;
type Baz = T;
//~^ ERROR the trait bound `T: Clone` is not satisfied
}
````
Version information
````
rustc 1.84.0-nightly (2d0ea7956 2024-11-20)
binary: rustc
commit-hash: 2d0ea7956c45de6e421fd579e2ded27be405dec6
commit-date: 2024-11-20
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/2d0ea7956c45de6e421fd579e2ded27be405dec6/compiler/rustc_hir_analysis/src/collect/predicates_of.rs#L1045-L1057
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0658]: `const_trait` is a temporary placeholder for marking a trait that is suitable for `const` `impls` and all default bodies as `const`, which may be removed or renamed in the future.
--> /tmp/icemaker_global_tempdir.3JnExAaPPyjX/rustc_testrunner_tmpdir_reporting.6QTzPvkyMQg2/mvce.rs:1:1
|
1 | #[const_trait]
| ^^^^^^^^^^^^^^
|
= note: see issue #67792 <https://github.com/rust-lang/rust/issues/67792> for more information
= help: add `#![feature(const_trait_impl)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-20; consider upgrading it if it is out of date
error[E0658]: associated type defaults are unstable
--> /tmp/icemaker_global_tempdir.3JnExAaPPyjX/rustc_testrunner_tmpdir_reporting.6QTzPvkyMQg2/mvce.rs:7:5
|
7 | type Baz = T;
| ^^^^^^^^^^^^^
|
= note: see issue #29661 <https://github.com/rust-lang/rust/issues/29661> for more information
= help: add `#![feature(associated_type_defaults)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-20; consider upgrading it if it is out of date
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.3JnExAaPPyjX/rustc_testrunner_tmpdir_reporting.6QTzPvkyMQg2/mvce.rs:8:2
|
8 | }
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.3JnExAaPPyjX/rustc_testrunner_tmpdir_reporting.6QTzPvkyMQg2/mvce.rs`
error[E0220]: associated type `Bar` not found for `Self`
--> /tmp/icemaker_global_tempdir.3JnExAaPPyjX/rustc_testrunner_tmpdir_reporting.6QTzPvkyMQg2/mvce.rs:4:11
|
4 | Self::Bar: Clone,
| ^^^ help: there is an associated type with a similar name: `Baz`
error: internal compiler error: compiler/rustc_hir_analysis/src/collect/predicates_of.rs:1051:26: converted TraitPredicate(<<Self as Foo3<T>>::Baz as std::clone::Clone>, polarity:Positive)
thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/predicates_of.rs:1051:26:
Box<dyn Any>
stack backtrace:
0: 0x7ca0476734fa - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h047e0d5e47450165
1: 0x7ca047e24fe2 - core::fmt::write::h02224197acc1c478
2: 0x7ca049283b11 - std::io::Write::write_fmt::haf49afba339ba238
3: 0x7ca047673352 - std::sys::backtrace::BacktraceLock::print::hb9734e16f0c36e6c
4: 0x7ca04767582a - std::panicking::default_hook::{{closure}}::h1a248142e8880b97
5: 0x7ca047675690 - std::panicking::default_hook::h922c0a7f60dece2a
6: 0x7ca0466faa65 - std[a682ed40c8c5f211]::panicking::update_hook::<alloc[b6fe8004693423c5]::boxed::Box<rustc_driver_impl[ccb010d605d21fce]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7ca047675f08 - std::panicking::rust_panic_with_hook::h7c200c3622363a4f
8: 0x7ca046734901 - std[a682ed40c8c5f211]::panicking::begin_panic::<rustc_errors[1430c921f1781f5b]::ExplicitBug>::{closure#0}
9: 0x7ca0467278c6 - std[a682ed40c8c5f211]::sys::backtrace::__rust_end_short_backtrace::<std[a682ed40c8c5f211]::panicking::begin_panic<rustc_errors[1430c921f1781f5b]::ExplicitBug>::{closure#0}, !>
10: 0x7ca046722e99 - std[a682ed40c8c5f211]::panicking::begin_panic::<rustc_errors[1430c921f1781f5b]::ExplicitBug>
11: 0x7ca04673e831 - <rustc_errors[1430c921f1781f5b]::diagnostic::BugAbort as rustc_errors[1430c921f1781f5b]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7ca046db2873 - rustc_middle[ab7e7e2c8044b8bd]::util::bug::opt_span_bug_fmt::<rustc_span[7ad9000666f00a6a]::span_encoding::Span>::{closure#0}
13: 0x7ca046d99e2a - rustc_middle[ab7e7e2c8044b8bd]::ty::context::tls::with_opt::<rustc_middle[ab7e7e2c8044b8bd]::util::bug::opt_span_bug_fmt<rustc_span[7ad9000666f00a6a]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7ca046d99cbb - rustc_middle[ab7e7e2c8044b8bd]::ty::context::tls::with_context_opt::<rustc_middle[ab7e7e2c8044b8bd]::ty::context::tls::with_opt<rustc_middle[ab7e7e2c8044b8bd]::util::bug::opt_span_bug_fmt<rustc_span[7ad9000666f00a6a]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7ca044eeefd0 - rustc_middle[ab7e7e2c8044b8bd]::util::bug::bug_fmt
16: 0x7ca0468f20d6 - rustc_hir_analysis[1b3d93177a6f01f5]::collect::predicates_of::implied_const_bounds
17: 0x7ca0471a1237 - rustc_query_impl[816e84f5405d42e3]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[816e84f5405d42e3]::query_impl::implied_const_bounds::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 16usize]>>
18: 0x7ca04717ea09 - <rustc_query_impl[816e84f5405d42e3]::query_impl::implied_const_bounds::dynamic_query::{closure#2} as core[d4b455c786e6b169]::ops::function::FnOnce<(rustc_middle[ab7e7e2c8044b8bd]::ty::context::TyCtxt, rustc_span[7ad9000666f00a6a]::def_id::DefId)>>::call_once
19: 0x7ca04801195a - rustc_query_system[6d3fbfb79102f8b4]::query::plumbing::try_execute_query::<rustc_query_impl[816e84f5405d42e3]::DynamicConfig<rustc_query_system[6d3fbfb79102f8b4]::query::caches::DefIdCache<rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[816e84f5405d42e3]::plumbing::QueryCtxt, false>
20: 0x7ca0471ad3a0 - rustc_query_impl[816e84f5405d42e3]::query_impl::implied_const_bounds::get_query_non_incr::__rust_end_short_backtrace
21: 0x7ca04800f34d - rustc_middle[ab7e7e2c8044b8bd]::query::plumbing::query_get_at::<rustc_query_system[6d3fbfb79102f8b4]::query::caches::DefIdCache<rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 16usize]>>>
22: 0x7ca04690494f - rustc_hir_analysis[1b3d93177a6f01f5]::check::compare_impl_item::check_type_bounds
23: 0x7ca048ff035f - rustc_hir_analysis[1b3d93177a6f01f5]::check::check::check_item_type
24: 0x7ca044d52c12 - rustc_hir_analysis[1b3d93177a6f01f5]::check::wfcheck::check_well_formed
25: 0x7ca0486fea67 - rustc_query_impl[816e84f5405d42e3]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[816e84f5405d42e3]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 1usize]>>
26: 0x7ca0486fed40 - rustc_query_system[6d3fbfb79102f8b4]::query::plumbing::try_execute_query::<rustc_query_impl[816e84f5405d42e3]::DynamicConfig<rustc_data_structures[5bd100dcdab3ae53]::vec_cache::VecCache<rustc_span[7ad9000666f00a6a]::def_id::LocalDefId, rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[6d3fbfb79102f8b4]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[816e84f5405d42e3]::plumbing::QueryCtxt, false>
27: 0x7ca0486fea46 - rustc_query_impl[816e84f5405d42e3]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
28: 0x7ca0486ff7ec - rustc_hir_analysis[1b3d93177a6f01f5]::check::wfcheck::check_mod_type_wf
29: 0x7ca0486ff60b - rustc_query_impl[816e84f5405d42e3]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[816e84f5405d42e3]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 1usize]>>
30: 0x7ca0489a273d - rustc_query_system[6d3fbfb79102f8b4]::query::plumbing::try_execute_query::<rustc_query_impl[816e84f5405d42e3]::DynamicConfig<rustc_query_system[6d3fbfb79102f8b4]::query::caches::DefaultCache<rustc_span[7ad9000666f00a6a]::def_id::LocalModDefId, rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[816e84f5405d42e3]::plumbing::QueryCtxt, false>
31: 0x7ca0489a24d8 - rustc_query_impl[816e84f5405d42e3]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
32: 0x7ca0480504a4 - rustc_hir_analysis[1b3d93177a6f01f5]::check_crate
33: 0x7ca04866eb4a - rustc_interface[36be2150eace64c8]::passes::run_required_analyses
34: 0x7ca048662ede - rustc_interface[36be2150eace64c8]::passes::analysis
35: 0x7ca048662eaf - rustc_query_impl[816e84f5405d42e3]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[816e84f5405d42e3]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 1usize]>>
36: 0x7ca048dde12e - rustc_query_system[6d3fbfb79102f8b4]::query::plumbing::try_execute_query::<rustc_query_impl[816e84f5405d42e3]::DynamicConfig<rustc_query_system[6d3fbfb79102f8b4]::query::caches::SingleCache<rustc_middle[ab7e7e2c8044b8bd]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[816e84f5405d42e3]::plumbing::QueryCtxt, false>
37: 0x7ca048ddde0e - rustc_query_impl[816e84f5405d42e3]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
38: 0x7ca048d06480 - rustc_interface[36be2150eace64c8]::interface::run_compiler::<core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>, rustc_driver_impl[ccb010d605d21fce]::run_compiler::{closure#0}>::{closure#1}
39: 0x7ca048d4f5e0 - std[a682ed40c8c5f211]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[36be2150eace64c8]::util::run_in_thread_with_globals<rustc_interface[36be2150eace64c8]::util::run_in_thread_pool_with_globals<rustc_interface[36be2150eace64c8]::interface::run_compiler<core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>, rustc_driver_impl[ccb010d605d21fce]::run_compiler::{closure#0}>::{closure#1}, core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>>::{closure#0}, core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>>
40: 0x7ca048d4f2fd - <<std[a682ed40c8c5f211]::thread::Builder>::spawn_unchecked_<rustc_interface[36be2150eace64c8]::util::run_in_thread_with_globals<rustc_interface[36be2150eace64c8]::util::run_in_thread_pool_with_globals<rustc_interface[36be2150eace64c8]::interface::run_compiler<core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>, rustc_driver_impl[ccb010d605d21fce]::run_compiler::{closure#0}>::{closure#1}, core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>>::{closure#0}, core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[d4b455c786e6b169]::result::Result<(), rustc_span[7ad9000666f00a6a]::ErrorGuaranteed>>::{closure#1} as core[d4b455c786e6b169]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x7ca048d4eab9 - std::sys::pal::unix::thread::Thread::new::thread_start::h6a08e28522cda876
42: 0x7ca04a61439d - <unknown>
43: 0x7ca04a69949c - <unknown>
44: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (2d0ea7956 2024-11-20) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [implied_const_bounds] computing the implied `~const` bounds for `Foo3::Baz`
#1 [check_well_formed] checking that `Foo3` is well-formed
end of query stack
error: aborting due to 5 previous errors
Some errors have detailed explanations: E0220, E0601, E0658.
For more information about an error, try `rustc --explain E0220`.
```
</p>
</details>
<!--
query stack:
#0 [implied_const_bounds] computing the implied `~const` bounds for `Foo3::Baz`
#1 [check_well_formed] checking that `Foo3` is well-formed
-->
@rustbot label +F-const_trait_impl
| I-ICE,T-compiler,C-bug,F-const_trait_impl,S-bug-has-test | low | Critical |
2,677,729,599 | electron | [Docs] Page that explains once and for all why a good issue reproduction is necessary | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
I've seen many GitHub issues recently where the author either
* Put a complicated Github repository as the reproduction,
* Posted small code blocks that weren't executable on their own as the reproduction, or
* Didn't post a reproduction (e.g. because they thought it would be trivial for maintainers to build one in 15 minutes)
This is annoying. It wastes maintainers' time.
Every time, a maintainer will explain over and over again why a repro through Fiddle is necessary for the team to look into the issue.
Often, people will argue back that their reproduction is "simple enough" or something similar.
### Proposed Solution
Write a doc explaining once and for all why a repro through Fiddle is necessary.
Then everyone can link to it when someone doesn't post a reproduction or tries to argue.
@dsanders11 recently put it very nicely: "We're not trying to be difficult, we just have a lot of issues to triage and prioritize those with a minimal repro case."
Points to mention:
* Why provide a good reproduction?
* The Electron team is busy. If you don't supply a simple repro, they won't have time to investigate your issue.
* If you want someone to spend time on *your* issue, make it as easy as possible for them.
* The easier you make it for maintainers to reproduce your issue, the higher the chances that your issue will be solved.
* If you want your issue to be fixed, put in the small amount of time that it takes to make the issue easily reproducible for maintainers.
* In simplifying the issue to a small reproducible example, you might sometimes discover bugs in non-Electron parts of your code that lead to the faulty behavior (instead of it being an Electron issue). Then you'll have a solution for your problem sooner.
* Cloning arbitrary repos might expose maintainers to security exploits / vulnerable code. They don't have time to check multi-file repositories for security vulnerabilities.
* A good reproduction ensures everyone is talking about the same issue.
* Issues that don't provide a repro through Fiddle will be closed.
* More inspiration in the comments to issue 45251 (not linked because I don't want the author to feel bad)
* What is a good reproduction?
* A good reproduction only contains the minimum amount of code necessary to reproduce the issue. Remove any code that is not absolutely necessary to reproduce the issue. (This makes debugging easier as there is less code for which the maintainers have to check whether it causes the issue.)
Below that, I'd add an FAQ-style list of responses to common objections (e.g. "the reproduction is simple enough" or "you're the maintainer and responsible for fixing this—build one yourself").
Afterwards, the link could be placed in the following places:
* In the issue template as a comment
* In the automated message sent when the `blocked/needs-repro` tag is added
### Alternatives Considered
#### 1) Maintainers summarize the same points over and over again.
1) This is time-consuming.
2) Every comment on this topic that's written individually will be worse than a comprehensive doc that provides an exhaustive answer once and for all.
#### 2) The automated message from the bot that is added when the `blocked/needs-repro` tag is added
While this is helpful, it happens very often that people argue with the message, saying that their reproduction is simple enough or similar. Then this wastes maintainers' time again.
I suggest having an FAQ at the bottom of the document with responses to these common objections.
### Additional Information
**Inspiration:** https://levels.io/contact/ ([archived version](https://web.archive.org/web/20241116010936/https://levels.io/contact/))
**Issues that inspired this suggestion:** 44264, 44690 (not linked because I don't want the authors to feel bad) | enhancement :sparkles: | low | Critical |
2,677,744,068 | pytorch | LoweringException: AttributeError: 'Constant' object has no attribute 'get_name' | ### 🐛 Describe the bug
repros:
```
import torch
from torchvision.models.regnet import regnet_y_8gf
device = "cuda"
model = regnet_y_8gf(weights=None, progress=True, num_classes=256).to(device)
args = (torch.randn(1, 3, 224, 224, device=device),)
_ = model(*args)
ep = torch.export.export(model, args, strict=False)
_ = torch._inductor.aot_compile(ep.module(), args)
```
error:
```
/torch/_inductor/ir.py", line 3524, in realize_into
V.graph.mark_buffer_mutated(dst.get_name())
torch._inductor.exc.LoweringException: AttributeError: 'Constant' object has no attribute 'get_name'
target: aten.copy_.default
args[0]: Constant(dtype=torch.int64, device=device(type='cuda', index=0), value=1)
args[1]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int64,
def inner_fn(index):
tmp0 = ops.constant(2, torch.int64)
return tmp0
,
ranges=[],
origin_node=full_default,
origins=OrderedSet([full_default])
)
))
```
### Versions
trunk
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: export | low | Critical |
2,677,760,841 | material-ui | [docs][TypeScript] Add examples of how to extend types from MUI components | ### Related page
https://mui.com/material-ui/
### Kind of issue
Other
### Issue description
Let me start by saying that I am using a translator and I hope my message will be understood.
I have always had problems with creating custom components based on MUI components. The problem is the typing of such components.
List of examples I would like to see:
1. Typing of components that have `component` props.
   - I don't understand how to take all the props of, say, the `Box` component, add some of my own (required and optional) props to them, and also accept the props of the element passed via `component` when `component` is specified.
2. Typing of components that have `variant` in their generics (for example, if we take `TextFieldProps`).
3. Typing of more complex components such as `Button`.
4. How to type it all if we need to wrap everything in `forwardRef`.
- How to define the element type for `ref` (HTMLDivElement, HTMLButtonElement, etc.).
- It's also worth considering this type if a component is passed in `component`.
5. Also, for all of the above I would like to see an example where we remove some props from the component and/or replace them with our own. For example, in a `Chip` component I would like to be able to pass `children` and have them placed into the `label` prop, with the `label` prop itself removed from the component's prop typings (I don't know how to explain it better).
(Important: The point is not to change the theme of some component, but to add your logic to it.)
If you could add a new page and/or to the component page examples of how to create a component based on that component - that would be awesome.
I realize it would take a lot of time to add this to the documentation, so I would like to see these examples under my Issue.
Thank you for your work and efforts!
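For what it's worth, the prop-remapping pattern asked about in point 5 can be sketched without any MUI types at all — the names below (`ChipLikeProps`, `renderMyChip`) are made up for illustration, using `Omit` plus an intersection to drop `label` from the public API and accept `children` instead:

```typescript
// Base props, standing in for a library component's props (hypothetical).
type ChipLikeProps = { label: string; color?: string };

// Our wrapper's props: drop `label`, require `children` instead.
type MyChipProps = Omit<ChipLikeProps, "label"> & { children: string };

// Map our props back to the base component's props: children -> label.
function renderMyChip(props: MyChipProps): ChipLikeProps {
  const { children, ...rest } = props;
  return { ...rest, label: children };
}

console.log(renderMyChip({ children: "hi", color: "red" }));
// -> { color: 'red', label: 'hi' }
```

The same `Omit<BaseProps, "x"> & { ... }` shape extends to the other requests (adding mandatory props, wrapping in `forwardRef`), though the polymorphic `component` prop needs generics on top of it.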
### Context
_No response_
**Search keywords**: no | typescript,support: docs-feedback | low | Minor |
2,677,769,282 | tauri | [bug] run android dev error On Win10 | ### Describe the bug
not work
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
npm run tauri android init
npm run tauri android dev
```
### Stack trace
```text
Compiling quote v1.0.36
Compiling libc v0.2.155
Compiling cfg-if v1.0.0
Compiling hashbrown v0.14.5
Compiling rand_core v0.5.1
Compiling rand v0.8.5
Compiling windows-targets v0.52.5
Compiling equivalent v1.0.1
error[E0463]: can't find crate for `core`
|
= note: the `x86_64-linux-android` target may not be installed
= help: consider downloading the target with `rustup target add x86_64-linux-android`
For more information about this error, try `rustc --explain E0463`.
error: could not compile `cfg-if` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
error: could not compile `libc` (lib) due to 1 previous error
error: script "dev" exited with code 255
`Failed to run `cargo build`: command ["cargo", "build", "--package", "mobile-rss", "--manifest-path", "E:\\test\\tauri-app\\src-tauri\\Cargo.toml", "--target", "x86_64-linux-android", "--lib"] exited with code 101
Error `Failed to run `cargo build`: command ["cargo", "build",
```
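The log itself suggests the likely fix — the Rust standard library for the Android targets is not installed. A setup sketch (the first target below is the one named in the error; the other three are the usual Android targets — verify against the Tauri prerequisites docs for your version):

```shell
# Install the Rust std library for the Android targets the build uses.
rustup target add x86_64-linux-android aarch64-linux-android armv7-linux-androideabi i686-linux-android
```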
### Additional context
_No response_ | type: bug,platform: Windows,status: needs triage | low | Critical |
2,677,835,199 | PowerToys | Power-Toys automatically reported error when turn on my computer | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
[PowerToysReport_2024-11-21-06-06-27.zip](https://github.com/user-attachments/files/17839222/PowerToysReport_2024-11-21-06-06-27.zip)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,677,867,137 | pytorch | DISABLED test_while_loop_schema_gen (__main__.TestHopSchema) | Platforms: asan, linux, rocm, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_while_loop_schema_gen&suite=TestHopSchema&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33289861152).
Over the past 3 hours, it has been determined flaky in 128 workflow(s) with 256 failures and 128 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_while_loop_schema_gen`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `functorch/test_control_flow.py`
cc @clee2000 @zou3519 @Chillee @samdow @kshitij12345 | triaged,module: flaky-tests,skipped,module: functorch | medium | Critical |
2,677,868,148 | kubernetes | Add PermitExtensions in scheduler to control when to admit the Pod binding process | ### What would you like to be added?
Let me introduce our user story first:
We have a model cache platform built on top of Kubernetes. In model inference, when a Pod binds to a node successfully but has not started yet, we need to hold the Pod back from starting and sync the model weights the inference Pod needs from other nodes. To achieve this, we have to inject an initContainer that blocks the Pod from starting, and only admit the initContainer once the model pulling has finished.
This is **invasive** because we're modifying the Pod YAML, and if the initContainer needs to watch CRDs, we may also need to update the roles of the default ServiceAccount with the corresponding permissions — which is exactly what we're facing right now.
We first thought of blocking the sandbox creation in kubelet after the Pod is scheduled successfully; however, this may leave Pod resources reserved in the scheduler but not physically consumed if the Pod hangs forever for some reason.
Then we came up with another idea: extend the existing Permit plugin in the scheduler. Right now, a Permit plugin can return a `Wait` status in the scheduling cycle, and then in the binding cycle we block in `waitOnPermit` until the condition is ready.
One example is the coscheduling plugin, where a scheduled Pod can be blocked in the binding cycle until all its siblings are ready.
However, based on my understanding, there is a limitation: once a Pod is waiting in the binding cycle, we can only activate it from the scheduling cycle by iterating over the waiting pods and returning a `Success` status. What if the Pod could be activated by other CRD events rather than a new scheduling pass?
So here comes a new proposal: a new interface, `PermitExtensions`:
```go
type PermitFn func(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) error
type ClusterEventForPermission struct {
Event ClusterEvent
PermitFn PermitFn
}
type PermitExtensions interface {
Plugin
RegisterPermissionEvents(context.Context) ([]ClusterEventForPermission, error)
}
```
The logic is quite similar to `EnqueueExtension`. First, we call `RegisterPermissionEvents` to register events and keep watching them at runtime. Once an incoming event matches a registered one, the corresponding `PermitFn` iterates over the waiting pods to decide whether each Pod is ready; once a Pod is ready, it returns a `Success` status and that Pod's binding cycle is unblocked.
In our case, we have a CRD named `Torrent` that records whether the model syncing task is done; once it completes, the Permit plugin's `PermitFn` is triggered to decide whether we're ready to bind the Pod.
No existing flows will be broken and there are no changes by default; this just offers a knob for users to decide when to admit a Pod binding.
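To make the intended control flow concrete, here is a minimal, dependency-free sketch in Python (illustrative only — `WaitingPod`, the event dict shape, and the `torrent_permit_fn` handler are assumptions, not real scheduler code). A pod parked by a Permit plugin blocks its binding cycle on an event; a registered `PermitFn`, fired by a CRD event, admits it:

```python
import threading

class WaitingPod:
    """Stand-in for a pod parked by a Permit plugin with a Wait status."""
    def __init__(self, name):
        self.name = name
        self.status = None
        self._admitted = threading.Event()

    def allow(self):
        # Called by a PermitFn once its condition is met.
        self.status = "Success"
        self._admitted.set()

    def wait_on_permit(self, timeout):
        # The binding cycle blocks here, like waitOnPermit in the scheduler.
        self._admitted.wait(timeout)
        return self.status

waiting_pods = {}

def torrent_permit_fn(event):
    # Registered via RegisterPermissionEvents: iterate over the waiting pods
    # and admit those whose model-syncing Torrent has completed.
    for pod in list(waiting_pods.values()):
        if event["model"] == pod.name and event["phase"] == "Completed":
            pod.allow()

pod = WaitingPod("llama-7b")
waiting_pods[pod.name] = pod

bound = []
def binding_cycle():
    if pod.wait_on_permit(timeout=5) == "Success":
        bound.append(pod.name)

t = threading.Thread(target=binding_cycle)
t.start()
torrent_permit_fn({"model": "llama-7b", "phase": "Completed"})  # Torrent event fires
t.join()
print(bound)  # ['llama-7b']
```

The key difference from today's Permit flow is only the trigger: admission comes from an arbitrary watched event instead of a later scheduling cycle.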
~~Then I came up with idea like maybe we can add some gates just like schedulingGates which will block the scheduling process, the new added gates (let's say xGates) will block the kubelet process, like creating the sandbox. Until the Pod holds a nodeName and no xGates exist, the kubelet will do his job.~~
~~The risk is since Pod is scheduled, the resource is reserved, but physically not consumed, so if the postScheduling process blocks, resource is wasted, what we have to do next is recreate the Pod (pod.nodeName is unchangeable now if set, maybe we can unbind the Pod and don't change the nodeName anymore until `nodeName != "" && xGates == []`, this is a big change I think).~~
~~Another approach would be introduce another logic process in scheduler, and hold the bind process until the condition is ready, quite like the DRA, in DRA, we have a lot of interactions between the scheduler and the dra controller, until everything is ready, open fire. DRA is a huge work, I hope not to introduce more CRDs, so maybe we can add new gates like bindingGates here? Extending the schedulingGates is ambiguous I think.~~
These are some rough thoughts; I would like to hear advice or suggestions. Thanks in advance!
cc @liggitt @SergeyKanzhelev @kubernetes/sig-scheduling-misc
/kind feature
/sig node
/sig scheduling
### Why is this needed?
Once a Pod is assumed to be bound to a node, or is already bound, we may have more work to do before the Pod starts; here we introduce a mechanism to make that happen before the Pod binding. | sig/scheduling,sig/node,kind/feature,needs-triage,wg/serving | medium | Critical |
2,677,869,856 | pytorch | SEGV in `torch.native_batch_norm` | ### 🐛 Describe the bug
Under specific inputs, `torch.native_batch_norm` triggered a crash.
```python
import torch
input = torch.full((8, 3, 7, 3, 7,), 0, dtype=torch.float32)
weight = None
bias = None
running_mean = None
running_var = None
training = False
momentum = 2.71828
eps = 9.87654e+09
output = torch.native_batch_norm(input, weight, bias, running_mean, running_var, training, momentum, eps)
```
ASAN report:
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==2761360==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fa9e37f6d2d bp 0x7ffc5f3c1fb0 sp 0x7ffc5f3c1fa0 T0)
==2761360==The signal is caused by a READ memory access.
==2761360==Hint: address points to the zero page.
#0 0x7fa9e37f6d2d in at::TensorAccessor<float const, 1ul, at::DefaultPtrTraits, long>::operator[](long) /mnt/pytorch-2.5.0/aten/src/ATen/core/TensorAccessor.h:104
#1 0x7fa9f5674604 in batch_norm_cpu_collect_linear_and_constant_terms<float, float> /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/batch_norm_kernel.cpp:62
#2 0x7fa9f5644dba in batch_norm_cpu_contiguous_impl<float> /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/batch_norm_kernel.cpp:89
#3 0x7fa9f56315ae in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/batch_norm_kernel.cpp:1333
#4 0x7fa9f5632aa5 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/batch_norm_kernel.cpp:1333
#5 0x7fa9f56358e3 in batch_norm_cpu_kernel /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/batch_norm_kernel.cpp:1333
#6 0x7fa9e3a936c2 in void at::native::DispatchStub<void (*)(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double), at::native::batch_norm_cpu_stub_DECLARE_DISPATCH_type>::operator()<at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool&, double&>(c10::DeviceType, at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool&, double&) /mnt/pytorch-2.5.0/aten/src/ATen/native/DispatchStub.h:233
#7 0x7fa9e3a40c76 in std::tuple<at::Tensor, at::Tensor, at::Tensor> at::native::batch_norm_cpu_transform_input_template<float, float>(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, at::Tensor&) /mnt/pytorch-2.5.0/aten/src/ATen/native/Normalization.cpp:151
#8 0x7fa9e3a1c810 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/Normalization.cpp:775
#9 0x7fa9e3a21376 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/Normalization.cpp:775
#10 0x7fa9e3a22a66 in at::native::batch_norm_cpu_out(at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double, at::Tensor&, at::Tensor&, at::Tensor&) /mnt/pytorch-2.5.0/aten/src/ATen/native/Normalization.cpp:775
#11 0x7fa9e3a25eb0 in at::native::batch_norm_cpu(at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) /mnt/pytorch-2.5.0/aten/src/ATen/native/Normalization.cpp:850
#12 0x7fa9e78dccad in wrapper_CPU__native_batch_norm /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:8834
#13 0x7fa9e7d21e21 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#14 0x7fa9e7d21e21 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#15 0x7fa9e624d370 in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool&&, double&&, double&&) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x14b29370)
#16 0x7fa9e5fc0724 in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::Dispatcher::redispatch<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double>(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double)> const&, c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) const (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x1489c724)
#17 0x7fa9e5cd21dc in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double)>::redispatch(c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#18 0x7fa9e5cd21dc in at::_ops::native_batch_norm::redispatch(c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_1.cpp:4163
#19 0x7fa9ef4c3c60 in at::redispatch::native_batch_norm(c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:5452
#20 0x7fa9ef2c411f in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_1.cpp:11314
#21 0x7fa9ef2c4fab in native_batch_norm /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_1.cpp:11315
#22 0x7fa9ef442f2c in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#23 0x7fa9ef442f2c in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#24 0x7fa9e624d370 in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool&&, double&&, double&&) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x14b29370)
#25 0x7fa9e5cd1823 in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::KernelFunction::call<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#26 0x7fa9e5cd1823 in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::Dispatcher::call<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double>(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double)> const&, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#27 0x7fa9e5cd1823 in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double)>::call(at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#28 0x7fa9e5cd1823 in at::_ops::native_batch_norm::call(at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_1.cpp:4156
#29 0x7faa2a3d580c in at::native_batch_norm(at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, bool, double, double) (/mnt/pytorch-2.5.0/torch/lib/libtorch_python.so+0x22ec80c)
#30 0x7faa2a2d8b53 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_0.cpp:5324
#31 0x7faa2a2d9703 in THPVariable_native_batch_norm /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_0.cpp:5326
#32 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#33 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#34 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
#35 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#36 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#37 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#38 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#39 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#40 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#41 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#42 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#43 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#44 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#45 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#46 0x7faa32df2d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#47 0x7faa32df2e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#48 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /mnt/pytorch-2.5.0/aten/src/ATen/core/TensorAccessor.h:104 in at::TensorAccessor<float const, 1ul, at::DefaultPtrTraits, long>::operator[](long)
==2761360==ABORTING
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: error checking,triaged,module: sanitizers,module: edge cases | low | Critical |
2,677,904,732 | pytorch | Stack trace from pytest is very far away and far too hard to find on some tests | ### 🐛 Describe the bug
Sample:
```
2024-11-20T18:12:40.4550185Z =================================== FAILURES ===================================
2024-11-20T18:12:40.4550910Z _ DynamicShapesCppWrapperCpuTests.test_linear_with_pointwise_batch_size_384_in_features_196_out_features_385_bias_True_epilogue_hardsigmoid_cpu_bfloat16_dynamic_shapes_cpp_wrapper _
2024-11-20T18:12:40.4551025Z Traceback (most recent call last):
2024-11-20T18:12:40.4551509Z File "/var/lib/jenkins/workspace/test/inductor/test_cpu_select_algorithm.py", line 321, in test_linear_with_pointwise
2024-11-20T18:12:40.4551789Z self.assertEqual(counters["inductor"]["cpp_epilogue_fusion_counter"], 1)
2024-11-20T18:12:40.4552278Z File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/testing/_internal/common_utils.py", line 3977, in assertEqual
2024-11-20T18:12:40.4552413Z raise error_metas.pop()[0].to_error(
2024-11-20T18:12:40.4552546Z AssertionError: Scalars are not equal!
2024-11-20T18:12:40.4552550Z
2024-11-20T18:12:40.4552649Z Expected 1 but got 0.
2024-11-20T18:12:40.4552749Z Absolute difference: 1
2024-11-20T18:12:40.4552866Z Relative difference: 1.0
2024-11-20T18:12:40.4552872Z
2024-11-20T18:12:40.4553073Z To execute this test, run the following from the base repo dir:
2024-11-20T18:12:40.4553991Z python test/inductor/test_cpu_cpp_wrapper.py DynamicShapesCppWrapperCpuTests.test_linear_with_pointwise_batch_size_384_in_features_196_out_features_385_bias_True_epilogue_hardsigmoid_cpu_bfloat16_dynamic_shapes_cpp_wrapper
2024-11-20T18:12:40.4553998Z
2024-11-20T18:12:40.4554243Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2024-11-20T18:12:40.4554452Z ----------------------------- Captured stdout call -----------------------------
2024-11-20T18:12:40.4554546Z inline_call []
2024-11-20T18:12:40.4554706Z stats [('calls_captured', 2), ('unique_graphs', 1)]
2024-11-20T18:12:40.4555855Z inductor [('pattern_matcher_nodes', 8), ('benchmarking.TritonBenchmarker.benchmark_cpu', 3), ('pattern_matcher_count', 2), ('fxgraph_cache_bypass', 1), ('select_algorithm_precompile', 1), ('benchmarking.TritonBenchmarker.benchmark', 1), ('select_algorithm_autotune', 1), ('cpp_epilogue_fusion_counter', 0)]
2024-11-20T18:12:40.4555980Z aot_autograd [('total', 1), ('ok', 1)]
2024-11-20T18:12:40.4556178Z ----------------------------- Captured stderr call -----------------------------
2024-11-20T18:12:40.4556307Z AUTOTUNE linear_unary(384x196, 385x196, 385)
2024-11-20T18:12:40.4556430Z cpp_packed_gemm_0 0.2223 ms 100.0%
2024-11-20T18:12:40.4556542Z _linear_pointwise 650.3270 ms 0.0%
2024-11-20T18:12:40.4556930Z SingleProcess AUTOTUNE benchmarking takes 0.3771 seconds and 3.6158 seconds precompiling for 2 choices
2024-11-20T18:12:40.4557206Z ----------------------------- Captured stdout call -----------------------------
2024-11-20T18:12:40.4557390Z inline_call []
2024-11-20T18:12:40.4557571Z stats [('calls_captured', 2), ('unique_graphs', 1)]
2024-11-20T18:12:40.4557685Z aot_autograd [('total', 1), ('ok', 1)]
2024-11-20T18:12:40.4558845Z inductor [('pattern_matcher_nodes', 8), ('benchmarking.TritonBenchmarker.benchmark_cpu', 3), ('pattern_matcher_count', 2), ('fxgraph_cache_bypass', 1), ('select_algorithm_precompile', 1), ('benchmarking.TritonBenchmarker.benchmark', 1), ('select_algorithm_autotune', 1), ('cpp_epilogue_fusion_counter', 0)]
2024-11-20T18:12:40.4559079Z ----------------------------- Captured stderr call -----------------------------
2024-11-20T18:12:40.4559851Z /opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/utils/_config_module.py:321: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value {}
2024-11-20T18:12:40.4559954Z warnings.warn(
2024-11-20T18:12:40.4560095Z AUTOTUNE linear_unary(384x196, 385x196, 385)
2024-11-20T18:12:40.4560206Z cpp_packed_gemm_1 0.2327 ms 100.0%
2024-11-20T18:12:40.4560319Z _linear_pointwise 646.5265 ms 0.0%
2024-11-20T18:12:40.4560709Z SingleProcess AUTOTUNE benchmarking takes 0.3742 seconds and 3.5428 seconds precompiling for 2 choices
2024-11-20T18:12:40.4560909Z ----------------------------- Captured stdout call -----------------------------
2024-11-20T18:12:40.4561024Z inline_call []
2024-11-20T18:12:40.4561171Z stats [('calls_captured', 2), ('unique_graphs', 1)]
2024-11-20T18:12:40.4561301Z aot_autograd [('total', 1), ('ok', 1)]
2024-11-20T18:12:40.4562456Z inductor [('pattern_matcher_nodes', 8), ('benchmarking.TritonBenchmarker.benchmark_cpu', 3), ('pattern_matcher_count', 2), ('fxgraph_cache_bypass', 1), ('select_algorithm_precompile', 1), ('benchmarking.TritonBenchmarker.benchmark', 1), ('select_algorithm_autotune', 1), ('cpp_epilogue_fusion_counter', 0)]
2024-11-20T18:12:40.4562696Z ----------------------------- Captured stderr call -----------------------------
2024-11-20T18:12:40.4563440Z /opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/utils/_config_module.py:321: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value {}
2024-11-20T18:12:40.4563547Z warnings.warn(
2024-11-20T18:12:40.4563693Z AUTOTUNE linear_unary(384x196, 385x196, 385)
2024-11-20T18:12:40.4563804Z cpp_packed_gemm_2 0.2212 ms 100.0%
2024-11-20T18:12:40.4563931Z _linear_pointwise 345.2910 ms 0.1%
2024-11-20T18:12:40.4564305Z SingleProcess AUTOTUNE benchmarking takes 0.3721 seconds and 3.5511 seconds precompiling for 2 choices
2024-11-20T18:12:40.4565008Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/inductor.test_cpu_cpp_wrapper/inductor.test_cpu_cpp_wrapper-9bab65762a70856a.xml -
2024-11-20T18:12:40.4565168Z =========================== short test summary info ============================
2024-11-20T18:12:40.4566272Z FAILED [12.0949s] inductor/test_cpu_cpp_wrapper.py::DynamicShapesCppWrapperCpuTests::test_linear_with_pointwise_batch_size_384_in_features_196_out_features_385_bias_True_epilogue_hardsigmoid_cpu_bfloat16_dynamic_shapes_cpp_wrapper - AssertionError: Scalars are not equal!
2024-11-20T18:12:40.4566278Z
2024-11-20T18:12:40.4566378Z Expected 1 but got 0.
2024-11-20T18:12:40.4566482Z Absolute difference: 1
2024-11-20T18:12:40.4566601Z Relative difference: 1.0
2024-11-20T18:12:40.4566605Z
2024-11-20T18:12:40.4566805Z To execute this test, run the following from the base repo dir:
2024-11-20T18:12:40.4567726Z python test/inductor/test_cpu_cpp_wrapper.py DynamicShapesCppWrapperCpuTests.test_linear_with_pointwise_batch_size_384_in_features_196_out_features_385_bias_True_epilogue_hardsigmoid_cpu_bfloat16_dynamic_shapes_cpp_wrapper
2024-11-20T18:12:40.4567731Z
2024-11-20T18:12:40.4567974Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
I'm not sure why there are so many captured stdout/stderr entries here, but they push the backtrace far, far away.
### Versions
main
cc @seemethere @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu | module: ci,triaged,oncall: pt2,module: log classifier | low | Critical |
2,677,917,798 | ui | [bug]: DropdownMenuContent & DropdownMenuItem have a property type error | ### Describe the bug
Type '{ children: Element[]; align: string; }' is not assignable to type 'IntrinsicAttributes & RefAttributes<never>'.
Property 'children' does not exist on type 'IntrinsicAttributes & RefAttributes<never>'.
### Affected component/components
Dropdown Menu
### How to reproduce
1. Install the Dropdown Menu component in a Next.js 15 project with React 19 (19.0.0-rc-66855b96-20241106).
2. Install next-themes and add a theme provider.
3. Create the mode toggle provided by the shadcn UI library; it then shows an error on DropdownMenuContent and DropdownMenuItem.
4. When you open "/components/ui/dropdown-menu.tsx", it also shows some other type errors.
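A commonly suggested workaround for this class of error, assuming the `RefAttributes<never>` comes from mismatched React type packages while React 19 is still a release candidate, is to pin the type packages via npm `overrides` in `package.json` (the exact version strings below are illustrative and should match your installed RC):

```json
{
  "overrides": {
    "@types/react": "npm:types-react@19.0.0-rc.1",
    "@types/react-dom": "npm:types-react-dom@19.0.0-rc.1"
  }
}
```

After adding the overrides, delete `node_modules` and the lockfile and reinstall so the resolver picks up the pinned types.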
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
│ > frontend@0.1.0 build
│ > next build
│
│ ▲ Next.js 15.0.3
│
│ Creating an optimized production build ...
│ ✓ Compiled successfully
│ Linting and checking validity of types .Failed to comp
│ ile.
│
│ ./src/components/mode-toggle.tsx:27:14
│ Type error: Type '{ children: Element[]; align: string; }'
│ is not assignable to type 'IntrinsicAttributes & RefAttri
│ butes<never>'.
│ Property 'children' does not exist on type 'IntrinsicAtt
│ ributes & RefAttributes<never>'.
│
│ 25 | </Button>
│ 26 | </DropdownMenuTrigger>
│ > 27 | <DropdownMenuContent align="end">
│ | ^
│ 28 | <DropdownMenuItem onClick={() => se
│ tTheme("light")}>
│ 29 | Light
│ 30 | </DropdownMenuItem>
│ npm error Lifecycle script `build` failed with error:
│ npm error code 1
│ npm error path /Users/rohanchaudhary/Desktop/bondra/apps/f
│ rontend
│ npm error workspace frontend@0.1.0
│ npm error location /Users/rohanchaudhary/Desktop/bondra/ap
│ ps/frontend
│ npm error command failed
│ npm error command sh -c next build
│
│ command finished with error: command (/Users/rohanchaudhar
│ y/Desktop/bondra/apps/frontend) /opt/homebrew/bin/npm run
│ build exited (1)
```
### System Info
```bash
Browser: Brave (Version 1.73.91)
Code Editor: Visual Studio Code (Version 1.95)
System: Mac OS Sequoia (Version 15.1)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,678,023,592 | svelte | Fetching an object from a SvelteMap, modifying it, then setting it back does not have obvious behavior | ### Describe the bug
For `$state([])` you can get an object by index, change it, and then set it back, and this reactively changes. My assumption was that the same behavior would apply to a `$state(new SvelteMap())`, which does not seem to be the case. If you get a value that is an object, modify it, and then set it back, no reactivity seems to happen. If I set a new object with the same key, things do reactively change. Refer to the playground link for an example.
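A plain-JavaScript analogy (not Svelte's actual implementation) of why setting back the same object reference can be a silent no-op if the store compares values by identity:

```javascript
// Hypothetical reactive map that only notifies subscribers when the
// stored reference actually changes (identity comparison on set).
class NotifyingMap extends Map {
  set(key, value) {
    const changed = this.get(key) !== value; // same object => no change seen
    super.set(key, value);
    if (changed) console.log(`notify: ${key}`);
    return this;
  }
}

const m = new NotifyingMap();
m.set('a', { count: 0 });   // prints "notify: a"

const obj = m.get('a');
obj.count += 1;
m.set('a', obj);            // same reference: no notification

m.set('a', { ...obj });     // new object: prints "notify: a"
```

Under this model, replacing the value with a fresh object (e.g. a spread copy) is what triggers the update, which matches the behavior described above.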
### Reproduction
https://svelte.dev/playground/a1144052a70346f9bced094f4d795823?version=5.2.7
### Logs
_No response_
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 24.04.1 LTS 24.04.1 LTS (Noble Numbat)
CPU: (12) arm64 unknown
Memory: 17.04 GB / 31.17 GB
Container: Yes
Shell: 5.2.21 - /bin/bash
Binaries:
Node: 18.19.1 - /usr/bin/node
npm: 9.2.0 - /usr/bin/npm
pnpm: 9.12.3 - ~/.local/share/pnpm/pnpm
Browsers:
Chromium: 130.0.6723.116
npmPackages:
svelte: ^5.2.7 => 5.2.7
```
### Severity
annoyance | documentation | low | Critical |
2,678,065,070 | flutter | iOS - The app crashes when entering data into an input field in the WebView | ### Steps to reproduce
I have an issue on iOS when using webview_flutter. After users tap on an input field in the WebView and start typing, the app crashes. TestFlight provides a crash file, which I have included below. This issue occurs randomly on some devices.
This issue occurs when I use Flutter 3.22.3 and webview_flutter: 4.8.0.
### Expected results
No crash happen
### Actual results
Crash report from TestFlight
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:convert';
import 'dart:io';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';
import 'package:webview_flutter_android/webview_flutter_android.dart' as webview_flutter_android;
void main() {
WidgetsFlutterBinding.ensureInitialized();
WebViewController controllerWebView = WebViewController()
..setJavaScriptMode(JavaScriptMode.unrestricted)
..setBackgroundColor(const Color(0x00000000))
..setNavigationDelegate(
NavigationDelegate(
onProgress: (int progress) {
// Update loading bar.
},
onPageStarted: (String url) {},
onPageFinished: (String url) {},
onWebResourceError: (WebResourceError error) {},
),
)
..addJavaScriptChannel(
'messageChannelHandler',
onMessageReceived: (message) {
if (kDebugMode) {
print(
'post message messageChannelHandler : ${message.message} ');
}
var result = jsonDecode(message.message);
if (result['role'] == 'signer' &&
result['status'] == 'success' &&
result['code'] == '0') {
} else {}
},
)
..loadRequest(Uri.parse('https://demo.econtract.kyta.fpt.com/signing/signature-process/00001049iMaDHdkBy5A9NuMdd/p_002_r_001?jwt=eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJwXzAwMl9yXzAwMSIsImlhdCI6MTczMjA4NTgwNywiZW52ZWxvcGVJZCI6IjAwMDAxMDQ5aU1hREhka0J5NUE5TnVNZGQiLCJwYXJ0eUlkIjoicF8wMDIiLCJwYXJ0eUluZGV4IjoxLCJyZWNpcGllbnRJZCI6InBfMDAyX3JfMDAxIiwicmVjaXBpZW50SW5kZXgiOjAsInJlY2lwaWVudEVNYWlsIjoibnNuZGJkYmRiQGFhLm5uIiwicGVybSI6InZfZCIsInNlc3Npb25JZCI6InpQVzRRYTdTeVUzcFVnMG1XVmJtbk9oSXFGOCIsImV4cCI6MTczMjA4NzYwN30.65pFvApiycSjm-nwlU9pqr_PhPJMlk4l7sowKAP6SYotgOua9qVa_hSH_UhXVFgrh1nxcpdFEevyS5wzKI5gFA'));
if (Platform.isAndroid) {
final myAndroidController = controllerWebView.platform as webview_flutter_android.AndroidWebViewController;
myAndroidController.setTextZoom(100);
}
runApp(MaterialApp(
home: Padding(padding: const EdgeInsets.only(bottom: 20), child: (){
if(Platform.isAndroid){
return Scaffold(
resizeToAvoidBottomInset: true,
body: WebViewWidget(controller: controllerWebView),
);
}
return WebViewWidget(controller: controllerWebView);
}()),
));
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/ce10fa92-0b17-44f7-af9d-7814f0c9afdd
</details>
### Logs
<details open><summary>Logs</summary>
```console
Incident Identifier: 5B294248-5253-43FD-9D00-3F0E7BC8FC26
Distributor ID: com.apple.TestFlight
Hardware Model: iPhone14,7
Process: Runner [810]
Path: /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Runner
Identifier:
Version: 1.0.1 (7)
AppStoreTools: 16B39
AppVariant: 1:iPhone14,7:17.4
Beta: YES
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: [909]
Date/Time: 2024-11-08 17:17:58.5890 +0700
Launch Time: 2024-11-08 17:16:43.5629 +0700
OS Version: iPhone OS 17.6.1 (21G93)
Release Type: User
Baseband Version: 2.60.02
Report Version: 104
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x000083fda9027bfc -> 0x0000007da9027bfc (possible pointer authentication failure)
Exception Codes: 0x0000000000000001, 0x000083fda9027bfc
VM Region Info: 0x7da9027bfc is not in any region. Bytes after previous region: 58670087165
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
commpage (reserved) 1000000000-7000000000 [384.0G] ---/--- SM=NUL reserved VM address space (unallocated)
--->
UNUSED SPACE AT END
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [810]
Triggered by Thread: 0
Thread 0 Crashed:
0 libobjc.A.dylib 0x000000018356fcac objc_msgSend + 172 (:-1)
1 UIKit 0x0000000228a1c450 __65+[UIFocusRingManagerAccessibility moveRingToFocusItem:forClient:]_block_invoke + 68 (UIFocusRingManagerAccessibility.m:237)
2 UIAccessibility 0x00000001a6d0e690 -[NSObject(AXPrivCategory) accessibilityEnumerateAncestorsUsingBlock:] + 76 (NSObjectAccessibility.m:11352)
3 UIKit 0x0000000228a1c2d0 +[UIFocusRingManagerAccessibility moveRingToFocusItem:forClient:] + 324 (UIFocusRingManagerAccessibility.m:236)
4 UIKit 0x0000000228acc38c __88-[UITextInputUIResponderAccessibility _axDrawFocusRingAroundFirstResponderAndMoveFocus:]_block_invoke + 348 (UITextInputUIResponderAccessibility.m:974)
5 libdispatch.dylib 0x000000019359d13c _dispatch_call_block_and_release + 32 (init.c:1530)
6 libdispatch.dylib 0x000000019359edd4 _dispatch_client_callout + 20 (object.m:576)
7 libdispatch.dylib 0x00000001935ad5a4 _dispatch_main_queue_drain + 988 (queue.c:7898)
8 libdispatch.dylib 0x00000001935ad1b8 _dispatch_main_queue_callback_4CF + 44 (queue.c:8058)
9 CoreFoundation 0x000000018b6cb710 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16 (CFRunLoop.c:1780)
10 CoreFoundation 0x000000018b6c8914 __CFRunLoopRun + 1996 (CFRunLoop.c:3149)
11 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
12 GraphicsServices 0x00000001d01151a8 GSEventRunModal + 164 (GSEvent.c:2196)
13 UIKitCore 0x000000018dd01ae8 -[UIApplication _run] + 888 (UIApplication.m:3713)
14 UIKitCore 0x000000018ddb5d98 UIApplicationMain + 340 (UIApplication.m:5303)
15 Runner 0x00000001005a4c9c main + 64 (AppDelegate.swift:6)
16 dyld 0x00000001aee9f154 start + 2356 (dyldMain.cpp:1298)
Thread 1:
0 libsystem_pthread.dylib 0x00000001e81450c4 start_wqthread + 0 (:-1)
Thread 2:
0 libsystem_kernel.dylib 0x00000001d434d6c8 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001d4350ec8 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001d4350de0 mach_msg_overwrite + 436 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001d4350c20 mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000018b6c8f5c __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2624)
5 CoreFoundation 0x000000018b6c8600 __CFRunLoopRun + 1208 (CFRunLoop.c:3007)
6 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
7 Foundation 0x000000018a5e8b5c -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212 (NSRunLoop.m:373)
8 Foundation 0x000000018a5e89ac -[NSRunLoop(NSRunLoop) runUntilDate:] + 64 (NSRunLoop.m:420)
9 UIKitCore 0x000000018dd1581c -[UIEventFetcher threadMain] + 420 (UIEventFetcher.m:1207)
10 Foundation 0x000000018a5ff428 __NSThread__start__ + 732 (NSThread.m:991)
11 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
12 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 3:
0 libsystem_kernel.dylib 0x00000001d434d6c8 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001d4350ec8 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001d4350de0 mach_msg_overwrite + 436 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001d4350c20 mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000018b6c8f5c __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2624)
5 CoreFoundation 0x000000018b6c8600 __CFRunLoopRun + 1208 (CFRunLoop.c:3007)
6 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
7 Flutter 0x00000001036eeae0 0x103400000 + 3074784
8 Flutter 0x00000001036ee728 0x103400000 + 3073832
9 Flutter 0x00000001036ee438 0x103400000 + 3073080
10 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
11 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 4:
0 libsystem_kernel.dylib 0x00000001d434d6c8 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001d4350ec8 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001d4350de0 mach_msg_overwrite + 436 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001d4350c20 mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000018b6c8f5c __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2624)
5 CoreFoundation 0x000000018b6c8600 __CFRunLoopRun + 1208 (CFRunLoop.c:3007)
6 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
7 Flutter 0x00000001036eeae0 0x103400000 + 3074784
8 Flutter 0x00000001036ee728 0x103400000 + 3073832
9 Flutter 0x00000001036ee438 0x103400000 + 3073080
10 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
11 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 5:
0 libsystem_kernel.dylib 0x00000001d434d6c8 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001d4350ec8 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001d4350de0 mach_msg_overwrite + 436 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001d4350c20 mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000018b6c8f5c __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2624)
5 CoreFoundation 0x000000018b6c8600 __CFRunLoopRun + 1208 (CFRunLoop.c:3007)
6 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
7 Flutter 0x00000001036eeae0 0x103400000 + 3074784
8 Flutter 0x00000001036ee728 0x103400000 + 3073832
9 Flutter 0x00000001036ee438 0x103400000 + 3073080
10 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
11 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 6:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 Flutter 0x0000000103458df4 0x103400000 + 364020
3 Flutter 0x00000001036e84d0 0x103400000 + 3048656
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 7:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 Flutter 0x0000000103458df4 0x103400000 + 364020
3 Flutter 0x00000001036e84d0 0x103400000 + 3048656
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 8:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 Flutter 0x0000000103458df4 0x103400000 + 364020
3 Flutter 0x00000001036e84d0 0x103400000 + 3048656
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 9:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 Flutter 0x0000000103458df4 0x103400000 + 364020
3 Flutter 0x00000001036e84d0 0x103400000 + 3048656
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 10:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 Flutter 0x0000000103458df4 0x103400000 + 364020
3 Flutter 0x00000001036e84d0 0x103400000 + 3048656
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 11:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 Flutter 0x0000000103458df4 0x103400000 + 364020
3 Flutter 0x00000001036e84d0 0x103400000 + 3048656
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 12:
0 libsystem_kernel.dylib 0x00000001d43544c8 kevent + 8 (:-1)
1 Flutter 0x000000010397de98 0x103400000 + 5758616
2 Flutter 0x00000001039a92f0 0x103400000 + 5935856
3 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
4 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 13:
0 libsystem_kernel.dylib 0x00000001d434d6c8 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001d4350ec8 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001d4350de0 mach_msg_overwrite + 436 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001d4350c20 mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000018b6c8f5c __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2624)
5 CoreFoundation 0x000000018b6c8600 __CFRunLoopRun + 1208 (CFRunLoop.c:3007)
6 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
7 CFNetwork 0x000000018c8a8c7c +[__CFN_CoreSchedulingSetRunnable _run:] + 384 (CoreSchedulingSet.mm:1473)
8 Foundation 0x000000018a5ff428 __NSThread__start__ + 732 (NSThread.m:991)
9 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
10 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 14:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e8147710 _pthread_cond_wait + 1272 (pthread_cond.c:862)
2 Flutter 0x0000000103a7121c 0x103400000 + 6754844
3 Flutter 0x0000000103aaccac 0x103400000 + 6999212
4 Flutter 0x0000000103a70be8 0x103400000 + 6753256
5 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
6 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 15:
0 libsystem_kernel.dylib 0x00000001d434d644 semaphore_wait_trap + 8 (:-1)
1 caulk 0x00000001fd4e3724 caulk::semaphore::timed_wait(double) + 212 (semaphore.cpp:98)
2 caulk 0x00000001fd4e35e4 caulk::concurrent::details::worker_thread::run() + 36 (messenger.cpp:234)
3 caulk 0x00000001fd4e352c void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*>>>(void*) + 96 (thread.h:189)
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 16:
0 libsystem_kernel.dylib 0x00000001d434d644 semaphore_wait_trap + 8 (:-1)
1 caulk 0x00000001fd4e3724 caulk::semaphore::timed_wait(double) + 212 (semaphore.cpp:98)
2 caulk 0x00000001fd4e35e4 caulk::concurrent::details::worker_thread::run() + 36 (messenger.cpp:234)
3 caulk 0x00000001fd4e352c void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*>>>(void*) + 96 (thread.h:189)
4 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
5 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 17:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e81476e4 _pthread_cond_wait + 1228 (pthread_cond.c:862)
2 JavaScriptCore 0x00000001a2d941e0 scavenger_thread_main + 1316 (pas_scavenger.c:359)
3 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
4 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 18:
0 libsystem_kernel.dylib 0x00000001d434d6c8 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001d4350ec8 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001d4350de0 mach_msg_overwrite + 436 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001d4350c20 mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000018b6c8f5c __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2624)
5 CoreFoundation 0x000000018b6c8600 __CFRunLoopRun + 1208 (CFRunLoop.c:3007)
6 CoreFoundation 0x000000018b6c7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
7 CoreFoundation 0x000000018b735f04 CFRunLoopRun + 64 (CFRunLoop.c:3446)
8 CoreMotion 0x00000001983dce3c CLMotionCore::runMotionThread(void*) + 1292 (CLMotionCore.mm:376)
9 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
10 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 19:
0 libsystem_pthread.dylib 0x00000001e81450c4 start_wqthread + 0 (:-1)
Thread 20:
0 libsystem_pthread.dylib 0x00000001e81450c4 start_wqthread + 0 (:-1)
Thread 21:
0 libsystem_pthread.dylib 0x00000001e81450c4 start_wqthread + 0 (:-1)
Thread 22:
0 libsystem_kernel.dylib 0x00000001d435308c __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e8147710 _pthread_cond_wait + 1272 (pthread_cond.c:862)
2 Flutter 0x0000000103a7121c 0x103400000 + 6754844
3 Flutter 0x0000000103aaccac 0x103400000 + 6999212
4 Flutter 0x0000000103a70be8 0x103400000 + 6753256
5 libsystem_pthread.dylib 0x00000001e814a06c _pthread_start + 136 (pthread.c:931)
6 libsystem_pthread.dylib 0x00000001e81450d8 thread_start + 8 (:-1)
Thread 0 crashed with ARM Thread State (64-bit):
x0: 0x0000000133fd2470 x1: 0x000000018686848c x2: 0x000000016f86265f x3: 0xfffffffecc02db90
x4: 0x0000000000000005 x5: 0x00000000000012d0 x6: 0x000000000000003e x7: 0x0000000000000000
x8: 0x0000000000000103 x9: 0x0000000000000000 x10: 0x000083fda9027bfc x11: 0x000000000000003f
x12: 0x00000000019f9801 x13: 0x0000000000000002 x14: 0x000000018d25690c x15: 0x000000018d256908
x16: 0x000000018d256908 x17: 0x0000000000000009 x18: 0x0000000000000000 x19: 0x000000016f8626e8
x20: 0x0000000133fd2470 x21: 0x0000000133fd2470 x22: 0x000000016f8626f8 x23: 0x0000000000000114
x24: 0x0000000000000000 x25: 0x00000001ec00ffa0 x26: 0x0000000303f7cb80 x27: 0x000000000000000f
x28: 0x0000000000000000 fp: 0x000000016f862640 lr: 0x0000000228a1c450
sp: 0x000000016f862600 pc: 0x000000018356fcac cpsr: 0x20001000
esr: 0x92000004 (Data Abort) byte read Translation fault
Binary Images:
0x10059c000 - 0x1005b7fff Runner arm64 <6ea6d5ab33183e51a9384eb2b2ca123f> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Runner
0x100670000 - 0x1006cffff DKImagePickerController arm64 <1eb78868922538398886606e09412bad> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/DKImagePickerController.framework/DKImagePickerController
0x10078c000 - 0x1007d7fff DKPhotoGallery arm64 <9975adb589eb3e40a4b33b237b22fc82> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/DKPhotoGallery.framework/DKPhotoGallery
0x100864000 - 0x100873fff FBLPromises arm64 <cfeef8651787323c896071a083b37d84> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/FBLPromises.framework/FBLPromises
0x100894000 - 0x1008a7fff FirebaseCore arm64 <cf60eedae9753cbb9f4bcbb3e30398f9> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/FirebaseCore.framework/FirebaseCore
0x1008c8000 - 0x1008cffff SwiftTryCatch arm64 <2e6072b1a93f37c69db420d11154f720> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/SwiftTryCatch.framework/SwiftTryCatch
0x1008e0000 - 0x1008ebfff Toast arm64 <752a4683c1fb3e628a2fcb41280ba92e> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/Toast.framework/Toast
0x1009e4000 - 0x1009effff libobjc-trampolines.dylib arm64e <be553713db163c12aaa48fd6211e48ce> /private/preboot/Cryptexes/OS/usr/lib/libobjc-trampolines.dylib
0x100d98000 - 0x100db3fff FirebaseCoreInternal arm64 <c0cd4af8e15533eaaa92954ae83bde46> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/FirebaseCoreInternal.framework/FirebaseCoreInternal
0x100df0000 - 0x100e07fff FirebaseInstallations arm64 <c9d578a7ed68379c9aef59d3c9830fe1> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/FirebaseInstallations.framework/FirebaseInstallations
0x100e38000 - 0x100e6bfff FirebaseMessaging arm64 <6e759b3810643bde8770ed52f53dfed8> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/FirebaseMessaging.framework/FirebaseMessaging
0x100ebc000 - 0x100ee3fff GoogleDataTransport arm64 <4f936f76a36b3aaf9ae781d7560d1fdd> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/GoogleDataTransport.framework/GoogleDataTransport
0x100f20000 - 0x100f3ffff GoogleUtilities arm64 <33e90c90b21e30679691a18a235ec63e> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/GoogleUtilities.framework/GoogleUtilities
0x100f6c000 - 0x101037fff Liveness arm64 <2ef7f11dbfe1335480571169a616fc59> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/Liveness.framework/Liveness
0x101098000 - 0x10232ffff OCR arm64 <678379fe70e339368e66817625d91aa5> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/OCR.framework/OCR
0x102e60000 - 0x102ebbfff PostHog arm64 <a5257ffe4a463178a7ba7473e07f9f7a> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/PostHog.framework/PostHog
0x102f4c000 - 0x102f9ffff SDWebImage arm64 <1ffd8d1e7a96325ea49866aa9f448da2> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/SDWebImage.framework/SDWebImage
0x103028000 - 0x10303ffff SwiftyGif arm64 <fee4c1c7e88d3a8f96a88c3928d6d8e9> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/SwiftyGif.framework/SwiftyGif
0x103068000 - 0x103073fff app_links arm64 <af93c87c7e6f33479ef5477d975e2417> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/app_links.framework/app_links
0x103090000 - 0x103097fff device_info_plus arm64 <263d2cc40b153e009d5c0ade4c28fc34> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/device_info_plus.framework/device_info_plus
0x1030a8000 - 0x1030b7fff ekyc_plugin_flutter arm64 <3d2ac1bd811c3cde8e68256121ad0f9b> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/ekyc_plugin_flutter.framework/ekyc_plugin_flutter
0x1030d4000 - 0x1030e3fff file_picker arm64 <28ee77e38e7d37ff82c728f9caf9eb3c> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/file_picker.framework/file_picker
0x1030fc000 - 0x10310ffff flutter_downloader arm64 <8c126db38d8234b08c9ce7cf8e9804ee> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/flutter_downloader.framework/flutter_downloader
0x10312c000 - 0x103137fff flutter_dynamic_icon_plus arm64 <87607b1d141c37c4985d568a580fd5ec> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/flutter_dynamic_icon_plus.framework/flutter_dynamic_icon_plus
0x103150000 - 0x10315bfff flutter_localization arm64 <63132b420d2d3298831e9c8b42afb5c0> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/flutter_localization.framework/flutter_localization
0x103170000 - 0x103177fff flutter_native_splash arm64 <3cb0f1d482e83de081f834aa25fb70c0> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/flutter_native_splash.framework/flutter_native_splash
0x103188000 - 0x10318ffff fluttertoast arm64 <b16458cd0df338e1861d997779eb6935> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/fluttertoast.framework/fluttertoast
0x1031a4000 - 0x1031b3fff image_gallery_saver arm64 <1dc1e4c961653e6ba1b4cd3169379181> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/image_gallery_saver.framework/image_gallery_saver
0x1031d4000 - 0x1031e7fff image_picker_ios arm64 <c642cf1b990b399f96889f9a6d8c3308> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/image_picker_ios.framework/image_picker_ios
0x103208000 - 0x103213fff local_auth_darwin arm64 <bc97c0ca404734ecb7ddc9fd04db749b> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/local_auth_darwin.framework/local_auth_darwin
0x10322c000 - 0x103233fff nanopb arm64 <22f3b5aae2373c8e828cf92c10537296> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/nanopb.framework/nanopb
0x103244000 - 0x10324ffff path_provider_foundation arm64 <dca514271bea32c69473d8ab44b71eb7> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/path_provider_foundation.framework/path_provider_foundation
0x103268000 - 0x103277fff posthog_flutter arm64 <f9cb944b31353ce7bd233d8224878308> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/posthog_flutter.framework/posthog_flutter
0x103290000 - 0x10329bfff share_plus arm64 <692cb180d1c838fba5cba0a19248b4a9> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/share_plus.framework/share_plus
0x1032b0000 - 0x1032c3fff shared_preferences_foundation arm64 <ad7ab7e61eba3d52853959eac68347a3> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/shared_preferences_foundation.framework/shared_preferences_foundation
0x1032e4000 - 0x1032effff smart_auth arm64 <85a410b0b0003f839b96fa8854fc964f> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/smart_auth.framework/smart_auth
0x103304000 - 0x10331bfff sqflite arm64 <29ffac83ac713d9483fdf2f90708b89a> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/sqflite.framework/sqflite
0x103348000 - 0x103357fff url_launcher_ios arm64 <49ce236058293121abb7141c4b20fc56> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/url_launcher_ios.framework/url_launcher_ios
0x103378000 - 0x1033a3fff webview_flutter_wkwebview arm64 <fc5779944da33617aa0e1d0612983837> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/webview_flutter_wkwebview.framework/webview_flutter_wkwebview
0x103400000 - 0x103c7ffff Flutter arm64 <4c4c44dc55553144a1b8f18f0c0ee878> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/Flutter.framework/Flutter
0x106774000 - 0x107173fff App arm64 <72313c67b72839f9aa4bed9a2fafac04> /private/var/containers/Bundle/Application/6320945C-940C-4552-8F16-52F14138F96F/Runner.app/Frameworks/App.framework/App
0x18356c000 - 0x1835bccf3 libobjc.A.dylib arm64e <afdf5874bc3b388e864cdc9f4cdbf4f0> /usr/lib/libobjc.A.dylib
0x18a521000 - 0x18b096fff Foundation arm64e <d27a6ec5943c3b0e8d158840fd2914f0> /System/Library/Frameworks/Foundation.framework/Foundation
0x18b675000 - 0x18bba2fff CoreFoundation arm64e <76a3b1983c09323e83590d4978e156f5> /System/Library/Frameworks/CoreFoundation.framework/CoreFoundation
0x18c7ab000 - 0x18cb87fff CFNetwork arm64e <371394cd79f23216acb0a159c09c668d> /System/Library/Frameworks/CFNetwork.framework/CFNetwork
0x18d24e000 - 0x18d8f6fff CoreGraphics arm64e <f8b5c4f3565f328db62ee1df1c7b3bd8> /System/Library/Frameworks/CoreGraphics.framework/CoreGraphics
0x18d8f7000 - 0x18f418fff UIKitCore arm64e <9da0d27355063712b73de0149d74c13c> /System/Library/PrivateFrameworks/UIKitCore.framework/UIKitCore
0x19359b000 - 0x1935e1fff libdispatch.dylib arm64e <5f66cdb608a936158c6a4e3b47005495> /usr/lib/system/libdispatch.dylib
0x1935e2000 - 0x19365fff3 libsystem_c.dylib arm64e <7135c2c8ba5836368b46a9e6226ead45> /usr/lib/system/libsystem_c.dylib
0x1983cd000 - 0x19889bfff CoreMotion arm64e <5d6e7429116638b3807bdfad246f9132> /System/Library/Frameworks/CoreMotion.framework/CoreMotion
0x1a182d000 - 0x1a2f69f3f JavaScriptCore arm64e <2800076a7d5a38dcafa723fa080301b6> /System/Library/Frameworks/JavaScriptCore.framework/JavaScriptCore
0x1a6cd9000 - 0x1a6d78fff UIAccessibility arm64e <3e46e17f4d0133f4932de46672752811> /System/Library/PrivateFrameworks/UIAccessibility.framework/UIAccessibility
0x1aee62000 - 0x1aeeef937 dyld arm64e <52039c944da13638bd52020a0b5fa399> /usr/lib/dyld
0x1d0114000 - 0x1d011cfff GraphicsServices arm64e <3ebbd576e7d83f69bcb5b9810ddcc90e> /System/Library/PrivateFrameworks/GraphicsServices.framework/GraphicsServices
0x1d434c000 - 0x1d4385fef libsystem_kernel.dylib arm64e <21ee5290d1193c31b948431865a67738> /usr/lib/system/libsystem_kernel.dylib
0x1e8144000 - 0x1e8150ff3 libsystem_pthread.dylib arm64e <e4a9d6dbf93b3c88bdd185671ec22e2b> /usr/lib/system/libsystem_pthread.dylib
0x1fd4db000 - 0x1fd504fff caulk arm64e <e5c09db9103f38c798201fb237c06434> /System/Library/PrivateFrameworks/caulk.framework/caulk
0x1fffad000 - 0x200007fff CorePrediction arm64e <edce0070f20d3715952fd3f4e5466af4> /System/Library/PrivateFrameworks/CorePrediction.framework/CorePrediction
0x22899d000 - 0x228b6bfff UIKit arm64e <8b499aa3ee523523a24b3cd805d3397f> /System/Library/AccessibilityBundles/UIKit.axbundle/UIKit
EOF
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.22.3, on macOS 14.5 23F79 darwin-arm64, locale en-GB)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.95.3)
[✓] Connected device (6 available)
! Error: Browsing on the local area network for Sumi. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for Panda. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• No issues found!
```
</details>
| a: text input,c: crash,platform-ios,p: webview,package,a: release,team-ios,fyi-text-input | medium | Critical |
2,678,140,546 | ant-design | StyleProvider CSS logical-property downgrade: is there a way to also downgrade the inline styles (e.g. `inset`) that antd itself applies to the DatePicker popup? | ### What problem does this feature solve?
https://ant.design/docs/react/compatible-style-cn
### What does the proposed API look like?
import { legacyLogicalPropertiesTransformer, StyleProvider } from '@ant-design/cssinjs';
// `transformers` provides preprocessing that transforms the generated styles
export default () => (
  <StyleProvider transformers={[legacyLogicalPropertiesTransformer]}>
    <MyApp />
  </StyleProvider>
);
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | improvement | low | Major |
2,678,164,353 | vscode | Can't launch additional windows via `code` from cli. | The Preview On Github button from the built-in issue report window wasn't working so I manually copied the info over. Sorry if the formatting is not exactly as expected.
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version:
- Version: 1.95.3
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Linux x64 6.11.9-arch1-1
- OS Version: Linux x64 6.11.9-arch1-1
Steps to Reproduce:
1. Open project from terminal. (logs posted below)
``` zsh
> code --disable-extensions --verbose ./tmp > /data/vscode_logs/1.txt
```
VSCode opens as expected.
2. Attempt to open another project from terminal, while first window is still open.
```zsh
> code --disable-extensions --verbose ./tmp2 > /data/vscode_logs/2.txt
```
Nothing happens, no new window. This has been happening 100% of the time I attempt this. Launching a new window through the GUI works as expected.
Here are the log files. The first one is the logs written from the start to end time of the second instance.
1.txt
```
[90m[main 2024-11-21T05:55:22.672Z][0m Received data from other instance: {
_: [ '/data/tmp2' ],
diff: false,
merge: false,
add: false,
goto: false,
'new-window': true,
'reuse-window': false,
wait: false,
help: false,
'list-extensions': false,
'show-versions': false,
'pre-release': false,
'update-extensions': false,
version: false,
verbose: true,
status: false,
'prof-startup': false,
'no-cached-data': false,
'prof-v8-extensions': false,
'disable-extensions': true,
'disable-lcd-text': false,
'disable-gpu': true,
'disable-chromium-sandbox': false,
sandbox: false,
telemetry: false,
extensionDevelopmentPath: [ '$HOME/.vscode-oss/extensions/undefined_publisher.wallbash-0.0.1' ],
debugRenderer: false,
'enable-smoke-test-driver': false,
logExtensionHostCommunication: false,
'skip-release-notes': false,
'skip-welcome': false,
'disable-telemetry': false,
'disable-updates': false,
'use-inmemory-secretstorage': false,
'disable-workspace-trust': false,
'disable-crash-reporter': false,
'crash-reporter-id': 'ad189793-6a94-4d8f-b77d-c33db011f329',
'skip-add-to-recently-opened': false,
'open-url': false,
'file-write': false,
'file-chmod': false,
force: false,
'do-not-sync': false,
trace: false,
'preserve-env': false,
'force-user-env': false,
'force-disable-user-env': false,
'open-devtools': false,
'disable-gpu-sandbox': false,
'__enable-file-policy': false,
'enable-coi': false,
'no-proxy-server': false,
'no-sandbox': false,
nolazy: false,
'force-renderer-accessibility': false,
'ignore-certificate-errors': false,
'allow-insecure-localhost': false,
'disable-dev-shm-usage': false,
'profile-temp': false,
logsPath: '/home/null/.config/Code/logs/20241121T165522'
} {
SHELL: '/usr/bin/zsh',
LSCOLORS: 'Gxfxcxdxbxegedabagacad',
ZLE_RPROMPT_INDENT: '0',
COLORTERM: 'truecolor',
HYPRLAND_CMD: 'Hyprland',
LESS: '-R',
XDG_SESSION_PATH: '/org/freedesktop/DisplayManager/Session2',
NVM_INC: '/home/null/.nvm/versions/node/v22.4.1/include/node',
XDG_BACKEND: 'wayland',
QT_WAYLAND_DISABLE_WINDOWDECORATION: '1',
LIBVA_DRIVER_NAME: 'nvidia',
DESKTOP_SESSION: 'hyprland',
LC_MONETARY: 'en_AU.UTF-8',
HL_INITIAL_WORKSPACE_TOKEN: 'a9ee9444-c580-410f-8435-96509d0f53df',
KITTY_PID: '29933',
XDG_SEAT: 'seat0',
PWD: '/data',
LOGNAME: 'null',
XDG_SESSION_DESKTOP: 'Hyprland',
QT_QPA_PLATFORMTHEME: 'qt6ct',
XDG_SESSION_TYPE: 'wayland',
KITTY_PUBLIC_KEY: '1:Jz3f5y=bx^L*bH_u1s8B_MggZ9cykC07>|OK3pLv',
POSH_SHELL: 'zsh',
MOTD_SHOWN: 'pam',
HOME: '/home/null',
LC_PAPER: 'en_AU.UTF-8',
LANG: 'C.UTF-8',
__GL_VRR_ALLOWED: '1',
_JAVA_AWT_WM_NONREPARENTING: '1',
LS_COLORS: 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=00:tw=30;42:ow=47;40;01:st=37;44:ex=01;32:*.7z=01;31:*.ace=01;31:*.alz=01;31:*.apk=01;31:*.arc=01;31:*.arj=01;31:*.bz=01;31:*.bz2=01;31:*.cab=01;31:*.cpio=01;31:*.crate=01;31:*.deb=01;31:*.drpm=01;31:*.dwm=01;31:*.dz=01;31:*.ear=01;31:*.egg=01;31:*.esd=01;31:*.gz=01;31:*.jar=01;31:*.lha=01;31:*.lrz=01;31:*.lz=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.lzo=01;31:*.pyz=01;31:*.rar=01;31:*.rpm=01;31:*.rz=01;31:*.sar=01;31:*.swm=01;31:*.t7z=01;31:*.tar=01;31:*.taz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tgz=01;31:*.tlz=01;31:*.txz=01;31:*.tz=01;31:*.tzo=01;31:*.tzst=01;31:*.udeb=01;31:*.war=01;31:*.whl=01;31:*.wim=01;31:*.xz=01;31:*.z=01;31:*.zip=01;31:*.zoo=01;31:*.zst=01;31:*.avif=01;35:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:*~=00;90:*#=00;90:*.bak=00;90:*.crdownload=00;90:*.dpkg-dist=00;90:*.dpkg-new=00;90:*.dpkg-old=00;90:*.dpkg-tmp=00;90:*.old=00;90:*.orig=00;90:*.part=00;90:*.rej=00;90:*.rpmnew=00;90:*.rpmorig=00;90:*.rpmsave=00;90:*.swp=00;90:*.tmp=00;90:*.ucf-dist=00;90:*.ucf-new=00;90:*.ucf-old=00;90:',
XDG_CURRENT_DESKTOP: 'Hyprland',
POSH_SHELL_VERSION: '5.9',
POSH_SESSION_ID: 'a3aeff54-9c5f-4443-abca-4d875d85004b',
WAYLAND_DISPLAY: 'wayland-1',
KITTY_WINDOW_ID: '1',
OSTYPE: 'linux-gnu',
CONDA_PROMPT_MODIFIER: 'false',
XDG_SEAT_PATH: '/org/freedesktop/DisplayManager/Seat0',
QT_QPA_PLATFORM: 'wayland;xcb',
NVM_DIR: '/home/null/.nvm',
XDG_SESSION_CLASS: 'user',
TERM: 'xterm-kitty',
TERMINFO: '/usr/lib/kitty/terminfo',
ZSH: '/home/null/.oh-my-zsh',
USER: 'null',
SUDO_EDITOR: 'nvim',
HYPRLAND_INSTANCE_SIGNATURE: '12f9a0d0b93f691d4d9923716557154d74777b0a_1732164158_630937349',
VISUAL: 'nvim',
DISPLAY: ':1',
SHLVL: '2',
NVM_CD_FLAGS: '-q',
MOZ_ENABLE_WAYLAND: '1',
PAGER: 'less',
LC_MEASUREMENT: 'en_AU.UTF-8',
XDG_VTNR: '1',
XDG_SESSION_ID: '2',
WLR_DRM_NO_ATOMIC: '1',
POSH_THEME: '/home/null/.config/oh-my-posh/catppuccin_mocha.omp.json',
XDG_RUNTIME_DIR: '/run/user/1000',
HAXE_STD_PATH: '/usr/share/haxe/std',
DEBUGINFOD_URLS: 'https://debuginfod.archlinux.org ',
LC_TIME: 'en_AU.UTF-8',
QT_AUTO_SCREEN_SCALE_FACTOR: '1',
PATH: '/home/null/.nvm/versions/node/v22.4.1/bin:/home/null/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/null/.local/share/bin:/usr/local/go/bin:/home/null/go/bin:/home/null/.local/bin',
__GLX_VENDOR_LIBRARY_NAME: 'nvidia',
GDK_SCALE: '1',
DBUS_SESSION_BUS_ADDRESS: 'unix:path=/run/user/1000/bus',
FZF_DEFAULT_OPTS: ' --color=bg+:#313244,bg:#1e1e2e,spinner:#f5e0dc,hl:#f38ba8 --color=fg:#cdd6f4,header:#f38ba8,info:#cba6f7,pointer:#f5e0dc --color=marker:#b4befe,fg+:#cdd6f4,prompt:#cba6f7,hl+:#f38ba8 --color=selected-bg:#45475a --multi',
MAIL: '/var/spool/mail/null',
NVM_BIN: '/home/null/.nvm/versions/node/v22.4.1/bin',
POWERLINE_COMMAND: 'oh-my-posh',
KITTY_INSTALLATION_DIR: '/usr/lib/kitty',
LC_NUMERIC: 'en_AU.UTF-8',
OLDPWD: '/home/null',
GOPATH: '/home/null/go',
_: '/opt/visual-studio-code/bin/../code',
VSCODE_CWD: '/data',
VSCODE_NLS_CONFIG: '{"userLocale":"en-us","osLocale":"c","resolvedLanguage":"en","defaultMessagesFile":"/opt/visual-studio-code/resources/app/out/nls.messages.json","locale":"en-us","availableLanguages":{}}',
VSCODE_CLI: '1',
ELECTRON_NO_ATTACH_CONSOLE: '1',
ELECTRON_ENABLE_LOGGING: '1',
CHROME_DESKTOP: 'code.desktop',
ORIGINAL_XDG_CURRENT_DESKTOP: 'Hyprland',
GDK_BACKEND: 'wayland',
NO_AT_BRIDGE: '1',
VSCODE_CODE_CACHE_PATH: '/home/null/.config/Code/CachedData/f1a4fb101478ce6ec82fe9627c43efbf9e98c813',
VSCODE_IPC_HOOK: '/run/user/1000/vscode-94dac0b4-1.95-main.sock'
}
[90m[main 2024-11-21T05:55:22.673Z][0m Lifecycle#unload() - window ID 1
[30879:1121/165522.674036:INFO:CONSOLE(35)] "%cTRACE color: #888 [lifecycle] onBeforeUnload (reason: 3)", source: vscode-file://vscode-app/opt/visual-studio-code/resources/app/out/vs/workbench/workbench.desktop.main.js (35)
[30879:1121/165522.675260:INFO:CONSOLE(35)] "%cTRACE color: #888 [lifecycle] onBeforeUnload continues without veto", source: vscode-file://vscode-app/opt/visual-studio-code/resources/app/out/vs/workbench/workbench.desktop.main.js (35)
[30879:1121/165522.675664:INFO:CONSOLE(35)] "%cTRACE color: #888 [lifecycle] onWillUnload (reason: 3)", source: vscode-file://vscode-app/opt/visual-studio-code/resources/app/out/vs/workbench/workbench.desktop.main.js (35)
```
2.txt
```
[1121/165522.542980:ERROR:file_io_posix.cc(153)] open /home/null/.config/Code/Crashpad/pending/91c90fbb-2b46-4217-b67e-f374882324e6.lock: File exists (17)
[31146:1121/165522.544382:WARNING:wayland_object.cc(165)] Binding to wl_seat version 8 but version 9 is available.
[31146:1121/165522.544439:WARNING:wayland_object.cc(165)] Binding to zwp_pointer_gestures_v1 version 1 but version 3 is available.
[31146:1121/165522.544485:WARNING:wayland_object.cc(165)] Binding to zwp_linux_dmabuf_v1 version 3 but version 5 is available.
[90m[main 2024-11-21T05:55:22.662Z][0m [File Watcher (node.js)] Request to start watching: /home/null/.config/Code/User (excludes: <none>, includes: <all>, filter: <none>, correlationId: <none>),/home/null/.config/Code/User/settings.json (excludes: <none>, includes: <all>, filter: <none>, correlationId: <none>)
[90m[main 2024-11-21T05:55:22.668Z][0m Sending env to running instance...
[90m[main 2024-11-21T05:55:22.669Z][0m [File Watcher (node.js)] Started watching: '/home/null/.config/Code/User'
[90m[main 2024-11-21T05:55:22.669Z][0m [File Watcher (node.js)] Started watching: '/home/null/.config/Code/User/settings.json'
[90m[main 2024-11-21T05:55:22.674Z][0m Sent env to running instance. Terminating...
[90m[main 2024-11-21T05:55:22.674Z][0m Lifecycle#kill()
[90m[main 2024-11-21T05:55:22.674Z][0m Lifecycle#onWillShutdown.fire()
``` | bug,workbench-os-integration,issue-reporter | low | Critical |
2,678,213,359 | pytorch | Addmm is not always better than add + mm | ### 🐛 Describe the bug
I met a performance problem when training GPT-3-like models, which contain matrix multiplication like [b, 12288] @ [12288, 36864] (QKV) and [b, 12288] @ [12288, 49152] (FFN). And I found out that the MFU of matrix multiplication here is pretty low. So I went deeper and found out a strange phenomenon: the performance of addmm is worse than mm + add in some cases.
For [32768, 12288] @ [12288, 36864], when initializing the values with zeros, we got 155ms for addmm_op, 104.44ms for separate_ops. The MFU is 0.61 and 0.91, respectively.
For [32768, 12288] @ [12288, 49152], when initializing the values with zeros, we got 206ms for addmm_op, 138.84 for separate_ops. The MFU is 0.61 and 0.91, respectively.
Below is my code for this experiment. I think fused operators ought to be better than non-fused ones. What could be wrong?
```python
import torch
from torch.utils.benchmark import Timer
def profile_addmm(data_size, h1, h2):
    # @torch.compile
    def addmm_op(input, mat1, mat2):
        return torch.addmm(input, mat1, mat2)

    # @torch.compile
    def separate_ops(input, mat1, mat2):
        return input + torch.mm(mat1, mat2)

    print(f'data_size: {data_size}, h1: {h1}, h2: {h2}')
    FLOPS = 2 * data_size * h1 * h2
    bias = torch.randn(h2, device='cuda', dtype=torch.half)
    mat1 = torch.randn(data_size, h1, device='cuda', dtype=torch.half)
    mat2 = torch.randn(h1, h2, device='cuda', dtype=torch.half)
    bias.zero_()
    mat1.zero_()
    mat2.zero_()

    t1 = Timer(stmt='addmm_op(bias, mat1, mat2)', globals=locals())
    t2 = Timer(stmt='separate_ops(bias, mat1, mat2)', globals=locals())
    addmm_time = t1.timeit(10).mean
    mm_add_time = t2.timeit(10).mean
    print(addmm_time)
    print(f"addmm MFU: {FLOPS / addmm_time / 1e12 / 312}")
    print(f"mm_add MFU: {FLOPS / mm_add_time / 1e12 / 312}")


if __name__ == '__main__':
    profile_addmm(32768, 12288, 36864)
    # randn: 156ms for addmm_op, 122.55ms for separate_ops
    # zeros: 154ms for addmm_op, 104ms for separate_ops
    profile_addmm(32768, 12288, 49152)
    # randn: 208.6ms for addmm_op, 167.85ms for separate_ops
    # zeros: 206ms for addmm_op, 138.84ms for separate_ops
```
I also checked the kernels they call with the torch profiler. For [32768, 12288] @ [12288, 49152], addmm calls `sm80_xmma_gemm_f16f16_f16f32_f32_nn_n_tilesize160x128x32_stage4_warpsize2x2x1_tensor16x8x16_kernel`, while separate_ops calls `ampere_fp16_s16816gemm_fp16_128x256_ldg8_f2f_stages_64x3_nn` followed by `void at::native::elementwise_kernel<128, 4, at::native::gpu_kernel_impl_nocast<at::native::CUDAFunctor_add<c10::Half> >(at::TensorIteratorBase&, at::native::CUDAFunctor_add<c10::Half> const&)::{lambda(int)#1}>(int, at::native::gpu_kernel_impl_nocast<at::native::CUDAFunctor_add<c10::Half> >(at::TensorIteratorBase&, at::native::CUDAFunctor_add<c10::Half> const&)::{lambda(int)#1})`.
If I compile the separate_ops function first, it is fused into an addmm operator, which again results in the performance degradation.
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.17
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA Graphics Device
GPU 1: NVIDIA Graphics Device
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.2
/usr/lib64/libcudnn_adv_infer.so.8.9.2
/usr/lib64/libcudnn_adv_train.so.8.9.2
/usr/lib64/libcudnn_cnn_infer.so.8.9.2
/usr/lib64/libcudnn_cnn_train.so.8.9.2
/usr/lib64/libcudnn_ops_infer.so.8.9.2
/usr/lib64/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-35
Off-line CPU(s) list: 36-255
Thread(s) per core: 0
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7713 64-Core Processor
Stepping: 1
CPU MHz: 3075.921
CPU max MHz: 2000.0000
CPU min MHz: 1500.0000
BogoMIPS: 3992.55
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] denoising-diffusion-pytorch==2.0.17
[pip3] ema-pytorch==0.6.2
[pip3] flake8==7.0.0
[pip3] numpy==1.24.1
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-fid==0.3.0
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] torchviz==0.0.2
[pip3] triton==3.0.0
[conda] denoising-diffusion-pytorch 2.0.17 pypi_0 pypi
[conda] ema-pytorch 0.6.2 pypi_0 pypi
[conda] numpy 1.24.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-fid 0.3.0 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchaudio 2.4.1 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @msaroufim | module: performance,triaged,matrix multiplication | low | Critical |
2,678,268,360 | langchain | DOC: What is the maximum chunk size returned from SemanticChunker.split_documents() | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I need to know the maximum chunk size that can be returned from SemanticChunker.split_documents() for large documents.
Can it be more than 8k tokens? I need to send each chunk for embedding, and a chunk with more than 8k tokens will fail with the Azure embedding model.
Need help!
### Idea or request for content:
Documentation should clearly explain this. | 🤖:docs | low | Minor |
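For the SemanticChunker question above: as far as I can tell the breakpoints are similarity-based, so there is no hard cap and a chunk can exceed 8k tokens. One workaround is to post-split any oversized chunk before embedding. A minimal sketch — the function name and the whitespace word count are my own stand-ins; swap `count_tokens` for the embedding model's real tokenizer (e.g. tiktoken) to enforce the true limit:

```python
def cap_chunks(chunks, max_tokens, count_tokens=lambda t: len(t.split())):
    """Re-split any chunk whose token count exceeds max_tokens.

    Splits on word boundaries only; pass a real tokenizer-based
    count_tokens when enforcing a model's actual context limit.
    """
    capped = []
    for chunk in chunks:
        if count_tokens(chunk) <= max_tokens:
            capped.append(chunk)
            continue
        piece = []
        for word in chunk.split():
            piece.append(word)
            if count_tokens(" ".join(piece)) >= max_tokens:
                capped.append(" ".join(piece))  # flush a full piece
                piece = []
        if piece:  # trailing remainder
            capped.append(" ".join(piece))
    return capped
```

Running this over SemanticChunker's output (e.g. `cap_chunks(texts, 8000)` with a tiktoken-based counter) keeps every chunk under the Azure embedding limit while preserving the semantic boundaries of the chunks that were already small enough.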
2,678,313,407 | transformers | When set num_beams in GenerationConfig, stop_strings parameter has no effect | ### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4070 SUPER
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## Code
```python
from transformers import GenerationConfig, AutoTokenizer, AutoModelForCausalLM
generate_config = GenerationConfig(
    num_beams=3,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=3,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.0,
    length_penalty=1.0,
    stop_strings=":",
    return_dict_in_generate=True,
    max_new_tokens=500,
    output_logits=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH).cuda()
PROMPT = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
tokens = tokenizer(PROMPT, return_tensors="pt").to(model.device)
out = model.generate(**tokens, generation_config=generate_config, tokenizer=tokenizer)
print(tokenizer.decode(out.sequences[0], skip_special_tokens=True))
```
## Out
```
Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? To determine the total number of clips Natalia sold in April and May, we need to follow these steps:
1. Identify the number of clips sold in April.
2. Calculate the number of clips sold in May.
3. Add the number of clips sold in April and May together.
First, we know that Natalia sold 48 clips in April. Next, we need to find out how many clips she sold in May. According to the problem, she sold half as many clips in May as she did in April. Therefore, we calculate the number of clips sold in May as follows:
\[
\text{Number of clips sold in May} = \frac{48}{2} = 24
\]
Now, we add the number of clips sold in April and May together to find the total number of clips sold:
\[
\text{Total number of clips sold} = 48 + 24 = 72
\]
Thus, the total number of clips Natalia sold in April and May is \boxed{72}.
```
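As a stopgap until beam search honors `stop_strings`, the decoded text can be truncated after the fact. A minimal plain-Python sketch; `truncate_at_stop_string` is a hypothetical helper, not a transformers API:

```python
def truncate_at_stop_string(text, stop_strings):
    """Cut `text` at the earliest occurrence of any stop string (stop string kept)."""
    if isinstance(stop_strings, str):
        stop_strings = [stop_strings]
    cut = len(text)
    for stop in stop_strings:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx + len(stop))  # earliest match wins
    return text[:cut]

# e.g. applied to decoded beam output:
print(truncate_at_stop_string("follow these steps:\n1. Identify", ":"))  # -> follow these steps:
```

This only trims the returned string; it does not save the compute spent generating past the stop string the way a working stopping criterion would.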
### Expected behavior
When I set num_beams=1, stop_strings works as expected. | bug | low | Major |
2,678,328,201 | pytorch | Implement ADOPT Optimizer into torch.optim module | ### 🚀 The feature, motivation and pitch
I came across a newly published optimization method described in the paper [ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate](https://arxiv.org/pdf/2411.02853), which demonstrates impressive performance improvements. Does the PyTorch team have any plans to integrate this method into the torch.optim module?
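For context, my reading of the ADOPT update rule from the paper, as a minimal scalar sketch (simplified on my part: no bias correction and no clipping variant; this is not a proposed torch.optim API):

```python
import math

def adopt_step(param, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.9999, eps=1e-6):
    """One scalar ADOPT step: unlike Adam, the gradient is normalized by the
    *previous* second-moment estimate before momentum is applied."""
    if step == 1:
        return param, m, grad * grad  # first step only initializes v
    m = beta1 * m + (1 - beta1) * grad / max(math.sqrt(v), eps)
    param = param - lr * m
    v = beta2 * v + (1 - beta2) * grad * grad
    return param, m, v

# minimizing f(x) = x**2 from x = 1.0
p, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    p, m, v = adopt_step(p, 2 * p, m, v, t, lr=0.01)
```

The reordering (normalize, then momentum, then second-moment update) is what the paper credits for convergence with any beta2.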
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | module: optimizer,triaged,enhancement | low | Major |
2,678,342,063 | pytorch | [dynamo] Better support for dict subclasses | ### 🐛 Describe the bug
We should go through all methods on dict, and check that they are supported with dict subclasses - https://docs.python.org/3/library/stdtypes.html#dict
For starters, let's make the following test pass.
I am not sure if we can support mutations on dict subclasses.
```
diff --git a/test/dynamo/test_functions.py b/test/dynamo/test_functions.py
index 54342ac50e8..2840b2b124c 100644
--- a/test/dynamo/test_functions.py
+++ b/test/dynamo/test_functions.py
@@ -1290,6 +1290,48 @@ class FunctionTests(torch._dynamo.test_case.TestCase):
res = opt_fn(x)
self.assertEqual(ref, res)
+ def test_dict_subclass(self):
+ class MyDict(dict):
+ pass
+
+ a = {
+ "a": 3,
+ "b": 5,
+ "c": 7,
+ }
+ d = MyDict(a)
+
+ def fn(x, d):
+ # all read-only functions on dict
+ x = x * len(list(d))
+ x = x * len(d)
+ x = x * d["b"]
+ if "c" in d:
+ x = x * 3
+ if "c" not in d:
+ x = x * 5
+ x = x * len(d.copy())
+ x = x * d.get("a")
+ x = x * d.get("d", 11)
+ count = 0
+ for _ in d.keys():
+ count += 1
+ for _ in d:
+ count += 1
+ x = x * count
+ for v in d.values():
+ x = x * v
+ for _, v in d.items(): # noqa: PERF102
+ x = x * v
+ return x
+
+ x = torch.randn(4)
+
+ ref = fn(x, d)
+ opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
+ res = opt_fn(x, d)
+ self.assertEqual(ref, res)
+
def test_unpack_mutable_map(self):
from collections.abc import MutableMapping
```
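As a plain-Python sanity check (independent of dynamo), the dict methods in question all resolve to the builtin implementations on a trivial subclass, so tracing them as plain dict calls should be sound whenever the subclass does not override them:

```python
class MyDict(dict):
    pass

READ_ONLY = {"get", "keys", "values", "items", "copy"}
MUTATING = {"clear", "pop", "popitem", "setdefault", "update"}

# each method is inherited unchanged from dict (same descriptor object)
for name in READ_ONLY | MUTATING:
    assert getattr(MyDict, name) is getattr(dict, name), name

d = MyDict(a=3, b=5, c=7)
assert d.get("d", 11) == 11 and len(d.copy()) == 3 and "c" in d
```

A guard on `type(value).__dict__` not shadowing these names would cover the override case.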
### Error logs
NA
### Versions
NA
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | high priority,triaged,oncall: pt2,module: dynamo,dynamo-user-empathy-day,dynamo-triage-june2024 | low | Critical |
2,678,413,066 | flutter | [google_maps_flutter] When drawing polygons, there is lag across the application. | ### Steps to reproduce
1. I pass my polygons (a `Set<Polygon>`) to the GoogleMap widget.
2. When the polygon data is ready, I update the state.
3. While the polygons are being drawn, the whole app lags for a few seconds.
https://github.com/user-attachments/assets/d3f97724-bcc6-4d5b-839d-fcf182ca8044
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:ecommerce/constants/colors.dart';
import 'package:ecommerce/views/google_map/address/map_state.dart';
import 'package:ecommerce/widgets/page_appbar_widget.dart';
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';
import 'package:provider/provider.dart';
import 'package:url_launcher/url_launcher.dart';
class NewMapView extends StatefulWidget {
const NewMapView({super.key});
@override
State<NewMapView> createState() => _NewMapViewState();
}
class _NewMapViewState extends State<NewMapView> {
  GoogleMapController? _controller;
@override
void initState() {
context.read<MapState>().clearData();
WidgetsBinding.instance.addPostFrameCallback((_) async {
context.read<MapState>().setLoading(true);
await context.read<MapState>().getPolygonList();
await Future.delayed(const Duration(seconds: 2), () async {
// if (mounted) await context.read<MapState>().showServiceAreas(mapController);
});
});
super.initState();
}
@override
void dispose() {
super.dispose();
}
@override
Widget build(BuildContext context) {
return Stack(
children: [
Scaffold(
appBar: const PageAppBarWidget(text: 'Teslimat Adresi'),
body: GoogleMap(
polygons: context.watch<MapState>().polygons,
compassEnabled: false,
zoomControlsEnabled: false,
mapToolbarEnabled: false,
trafficEnabled: false,
buildingsEnabled: false,
myLocationButtonEnabled: false,
mapType: MapType.normal,
onMapCreated: (GoogleMapController controller) {
_controller = controller;
},
initialCameraPosition: const CameraPosition(
target: LatLng(40.8010121, 29.3905112),
zoom: 6.8,
),
),
),
if (context.watch<MapState>().isLoading)
const Dialog(
backgroundColor: Colors.black45,
insetPadding: EdgeInsets.zero,
shape: Border(),
child: Center(
child: CircularProgressIndicator(
color: ThemeColors.primaryColor,
),
),
)
],
);
}
}
```
</details>
### What target platforms are you seeing this bug on?
Android, iOS
### OS/Browser name and version | Device information
iOS 18.1, iPhone 16 Pro Max
### Does the problem occur on emulator/simulator as well as on physical devices?
Yes
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
enescerrahoglu@192 DEV % flutter doctor -v
[✓] Flutter (Channel stable, 3.24.5, on macOS 15.1.1 24B91 darwin-arm64, locale tr-TR)
• Flutter version 3.24.5 on channel stable at /Users/enescerrahoglu/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (8 days ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/enescerrahoglu/development/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (5 available)
• Enes iPhone’u (mobile) • 00008120-00142D511E04201E • ios • iOS 18.1 22B83
• iPhone 16 Pro Max (mobile) • 1B055A70-5EB8-483E-B63C-1897B4DC3247 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.85
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: performance,p: maps,package,perf: speed,team-ecosystem,has reproducible steps,P2,triaged-ecosystem,found in release: 3.24,found in release: 3.27 | low | Critical |
2,678,439,115 | pytorch | Floating point exception (core dumped) in `scaled_dot_product_attention` | ### 🐛 Describe the bug
Under specific inputs, `scaled_dot_product_attention` triggered a crash.
```python
import torch
query = torch.full((0,1,7,2,8,9,0,0,3,), 0, dtype=torch.half)
key = torch.full((0,0,7,), 0, dtype=torch.half)
value = torch.full((0,0,7,0,9,9,2,8,), 0, dtype=torch.half)
attn_mask = None
dropout_p = 9.88131e-324
is_causal = False
scale = 9.0072e+15
enable_gqa = True
torch.nn.functional.scaled_dot_product_attention(query, key, value, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal, scale=scale, enable_gqa=enable_gqa)
```
Output:
```
Floating point exception (core dumped)
```
ASAN Report
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==3684861==ERROR: AddressSanitizer: FPE on unknown address 0x7f2e2648481a (pc 0x7f2e2648481a bp 0x7fffafe0b360 sp 0x7fffafe0b330 T0)
#0 0x7f2e2648481a in decltype (((forward<long&>)({parm#1}))%((forward<long&>)({parm#2}))) std::modulus<void>::operator()<long&, long&>(long&, long&) const (/mnt/pytorch-2.5.0/torch/lib/libc10.so+0x11081a)
#1 0x7f2e2646cec1 in c10::SymInt::operator%(c10::SymInt const&) const /mnt/pytorch-2.5.0/c10/core/SymInt.cpp:69
#2 0x7f2e3a1f93a1 in pre_process_group_query_attention_input /mnt/pytorch-2.5.0/aten/src/ATen/native/transformers/attention.cpp:645
#3 0x7f2e3a1fdc79 in at::native::_scaled_dot_product_attention_math(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) /mnt/pytorch-2.5.0/aten/src/ATen/native/transformers/attention.cpp:861
#4 0x7f2e3da7a8de in wrapper_CompositeImplicitAutograd___scaled_dot_product_attention_math /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:6911
#5 0x7f2e3ddc2f1f in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#6 0x7f2e3ddc2f1f in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#7 0x7f2e3c761bd2 in std::tuple<at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double&&, bool&&, std::optional<at::Tensor> const&, std::optional<double>&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#8 0x7f2e3c4f0edf in std::tuple<at::Tensor, at::Tensor> c10::KernelFunction::call<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#9 0x7f2e3c4f0edf in std::tuple<at::Tensor, at::Tensor> c10::Dispatcher::call<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool>(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool)> const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#10 0x7f2e3c4f0edf in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool)>::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#11 0x7f2e3c4f0edf in at::_ops::_scaled_dot_product_attention_math::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:11567
#12 0x7f2e3a2072cc in at::_scaled_dot_product_attention_math(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/ops/_scaled_dot_product_attention_math.h:27
#13 0x7f2e3a1fc06a in at::native::scaled_dot_product_attention(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool) /mnt/pytorch-2.5.0/aten/src/ATen/native/transformers/attention.cpp:776
#14 0x7f2e3da7a71a in wrapper_CompositeImplicitAutograd__scaled_dot_product_attention /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:6904
#15 0x7f2e3ddc2677 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#16 0x7f2e3ddc2677 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#17 0x7f2e3bbaf90f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double&&, bool&&, std::optional<double>&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#18 0x7f2e3b7070b9 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#19 0x7f2e3b7070b9 in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool)> const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#20 0x7f2e3b7070b9 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool)>::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#21 0x7f2e3b7070b9 in at::_ops::scaled_dot_product_attention::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_2.cpp:14255
#22 0x7f2e7f7bb8a3 in at::scaled_dot_product_attention(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<double>, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/ops/scaled_dot_product_attention.h:27
#23 0x7f2e7f78d4f1 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_nn_functions.cpp:2689
#24 0x7f2e7f78dc51 in THPVariable_scaled_dot_product_attention /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_nn_functions.cpp:2691
#25 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#26 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#27 0x55292a in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1502
#28 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#29 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#30 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#31 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#32 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#33 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#34 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#35 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#36 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#37 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#38 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#39 0x7f2e87e21d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#40 0x7f2e87e21e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#41 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE (/mnt/pytorch-2.5.0/torch/lib/libc10.so+0x11081a) in decltype (((forward<long&>)({parm#1}))%((forward<long&>)({parm#2}))) std::modulus<void>::operator()<long&, long&>(long&, long&) const
==3684861==ABORTING
```
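For what it's worth, my reading of the trace: frame #2 is in `pre_process_group_query_attention_input`, where the GQA path evaluates a `SymInt` modulus (query heads % key/value heads). With `enable_gqa=True` and empty tensors like the ones above, that divisor can be zero; in C++ an integer modulus by zero is a hardware FPE, whereas the analogous Python operation raises instead (values below are illustrative, not taken from the kernel):

```python
num_heads_q, num_heads_kv = 3, 0  # a zero head count, as in the empty tensors above

# the analogous check in Python raises rather than crashing the process
try:
    _ = num_heads_q % num_heads_kv
except ZeroDivisionError as e:
    print(f"guard needed before the modulus: {e}")
```

A zero-divisor check (or an early return for empty inputs) before the modulus would turn the crash into a catchable error.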
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: crash,module: error checking,triaged,module: edge cases,module: empty tensor,topic: fuzzer,module: sdpa | low | Critical |
2,678,451,564 | pytorch | Floating point exception (core dumped) in `_scaled_dot_product_flash_attention_for_cpu` | ### 🐛 Describe the bug
Under specific inputs, `_scaled_dot_product_flash_attention_for_cpu` triggered a crash.
Judging by the ASAN report, I believe the cause is different from that of #141218.
```python
import torch
query = torch.full((7,9,0,7,), 0, dtype=torch.half)
key = torch.full((7,7,0,7,), 0, dtype=torch.half)
value = torch.full((7,7,0,7,), 0, dtype=torch.half)
dropout_p = 0
is_causal = True
atten_mask = None
scale = None
torch.ops.aten._scaled_dot_product_flash_attention_for_cpu(query, key, value, dropout_p, is_causal, attn_mask=atten_mask, scale=scale)
```
Output:
```
Floating point exception (core dumped)
```
ASAN report:
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==3698429==ERROR: AddressSanitizer: FPE on unknown address 0x7f647795d23d (pc 0x7f647795d23d bp 0x7ffd0fd3db80 sp 0x7ffd0fd3ce70 T0)
#0 0x7f647795d23d in cpu_flash_attention<c10::Half, c10::Half, 32, 512> /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/FlashAttentionKernel.cpp:370
#1 0x7f64775627b5 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/FlashAttentionKernel.cpp:1170
#2 0x7f647756411b in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/FlashAttentionKernel.cpp:1170
#3 0x7f6477564879 in flash_attention_kernel_impl /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/FlashAttentionKernel.cpp:1170
#4 0x7f6460d6fad4 in void at::native::DispatchStub<void (*)(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor>, std::optional<double>), at::native::flash_attention_kernel_DECLARE_DISPATCH_type>::operator()<at::Tensor&, at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double&, bool&, std::optional<at::Tensor> const&, std::optional<double>&>(c10::DeviceType, at::Tensor&, at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double&, bool&, std::optional<at::Tensor> const&, std::optional<double>&) /mnt/pytorch-2.5.0/aten/src/ATen/native/DispatchStub.h:233
#5 0x7f6460d64419 in at::native::_scaled_dot_product_flash_attention_cpu(at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>) /mnt/pytorch-2.5.0/aten/src/ATen/native/transformers/attention.cpp:922
#6 0x7f646358d941 in wrapper_CPU___scaled_dot_product_flash_attention_for_cpu /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:28466
#7 0x7f646398826c in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#8 0x7f646398826c in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#9 0x7f6461e64eaa in std::tuple<at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, double&&, bool&&, std::optional<at::Tensor> const&, std::optional<double>&&) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x14badeaa)
#10 0x7f6461bba1a9 in std::tuple<at::Tensor, at::Tensor> c10::KernelFunction::call<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#11 0x7f6461bba1a9 in std::tuple<at::Tensor, at::Tensor> c10::Dispatcher::redispatch<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double> >(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>)> const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#12 0x7f6461a4cc83 in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#13 0x7f6461a4cc83 in at::_ops::_scaled_dot_product_flash_attention_for_cpu::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_1.cpp:12938
#14 0x7f646b066b91 in at::redispatch::_scaled_dot_product_flash_attention_for_cpu(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:18487
#15 0x7f646ad86933 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_1.cpp:3489
#16 0x7f646ad87afc in _scaled_dot_product_flash_attention_for_cpu /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_1.cpp:3491
#17 0x7f646afaaf0f in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#18 0x7f646afaaf0f in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#19 0x7f646b0222fa in call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor>(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, double, bool, const std::optional<at::Tensor>&, std::optional<double>), torch::autograd::VariableType::(anonymous namespace)::_scaled_dot_product_flash_attention_for_cpu>, std::tuple<at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, double, bool, const std::optional<at::Tensor>&, std::optional<double> > >, false, 0, 1, 2, 3, 4, 5, 6, const at::Tensor&, const at::Tensor&, const at::Tensor&, double, bool, const std::optional<at::Tensor>&, std::optional<double> > /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:506
#20 0x7f646affe6dd in call_functor_with_args_from_stack<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor>(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, double, bool, const std::optional<at::Tensor>&, std::optional<double>), torch::autograd::VariableType::(anonymous namespace)::_scaled_dot_product_flash_attention_for_cpu>, std::tuple<at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, double, bool, const std::optional<at::Tensor>&, std::optional<double> > >, false> /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:518
#21 0x7f646afab0e3 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:584
#22 0x7f64a65f1bec in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#23 0x7f64a65f46aa in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:46
#24 0x7f64a7d8e0f1 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:747
#25 0x7f645dfda912 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:461
#26 0x7f646e476610 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:465
#27 0x7f6470326a16 in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/runtime/register_c10_ops.cpp:13
#28 0x7f64703299b0 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:61
#29 0x7f647032978f in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:111
#30 0x7f6470329448 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#31 0x7f64a71f2020 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/include/c++/11/bits/std_function.h:590
#32 0x7f64a71e3416 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /mnt/pytorch-2.5.0/aten/src/ATen/core/stack.h:42
#33 0x7f64a71d5b45 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args const&, pybind11::kwargs const&, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:813
#34 0x7f64a71d74e0 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args const&, pybind11::kwargs const&, bool, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:892
#35 0x7f64a6cfc6ba in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/python/init.cpp:1771
#36 0x7f64a6e18e7e in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&, 0, 1, pybind11::detail::void_type> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1631
#37 0x7f64a6de6944 in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1600
#38 0x7f64a6d649b4 in operator() /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:296
#39 0x7f64a6d64bc1 in _FUN /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:267
#40 0x7f64a5887c89 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:987
#41 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#42 0x6153f4 in _PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:361
#43 0x54df40 in PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:373
#44 0x54df40 in PyCFunction_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:381
#45 0x54df40 in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1355
#46 0x606f86 in _PyObject_VectorcallDictTstate /usr/local/src/conda/python-3.13.0/Objects/call.c:146
#47 0x65dc9c in _PyObject_Call_Prepend /usr/local/src/conda/python-3.13.0/Objects/call.c:504
#48 0x65dc9c in slot_tp_call /usr/local/src/conda/python-3.13.0/Objects/typeobject.c:9537
#49 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#50 0x55292a in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1502
#51 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#52 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#53 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#54 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#55 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#56 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#57 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#58 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#59 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#60 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#61 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#62 0x7f64ae985d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#63 0x7f64ae985e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#64 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/FlashAttentionKernel.cpp:370 in cpu_flash_attention<c10::Half, c10::Half, 32, 512>
==3698429==ABORTING
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: crash,triaged,module: edge cases,module: empty tensor,topic: fuzzer,module: sdpa | low | Critical |
2,678,462,231 | flutter | Missing locales for lzh, nan, yue in 'gen_l10n_types' | ### Steps to reproduce
1. Go to `packages/flutter_tools/lib/src/localizations/gen_l10n_types.dart`
2. Find `final Set<String> _iso639Languages = <String>{`
3. `lzh`, `nan`, `yue` were not listed
4. Go to [IANA Language Subtag Registry](<https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry>)
5. `lzh`, `nan`, `yue` were listed as valid language subtags from ISO 639-3
6. Go to `lib/l10n/` of repository [Losses/rune](https://github.com/Losses/rune)
7. See incorrect BCP 47 language tags in file names `intl_zh_Hant_LZH.arb`, `intl_zh_Hant_NAN.arb`, `intl_zh_Hant_YUE.arb`
8. Open these files
9. See incorrect locale codes `"@@locale": "zh_Hant_LZH"`, `"@@locale": "zh_Hant_NAN"`, `"@@locale": "zh_Hant_YUE"`, being used
### Expected results
1. Go to `packages/flutter_tools/lib/src/localizations/gen_l10n_types.dart`
2. Find `final Set<String> _iso639Languages = <String>{`
3. `lzh`, `nan`, `yue` should be listed
### Actual results
1. Go to `packages/flutter_tools/lib/src/localizations/gen_l10n_types.dart`
2. Find `final Set<String> _iso639Languages = <String>{`
3. `lzh`, `nan`, `yue` were not listed
See:
* https://github.com/Losses/rune/issues/89
* https://github.com/flutter/flutter/issues/60631
### Code sample
<details open><summary>Code sample</summary>
Create localization files
* lib/l10n/intl_lzh.arb
* lib/l10n/intl_nan_Hant.arb
* lib/l10n/intl_yue_Hant.arb
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
<details open><summary>Logs</summary>
(not provided)
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
(not provided)
</details>
| tool,framework,a: internationalization,P2,team-tool,triaged-tool | low | Minor |
2,678,553,843 | rust | [LoongArch] SIMD intrinsics not fully inlined in caller with target feature globally enabled | I tried this code:
```rust
#![feature(stdarch_loongarch)]
use std::arch::loongarch64::*;
pub unsafe fn simd(s: i32) -> i32 {
lsx_vpickve2gr_b::<0>(lsx_vreplgr2vr_b(s))
}
```
```sh
rustc --crate-type lib -C opt-level=3 --emit llvm-ir -o lsx.ll lsx.rs
```
I expected to see this happen:
The `lsx` intrinsics are inlined into the `simd` function when the `lsx` target feature is globally enabled.
```llvm
; loong64::simd
; Function Attrs: nofree nosync nounwind memory(none) uwtable
define noundef i32 @_ZN7loong644simd17h54d99178ac0d0f82E(i32 noundef signext %s) unnamed_addr #0 {
start:
%_2 = tail call <16 x i8> @llvm.loongarch.lsx.vreplgr2vr.b(i32 noundef %s) #2
%_0 = tail call noundef i32 @llvm.loongarch.lsx.vpickve2gr.b(<16 x i8> %_2, i32 noundef 0) #2
ret i32 %_0
}
; Function Attrs: nofree nosync nounwind memory(none)
declare <16 x i8> @llvm.loongarch.lsx.vreplgr2vr.b(i32) unnamed_addr #1
; Function Attrs: nofree nosync nounwind memory(none)
declare i32 @llvm.loongarch.lsx.vpickve2gr.b(<16 x i8>, i32 immarg) unnamed_addr #1
attributes #0 = { nofree nosync nounwind memory(none) uwtable "target-cpu"="generic" "target-features"="+f,+d,+lsx,+lsx,+d,+f" }
```
Instead, this happened:
```llvm
; core::core_arch::loongarch64::lsx::generated::lsx_vpickve2gr_b
; Function Attrs: inlinehint nofree nosync nounwind memory(argmem: read) uwtable
define internal fastcc noundef i32 @_ZN4core9core_arch11loongarch643lsx9generated16lsx_vpickve2gr_b17hbf4a6d8f95630043E(ptr noalias nocapture noundef readonly align 16 dereferenceable(16) %a) unnamed_addr #0 {
start:
%0 = load <16 x i8>, ptr %a, align 16
%_0 = tail call noundef i32 @llvm.loongarch.lsx.vpickve2gr.b(<16 x i8> %0, i32 noundef 0) #4
ret i32 %_0
}
; core::core_arch::loongarch64::lsx::generated::lsx_vreplgr2vr_b
; Function Attrs: inlinehint nofree nosync nounwind memory(argmem: write) uwtable
define internal fastcc void @_ZN4core9core_arch11loongarch643lsx9generated16lsx_vreplgr2vr_b17h0060558a0a7e8678E(ptr dead_on_unwind noalias nocapture noundef writable writeonly align 16 dereferenceable(16) %_0, i32 noundef signext %a) unnamed_addr #1 {
start:
%0 = tail call <16 x i8> @llvm.loongarch.lsx.vreplgr2vr.b(i32 noundef %a) #4
store <16 x i8> %0, ptr %_0, align 16
ret void
}
; loong64::simd
; Function Attrs: nofree nosync nounwind memory(none) uwtable
define noundef i32 @_ZN7loong644simd17h54d99178ac0d0f82E(i32 noundef signext %s) unnamed_addr #2 {
start:
%0 = alloca [16 x i8], align 16
; call core::core_arch::loongarch64::lsx::generated::lsx_vreplgr2vr_b
call fastcc void @_ZN4core9core_arch11loongarch643lsx9generated16lsx_vreplgr2vr_b17h0060558a0a7e8678E(ptr noalias nocapture noundef nonnull align 16 dereferenceable(16) %0, i32 noundef signext %s)
; call core::core_arch::loongarch64::lsx::generated::lsx_vpickve2gr_b
%_0 = call fastcc noundef i32 @_ZN4core9core_arch11loongarch643lsx9generated16lsx_vpickve2gr_b17hbf4a6d8f95630043E(ptr noalias nocapture noundef nonnull align 16 dereferenceable(16) %0)
ret i32 %_0
}
; Function Attrs: nofree nosync nounwind memory(none)
declare i32 @llvm.loongarch.lsx.vpickve2gr.b(<16 x i8>, i32 immarg) unnamed_addr #3
; Function Attrs: nofree nosync nounwind memory(none)
declare <16 x i8> @llvm.loongarch.lsx.vreplgr2vr.b(i32) unnamed_addr #3
attributes #0 = { inlinehint nofree nosync nounwind memory(argmem: read) uwtable "target-cpu"="generic" "target-features"="+f,+d,+lsx,+lsx,+d,+f" }
attributes #1 = { inlinehint nofree nosync nounwind memory(argmem: write) uwtable "target-cpu"="generic" "target-features"="+f,+d,+lsx,+lsx,+d,+f" }
attributes #2 = { nofree nosync nounwind memory(none) uwtable "target-cpu"="generic" "target-features"="+f,+d,+lsx" }
```
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (3fee0f12e 2024-11-20)
binary: rustc
commit-hash: 3fee0f12e4f595948f8f54f57c8b7a7a58127124
commit-date: 2024-11-20
host: loongarch64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
`rustc -Z unstable-options --print target-spec-json`:
```
{
"arch": "loongarch64",
"code-model": "medium",
"crt-objects-fallback": "false",
"crt-static-respected": true,
"data-layout": "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128",
"direct-access-external-data": false,
"dynamic-linking": true,
"env": "gnu",
"features": "+f,+d,+lsx",
"has-rpath": true,
"has-thread-local": true,
"linker-flavor": "gnu-cc",
"llvm-abiname": "lp64d",
"llvm-target": "loongarch64-unknown-linux-gnu",
"max-atomic-width": 64,
"metadata": {
"description": "LoongArch64 Linux, LP64D ABI (kernel 5.19, glibc 2.36)",
"host_tools": true,
"std": true,
"tier": 2
},
"os": "linux",
"position-independent-executables": true,
"relro-level": "full",
"supported-sanitizers": [
"address",
"leak",
"memory",
"thread",
"cfi"
],
"supported-split-debuginfo": [
"packed",
"unpacked",
"off"
],
"supports-xray": true,
"target-family": [
"unix"
],
"target-pointer-width": "64"
}
``` | A-LLVM,T-compiler,A-SIMD,C-bug,llvm-fixed-upstream,O-loongarch | low | Critical |
2,678,624,394 | pytorch | torch.nn.functional.conv2d computes incorrect output when dilation>1 and dtype=torch.float64 on Windows | ### 🐛 Describe the bug
On Windows builds of PyTorch >=2.4.0, there seems to be a bug in `torch.nn.functional.conv2d` when handling `dilation` together with `float64`.
Code:
```
x = torch.ones(2, 1, 8, 8)
w = torch.ones(4, 1, 3, 3)
print(torch.nn.functional.conv2d(x, w, dilation=2).max())
print(torch.nn.functional.conv2d(x.double(), w.double(), dilation=2).max())
```
Outputs:
```
tensor(9.)
tensor(81., dtype=torch.float64) # clearly wrong
```
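For reference, the correct maximum is dtype-independent: with an all-ones input and an all-ones 3x3 kernel, every output element is the sum of `kernel_size**2` ones, i.e. 9. A naive pure-Python dilated cross-correlation (a hypothetical reference check, independent of PyTorch) confirms this:

```python
def dilated_conv2d_max_ones(h, w, k, dilation):
    """Max output of conv2d(ones(h, w), ones(k, k), dilation=dilation), naively."""
    span = dilation * (k - 1) + 1  # receptive-field extent of the dilated kernel
    out_h, out_w = h - span + 1, w - span + 1
    best = float("-inf")
    for i in range(out_h):
        for j in range(out_w):
            # all-ones input and kernel: every tap contributes 1.0
            acc = sum(1.0 for _ in range(k) for _ in range(k))
            best = max(best, acc)
    return best

print(dilated_conv2d_max_ones(8, 8, 3, 2))  # 9.0
```

So the `float32` result (9.0) is correct and the `float64` result (81.0) is not.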
- It only happens on Windows builds and on CPU.
- It also depends on shapes: it seems that `C_out` must be at least 4 and `H, W` must be greater than `2*kernel_size+1`.
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Education (10.0.22631 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.10 | packaged by Anaconda, Inc. | (main, Oct 3 2024, 07:22:26) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Ti
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 9 7900X 12-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 4701
MaxClockSpeed: 4701
L2CacheSize: 12288
L2CacheSpeed: None
Revision: 24834
Versions of relevant libraries:
[pip3] torch==2.4.0
[conda] torch 2.4.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | high priority,module: binaries,module: windows,module: convolution,triaged,module: correctness (silent),module: intel | low | Critical |
2,678,627,964 | PowerToys | PowerToys Run: Doesn't show container in VSCode Workspaces | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
The container doesn't show up in VSCode Workspaces

Only remote server folders show up

Is it because the container on the remote server is not recognized?
### ✔️ Expected Behavior
list some containers
### ❌ Actual Behavior
no containers show up
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,678,645,102 | kubernetes | daemonset rolling update stuck | ### What happened?


I found that the daemonset got stuck during a rolling update: one pod was never updated.
Alternatively, the daemonset's numberUnavailable may be calculated incorrectly.
In addition, when deleting failed pods, the daemonset does not delete them all at once; instead, it deletes one pod every few minutes. The log is as follows:
```
I1121 13:08:27.938981 10 daemon_controller.go:849] "Found failed daemon pod on node, will try to kill it" pod="nce-omp/hofsfusedeviceplugin-d9p2d" node="caasnode1"
I1121 13:08:27.939171 10 controller_utils.go:609] "Deleting pod" controller="hofsfusedeviceplugin" pod="nce-omp/hofsfusedeviceplugin-d9p2d"
I1121 13:08:27.939480 10 event.go:307] "Event occurred" object="nce-omp/hofsfusedeviceplugin" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Warning" reason="FailedDaemonPod" message="Found failed daemon pod nce-omp/hofsfusedeviceplugin-d9p2d on node caasnode1, will try to kill it"
I1121 13:08:27.953742 10 event.go:307] "Event occurred" object="nce-omp/hofsfusedeviceplugin" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: hofsfusedeviceplugin-d9p2d"
I1121 13:23:27.940867 10 daemon_controller.go:849] "Found failed daemon pod on node, will try to kill it" pod="nce-omp/hofsfusedeviceplugin-85zzt" node="caasnode1"
I1121 13:23:27.941014 10 controller_utils.go:609] "Deleting pod" controller="hofsfusedeviceplugin" pod="nce-omp/hofsfusedeviceplugin-85zzt"
I1121 13:23:27.941335 10 event.go:307] "Event occurred" object="nce-omp/hofsfusedeviceplugin" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Warning" reason="FailedDaemonPod" message="Found failed daemon pod nce-omp/hofsfusedeviceplugin-85zzt on node caasnode1, will try to kill it"
I1121 13:23:27.953140 10 event.go:307] "Event occurred" object="nce-omp/hofsfusedeviceplugin" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: hofsfusedeviceplugin-85zzt"
```
### What did you expect to happen?
The daemonset should complete the rolling update correctly.
### How can we reproduce it (as minimally and precisely as possible)?
N/A
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.30
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/apps,needs-triage | low | Critical |
2,678,668,564 | PowerToys | [Power Toys-System Tool >Text Extractor]: Name is not defined for the Cancel icon button on Power Toy Text Extractor. | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
TextExtractor
### Steps to reproduce
**Test Environment:**
**OS Build:** Windows 11 version Dev (OS Build 27749.1000)
**Application:** PowerToys version: v0.86.0
**Screen reader:** Narrator
**Repro Steps:**
1. Open the Microsoft PowerToys app and turn on Narrator.
2. Go to the "System Tools" tab and expand it.
3. Use the down arrow key to navigate to "Text Extractor" and press Enter.
4. Toggle on the "Enable Text Extractor" option.
5. Press "Windows+Shift+T".
6. Use the Tab key to navigate to the "Cancel" button and listen for the announcement.
**Note:**
When navigating to this button with JAWS or NVDA, these screen readers only announce the role ("button") without the name.
**User experience:**
Screen reader users are unable to understand the purpose of the Cancel button as it lacks a proper accessible name. This creates usability challenges, especially for users relying on assistive technologies.
**Guideline Reference:**
https://www.w3.org/WAI/WCAG22/Understanding/name-role-value.html
### ✔️ Expected Behavior
The "Cancel" button should be announced with the name first, followed by the role.
### ❌ Actual Behavior
When navigating to the "Cancel" button, Narrator announces it as "button cancel," which seems unusual. Typically, the name of the control is announced first, followed by its role.
### Other Software
https://github.com/user-attachments/assets/dd94b751-5d89-46bd-842e-27b443c4445c
| Issue-Bug,Resolution-Fix Committed,Area-Accessibility,A11yE+D,A11ySev3,A11yWCAG,Product-Text Extractor | low | Minor |
2,678,696,436 | PowerToys | [Power Toys-System Tool >Text Extractor]: When navigating to the 'Select Language' combo box, the screen reader does not announce the currently selected language. | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
TextExtractor
### Steps to reproduce
**Test Environment:**
**OS Build:** Windows 11 version Dev (OS Build 27749.1000)
**Application:** PowerToys version: v0.86.0
**Screen reader:** Narrator
**Repro Steps:**
1. Open the Microsoft PowerToys app and turn on Narrator.
2. Go to the "System Tools" tab and expand it.
3. Use the down arrow key to navigate to "Text Extractor" and press Enter.
4. Toggle on the "Enable Text Extractor" option.
5. Press "Windows+Shift+T".
6. Use the Tab key to navigate to the "Select language" combo box and listen for the announcement.
**User experience:**
Screen reader users are unable to know the currently selected language when navigating to the combo box, leading to confusion and difficulty in making informed choices.
**Guideline Reference:**
https://www.w3.org/WAI/WCAG22/Understanding/info-and-relationships.html
### ✔️ Expected Behavior
Screen reader should announce the currently selected language in the combo box.
### ❌ Actual Behavior
When navigating to the "Select Language" combo box, the screen reader does not announce the currently selected language.
### Other Software
https://github.com/user-attachments/assets/fe02f82c-97ff-447e-86d7-af06c16ee8f9
| Issue-Bug,Resolution-Fix Committed,Area-Accessibility,A11yE+D,A11ySev3,A11yWCAG,Product-Text Extractor | low | Minor |
2,678,709,937 | react-native | InputAccessoryView stopped working with multiple Inputs using the same inputAccessoryViewID | ### Description
After upgrading the project to Expo 52 I discovered, that the InputAccessoryView stopped working, if there are multiple TextInputs, that share the same inputAccessoryViewID.
For clarity, I reproduced the issue in the snack.expo.dev directly in the example on InputAccessoryView.
If you run the example with Expo 51 (react-native v0.74.5) everything works as expected, it doesn't matter, if I focus the first or the second TextInput.
But if I change the Expo version to 52 (react-native v0.76.2), the InputAccessoryView shows only after focusing the first TextInput.
At first I reported the issue in the expo repository https://github.com/expo/expo/issues/33069 .
But it looks like the issue is in the react-native library
### Steps to reproduce
1. Open https://snack.expo.dev/LrCtZwQpPEm0SswYOmSQn
2. In bottom right corner select Expo v52.0.0 and enable Preview
3. In upper right corner select iOS
4. When the app starts running click on the first input -> The InputAccessoryView is there
5. Then click on the second input -> The InputAccessoryView disappears
If you repeat these steps with Expo v51.0.0, the InputAccessoryView is visible for both inputs.
### React Native Version
0.76.2
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
.
```
### Stacktrace or Logs
```text
.
```
### Reproducer
https://snack.expo.dev/LrCtZwQpPEm0SswYOmSQn
### Screenshots and Videos
_No response_ | Component: InputAccessoryView,Needs: Triage :mag: | low | Major |
2,678,821,676 | PowerToys | [Power Rename] Use Date Modified rather than Date Created | ### Description of the new feature / enhancement
Use Date Modified rather than Date Created
Even better, add an option and/or regex flag to specify which date is to be used.
Date Created is the least useful of all the likely available dates in my experience. It is usually the date the file was created on the current system, not the date the file was originally created or was last modified. The only date less useful would be date accessed.
There seem to be several similar requests which have been closed for no apparent reason.
### Scenario when this would be used?
I currently use other third-party software to rename using date modified or date taken etc., but it would be nice to be able to do simple timestamp-based renaming in this tool. I find that date created is almost never useful - for example, for content copied from a mobile device to a computer date created is usually the date that the files were copied (almost never useful) whereas date modified will be the time the files were actually last modified (actually useful)
Some examples where this would be useful:
- Renaming photos taken on a phone and copied to my computer
- Renaming files created on any external non-Windows system and copied to my computer where the created date will be when they were copied (not useful) and the modified date will be when they were actually last modified (useful, and depending on the file, probably when it was actually created on its original system)
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,678,822,654 | rust | Using `Release` in the `store` operation for `make_mut` just prevent out-of-thin-air value? | In https://doc.rust-lang.org/src/alloc/sync.rs.html#2267, the `make_mut` is implemented as
````rust
if this.inner().strong.compare_exchange(1, 0, Acquire, Relaxed).is_err(){
// ...
}else if this.inner().weak.load(Relaxed) != 1 { // #0
//...
}else{
this.inner().strong.store(1, Release); // #1
}
````
while `upgrade` is
````rust
if self.inner()?.strong.fetch_update(Acquire, Relaxed, checked_increment).is_ok(){
Some(...)
}else{
None
}
````
The `Release` at `#1` concerns this case
````rust
// thread 1:
let mut_ref = Arc::make_mut(&mut my_arc);
*mut_ref = 10;
//thread 2:
let arc = weak.upgrade().unwrap();
drop(weak);
println!("{}", *arc);
````
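As an aside, the single-threaded contract that these orderings protect can be observed directly (a minimal sketch, not part of the original report): once `make_mut` runs with only `Weak` pointers outstanding, the value is moved into a new allocation and the old `Weak` can no longer `upgrade`.

```rust
use std::sync::Arc;

// Single-threaded view of the make_mut / upgrade interaction: with a unique
// strong reference but an outstanding Weak, make_mut gives the Arc a fresh
// allocation and the old Weak is disassociated (its strong count stays 0).
pub fn make_mut_disassociates_weak() -> (i32, bool) {
    let mut a = Arc::new(5);
    let w = Arc::downgrade(&a);
    *Arc::make_mut(&mut a) = 10; // strong == 1, but weak count > 1
    (*a, w.upgrade().is_none())
}

fn main() {
    let (v, weak_dead) = make_mut_disassociates_weak();
    assert_eq!((v, weak_dead), (10, true));
    println!("{v} {weak_dead}"); // prints "10 true"
}
```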
In this case, `upgrade` will return `Some` if it reads `#1`, and `#1` is executed only if `#0` reads the value written by `weak.fetch_sub(1, Release)` in `drop(weak);`, so the model can be simplified as
````
weak = 2;
strong = 0;
// thread 1:
if weak.load(Relaxed) == 1{ // #1
strong.store(1, Relaxed) // if the Release is changed to Relaxed // #2
}
// thread 2:
while strong.load(Relaxed) !=0{ // #3
}
weak.store(1, Relaxed); // #4
````
`#1` reads `#4`; `#4` is executed only if `#3` reads `#2`; and `#2` is executed only if `#1` reads `#4`. This cycle depends on an out-of-thin-air value. Do we need a `release` memory order here to prevent OOTA? | T-libs,C-discussion,A-atomic | low | Minor |
2,678,945,628 | langchain | Can't import ChatOpenAI: The `__modify_schema__` method is not supported in Pydantic v2. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code straight from the docs fails already at the import statement:
```
from typing import Optional
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")
# Pydantic
class Joke(BaseModel):
"""Joke to tell user."""
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(default=None, description="How funny the joke is, from 1 to 10")
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
```
### Error Message and Stack Trace (if applicable)
```
C:\Users\Egor\Dropbox\Code\langchain\libs\partners\openai\langchain_openai\chat_models\__init__.py:1: LangChainDeprecationWarning: As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. The langchain_core.pydantic_v1 module was a compatibility shim for pydantic v1, and should no longer be used. Please update the code to import from Pydantic directly.
C:\Users\Egor\Dropbox\Code\langchain\libs\partners\openai\langchain_openai\chat_models\__init__.py:1: LangChainDeprecationWarning: As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. The langchain_core.pydantic_v1 module was a compatibility shim for pydantic v1, and should no longer be used. Please update the code to import from Pydantic directly.
For example, replace imports like: `from langchain_core.pydantic_v1 import BaseModel`
with: `from pydantic import BaseModel`
or the v1 compatibility namespace if you are working in a code base that has not been fully upgraded to pydantic 2 yet. from pydantic.v1 import BaseModel
from langchain_openai.chat_models.azure import AzureChatOpenAI
C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'allow_population_by_field_name' has been renamed to 'populate_by_name'
warnings.warn(message, UserWarning)
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Egor\Dropbox\Code\langchain\libs\partners\openai\langchain_openai\__init__.py", line 1, in <module>
from langchain_openai.chat_models import AzureChatOpenAI, ChatOpenAI
File "C:\Users\Egor\Dropbox\Code\langchain\libs\partners\openai\langchain_openai\chat_models\__init__.py", line 1, in <module>
from langchain_openai.chat_models.azure import AzureChatOpenAI
File "C:\Users\Egor\Dropbox\Code\langchain\libs\partners\openai\langchain_openai\chat_models\azure.py", line 41, in <module>
from langchain_openai.chat_models.base import BaseChatOpenAI
File "C:\Users\Egor\Dropbox\Code\langchain\libs\partners\openai\langchain_openai\chat_models\base.py", line 353, in <module>
class BaseChatOpenAI(BaseChatModel):
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_model_construction.py", line 226, in __new__
complete_model_class(
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_model_construction.py", line 658, in complete_model_class
schema = cls.__get_pydantic_core_schema__(cls, handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\main.py", line 697, in __get_pydantic_core_schema__
return handler(source)
^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_schema_generation_shared.py", line 84, in __call__
schema = self._handler(source_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 612, in generate_schema
schema = self._generate_schema_inner(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 881, in _generate_schema_inner
return self._model_schema(obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 693, in _model_schema
{k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 693, in <dictcomp>
{k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 1073, in _generate_md_field_schema
common_field = self._common_field_schema(name, field_info, decorators)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 1261, in _common_field_schema
schema = self._apply_annotations(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 2051, in _apply_annotations
schema = get_inner_schema(source_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_schema_generation_shared.py", line 84, in __call__
schema = self._handler(source_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 2032, in inner_handler
schema = self._generate_schema_inner(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 886, in _generate_schema_inner
return self.match_type(obj)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 988, in match_type
return self._match_generic_type(obj, origin)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 1016, in _match_generic_type
return self._union_schema(obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 1323, in _union_schema
choices.append(self.generate_schema(arg))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 614, in generate_schema
metadata_js_function = _extract_get_pydantic_json_schema(obj, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Egor\.conda\envs\motleycrew3.11\Lib\site-packages\pydantic\_internal\_generate_schema.py", line 2384, in _extract_get_pydantic_json_schema
raise PydanticUserError(
pydantic.errors.PydanticUserError: The `__modify_schema__` method is not supported in Pydantic v2. Use `__get_pydantic_json_schema__` instead in class `SecretStr`.
For further information visit https://errors.pydantic.dev/2.10/u/custom-json-schema
Process finished with exit code 1
```
### Description
Trying to use the latest langchain_openai fails immediately at import time with the PydanticUserError shown above.
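For context, the exception comes from pydantic's `_extract_get_pydantic_json_schema` (visible near the bottom of the trace), which rejects classes that still carry the Pydantic v1 schema hook. A dependency-free sketch that mimics that check (illustrative names only, not pydantic's actual implementation):

```python
def extract_json_schema_hook(cls: type):
    # Pydantic v2 refuses the v1-era __modify_schema__ hook unless the
    # class also provides the v2 replacement, __get_pydantic_json_schema__.
    if hasattr(cls, "__modify_schema__") and not hasattr(
        cls, "__get_pydantic_json_schema__"
    ):
        raise TypeError(
            "`__modify_schema__` is not supported in Pydantic v2. "
            f"Use `__get_pydantic_json_schema__` instead in class `{cls.__name__}`."
        )
    return getattr(cls, "__get_pydantic_json_schema__", None)

class OldSecretStr:  # stands in for the v1-style SecretStr named in the trace
    @classmethod
    def __modify_schema__(cls, field_schema: dict) -> None:
        field_schema.update(type="string", writeOnly=True)

try:
    extract_json_schema_hook(OldSecretStr)
except TypeError as exc:
    print(exc)
```

Any class in the import chain still defining only the v1 hook (here, a pydantic v1 `SecretStr`) trips this check, which matches the reported traceback.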
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.15 | packaged by Anaconda, Inc. | (main, Oct 3 2024, 07:22:19) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.19
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.144
> langchain_experimental: 0.3.3
> langchain_openai: 0.2.9
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.6
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.55.0
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.10.0
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
``` | investigate,Ɑ: core | low | Critical |
2,678,948,641 | langchain | DOC: Docs describe FAISS get_by_id but it isn't implemented | ### URL
https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.faiss.FAISS.html#langchain_community.vectorstores.faiss.FAISS.get_by_ids
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Docs for FAISS describe get_by_id but it doesn't seem to be implemented as yet.
### Idea or request for content:
I suggest either implementing get_by_id or removing it from the docs. | 🤖:docs,investigate | low | Minor |
2,678,999,877 | next.js | Server action does not finish request on file-size-exceeded error when middleware is used (internal error not handled) | ### Link to the code that reproduces this issue
https://github.com/liri2006/next-file-upload-issue
### To Reproduce
1. Ensure app uses middleware.ts file
2. Start the app
3. Upload file > 1mb
4. Submit form
5. Check Network tab in browser
### Current vs. Expected behavior
Expected:
Error is returned to the client.
Actual:
Request stays in a permanent "pending" state, so the form submit never ends. The error is printed in the server logs, though.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32536
Available CPU cores: 16
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.12.3
Relevant Packages:
next: 15.0.4-canary.21 // Latest available version is detected (15.0.4-canary.21).
eslint-config-next: N/A
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed)
### Additional context
Works as expected if middleware file is deleted. | bug,Middleware,Runtime | low | Critical |
2,679,023,689 | excalidraw | Add additional characters to Excalifont | Let's gather here characters that Excalifont is missing and that should be considered for addition in the future:
- Kyrgyz: https://github.com/excalidraw/excalidraw/discussions/8810 | enhancement,font | low | Minor |
2,679,027,838 | excalidraw | UI scaling | I would like to have different scaling for excalidraw ui like idk maybe it happened after some update but now excalidraw ui is too big it auto scales with my obsidian ui but the thing is i have bad eyes and i need big obsidian ui so i can read but the excalidraw ui is too big and covers large area. I think we should have a feature to have separate scaling for excalidraw ui elements.
_Originally posted by @BlindAndInsane in https://github.com/excalidraw/excalidraw/discussions/8816_ | enhancement,UX/UI | low | Minor |
2,679,062,929 | flutter | Upgrade dependency of camera_android_camerax to support 16 KB memory page size | ### What package does this bug report belong to?
camera
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
camera_android_camerax:
dependency: transitive
description:
name: camera_android_camerax
sha256: e3627fdc2132d89212b8a8676679f5b07008c7e3d8ae00cea775c3397f9e742b
url: "https://pub.dev"
source: hosted
version: "0.6.10"
```
</details>
### Steps to reproduce
1. Build any app which depends on camera_android_camerax
2. Find the library `libimage_processing_util_jni.so` in the build directory (`find build -name libimage_processing_util_jni.so`) and check its alignment, for example:
```
objdump -p build/app/intermediates/stripped_native_libs/release/stripReleaseDebugSymbols/out/lib/arm64-v8a/libimage_processing_util_jni.so | grep LOAD
```
it prints out something like
```
LOAD off 0x0000000000000000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**12
LOAD off 0x000000000000193c vaddr 0x000000000000293c paddr 0x000000000000293c align 2**12
LOAD off 0x0000000000006570 vaddr 0x0000000000008570 paddr 0x0000000000008570 align 2**12
LOAD off 0x0000000000006918 vaddr 0x0000000000009918 paddr 0x0000000000009918 align 2**12
```
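The `align 2**N` fields can also be checked mechanically. A small stand-alone sketch (parsing the sample output above; not part of any Flutter or Android tooling) that computes the largest page size a library can run on:

```python
import re

# Sample `objdump -p` output for a 4 KB-aligned .so (from the report above).
SAMPLE = """\
LOAD off 0x0000000000000000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**12
LOAD off 0x000000000000193c vaddr 0x000000000000293c paddr 0x000000000000293c align 2**12
"""

def max_supported_page_size(objdump_output: str) -> int:
    # A shared library works on page sizes up to the smallest LOAD-segment
    # alignment, so the limiting factor is the minimum "align 2**N" value.
    exponents = [int(n) for n in re.findall(r"align 2\*\*(\d+)", objdump_output)]
    return 2 ** min(exponents)

size = max_supported_page_size(SAMPLE)
print(size)               # 4096
print(size >= 16 * 1024)  # False: not 16 KB page-size compatible
```

For 16 KB page-size support, the result must be at least 16384, i.e. every LOAD segment aligned to `2**14` or higher.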
### Expected results
Alignment should be greater than or equal to `2**14`
### Actual results
Alignment is `2**12`
### Code sample
<details open><summary>Code sample</summary>
```dart
// Original code without any changes, only specifying camera_android_camerax in pubspec.lock is enough.
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on macOS 14.7 23H124 darwin-arm64, locale en-DE)
• Flutter version 3.24.5 on channel stable at /Applications/here_av_excluded/Dependencies/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (8 days ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Applications/here_av_excluded/Dependencies/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Applications/here_av_excluded/Dependencies/Android/sdk
• Java binary at: /Applications/Android Studio 2.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
✗ Unable to get list of installed Simulator runtimes.
! CocoaPods 1.11.3 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2021.2)
• Android Studio at /Applications/Utilities/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 11.0.12+0-b1504.28-7817840)
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio 2.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] Android Studio (version 2023.1)
• Android Studio at /Users/khnykin/Downloads/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
[✓] IntelliJ IDEA Community Edition (version 2022.3.3)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.92.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.7 23H124 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.7 23H124 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.85
[✓] Network resources
• All expected network resources are available.
```
</details>
| platform-android,p: camera,package,P2,team-android,triaged-android | low | Critical |
2,679,064,391 | vscode | Status bar tooltip - add support for specifying a list of commands | It would be nice if extensions would be able to specify a list of commands which would then be passed on to the HoverService so that they are rendered in the hover status bar. | help wanted,feature-request,api,workbench-status | low | Minor |
2,679,069,925 | pytorch | FPE in `torch._scaled_dot_product_attention_math` | ### 🐛 Describe the bug
Under specific inputs, `torch._scaled_dot_product_attention_math` triggered a crash.
```python
import torch
query = torch.full((1,2,8,3,1,1,0,9,), 0, dtype=torch.float)
key = torch.full((0,3,7,1,1,2), 0, dtype=torch.float)
value = torch.full((6,0,1,3,), 0, dtype=torch.float)
attn_mask = None
dropout_p = 9.87654e+09
is_causal = True
dropout_mask = torch.full((1,3,64,64,), 0.5, dtype=torch.double)
scale = None
enable_gqa = True
torch._scaled_dot_product_attention_math(query=query, key=key, value=value, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal, dropout_mask=dropout_mask, scale=scale, enable_gqa=enable_gqa)
```
ASAN report:
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==261102==ERROR: AddressSanitizer: FPE on unknown address 0x7f04609d881a (pc 0x7f04609d881a bp 0x7ffcfcab2210 sp 0x7ffcfcab21e0 T0)
#0 0x7f04609d881a in decltype (((forward<long&>)({parm#1}))%((forward<long&>)({parm#2}))) std::modulus<void>::operator()<long&, long&>(long&, long&) const (/mnt/pytorch-2.5.0/torch/lib/libc10.so+0x11081a)
#1 0x7f04609c0ec1 in c10::SymInt::operator%(c10::SymInt const&) const /mnt/pytorch-2.5.0/c10/core/SymInt.cpp:69
#2 0x7f047474d436 in pre_process_group_query_attention_input /mnt/pytorch-2.5.0/aten/src/ATen/native/transformers/attention.cpp:646
#3 0x7f0474751c79 in at::native::_scaled_dot_product_attention_math(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) /mnt/pytorch-2.5.0/aten/src/ATen/native/transformers/attention.cpp:861
#4 0x7f0477fce8de in wrapper_CompositeImplicitAutograd___scaled_dot_product_attention_math /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:6911
#5 0x7f0478316f1f in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#6 0x7f0478316f1f in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#7 0x7f0476cb5bd2 in std::tuple<at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double&&, bool&&, std::optional<at::Tensor> const&, std::optional<double>&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#8 0x7f0476a44edf in std::tuple<at::Tensor, at::Tensor> c10::KernelFunction::call<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#9 0x7f0476a44edf in std::tuple<at::Tensor, at::Tensor> c10::Dispatcher::call<std::tuple<at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool>(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool)> const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#10 0x7f0476a44edf in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool)>::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#11 0x7f0476a44edf in at::_ops::_scaled_dot_product_attention_math::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:11567
#12 0x7f04b9965a2d in at::_scaled_dot_product_attention_math(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional<at::Tensor> const&, std::optional<double>, bool) (/mnt/pytorch-2.5.0/torch/lib/libtorch_python.so+0x22f9a2d)
#13 0x7f04b99322fd in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_0.cpp:11491
#14 0x7f04b9932b15 in THPVariable__scaled_dot_product_attention_math /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_0.cpp:11493
#15 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#16 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#17 0x55292a in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1502
#18 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#19 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#20 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#21 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#22 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#23 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#24 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#25 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#26 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#27 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#28 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#29 0x7f04c2375d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#30 0x7f04c2375e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#31 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE (/mnt/pytorch-2.5.0/torch/lib/libc10.so+0x11081a) in decltype (((forward<long&>)({parm#1}))%((forward<long&>)({parm#2}))) std::modulus<void>::operator()<long&, long&>(long&, long&) const
==261102==ABORTING
```
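The trace bottoms out in `c10::SymInt::operator%` inside `pre_process_group_query_attention_input`, i.e. an integer modulo whose divisor is zero because the empty input tensors yield a key/value head count of 0. A dependency-free sketch of the kind of input check that would turn the hardware FPE into a Python-level error (illustrative names, not PyTorch's actual code):

```python
def gqa_head_repeat(q_heads: int, kv_heads: int) -> int:
    # enable_gqa repeats key/value heads so that every query head maps to
    # one of them, which requires q_heads % kv_heads == 0. With the empty
    # tensors in the repro, kv_heads is 0, and a bare `q_heads % kv_heads`
    # is exactly the integer division-by-zero the ASAN report flags.
    if kv_heads == 0:
        raise ValueError("key/value head dimension must be non-zero")
    if q_heads % kv_heads != 0:
        raise ValueError("query heads must be a multiple of key/value heads")
    return q_heads // kv_heads

print(gqa_head_repeat(8, 2))  # 4
try:
    gqa_head_repeat(8, 0)
except ValueError as exc:
    print("caught:", exc)
```

With such a guard, the degenerate inputs above would raise a catchable ValueError instead of crashing the process.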
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: crash,triaged,module: edge cases,module: empty tensor,module: sdpa | low | Critical |
2,679,086,348 | langchain | PydanticUserError: `ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.memory import ConversationSummaryBufferMemory
memory = ConversationSummaryBufferMemory(
llm=chat,
input_key="input",
output_key="output",
max_token_limit=1024,
memory_key="chat_history",
)
```
### Error Message and Stack Trace (if applicable)
{
"name": "PydanticUserError",
"message": "`ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`.
For further information visit https://errors.pydantic.dev/2.10/u/class-not-fully-defined",
"stack": "---------------------------------------------------------------------------
PydanticUserError Traceback (most recent call last)
Cell In[45], line 3
1 from langchain.memory import ConversationSummaryBufferMemory
----> 3 memory = ConversationSummaryBufferMemory(
4 llm=chat,
5 input_key=\"input\",
6 output_key=\"output\",
7 max_token_limit=1024,
8 memory_key=\"chat_history\",
9 )
File c:\\xxxx\site-packages\\langchain_core\\_api\\deprecation.py:216, in deprecated.<locals>.deprecate.<locals>.finalize.<locals>.warn_if_direct_instance(self, *args, **kwargs)
214 warned = True
215 emit_warning()
--> 216 return wrapped(self, *args, **kwargs)
File xxxx\site-packages\\langchain_core\\_api\\deprecation.py:216, in deprecated.<locals>.deprecate.<locals>.finalize.<locals>.warn_if_direct_instance(self, *args, **kwargs)
214 warned = True
215 emit_warning()
--> 216 return wrapped(self, *args, **kwargs)
File xxxx\\site-packages\\langchain_core\\_api\\deprecation.py:216, in deprecated.<locals>.deprecate.<locals>.finalize.<locals>.warn_if_direct_instance(self, *args, **kwargs)
214 warned = True
215 emit_warning()
--> 216 return wrapped(self, *args, **kwargs)
File xxxx\\site-packages\\langchain_core\\load\\serializable.py:125, in Serializable.__init__(self, *args, **kwargs)
123 def __init__(self, *args: Any, **kwargs: Any) -> None:
124 \"\"\"\"\"\"
--> 125 super().__init__(*args, **kwargs)
File xxxx\\site-packages\\langchain_core\\_api\\deprecation.py:216, in deprecated.<locals>.deprecate.<locals>.finalize.<locals>.warn_if_direct_instance(self, *args, **kwargs)
214 warned = True
215 emit_warning()
--> 216 return wrapped(self, *args, **kwargs)
[... skipping hidden 1 frame]
File xxxx\site-packages\\pydantic\\_internal\\_mock_val_ser.py:100, in MockValSer.__getattr__(self, item)
98 # raise an AttributeError if `item` doesn't exist
99 getattr(self._val_or_ser, item)
--> 100 raise PydanticUserError(self._error_message, code=self._code)
PydanticUserError: `ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`.
For further information visit https://errors.pydantic.dev/2.10/u/class-not-fully-defined"
}
### Description
I've upgraded my project's langchain packages to the latest versions, and when I create a ConversationSummaryBufferMemory I encounter the PydanticUserError above.
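For context, the same error can be reproduced with a plain Pydantic v2 model. This is an illustrative sketch, not LangChain code: the names `Memory` and `BaseCache` below are stand-ins for `ConversationSummaryBufferMemory` and the `BaseCache` symbol it forward-references.

```python
from typing import Optional

from pydantic import BaseModel, PydanticUserError


class Memory(BaseModel):
    # Forward reference to a class that does not exist yet, mirroring how
    # the LangChain memory class ends up referencing an unimported BaseCache.
    cache: Optional["BaseCache"] = None


try:
    Memory()  # first use triggers the deferred schema build
except PydanticUserError as err:
    print(err.code)  # class-not-fully-defined


class BaseCache(BaseModel):
    pass


# Once the missing symbol is in scope, rebuilding resolves the reference.
Memory.model_rebuild()
print(Memory().cache)  # None
```

The error message hints at the same workaround for the real class: bring a `BaseCache` definition into scope and call `ConversationSummaryBufferMemory.model_rebuild()`. The exact import path for `BaseCache` depends on the installed LangChain version and should be checked against its source.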
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.10 | packaged by Anaconda, Inc. | (main, Oct 3 2024, 07:22:26) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.19
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.144
> langchain_openai: 0.2.9
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.6
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.55.0
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.10.0
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug,investigate | low | Critical |
2,679,098,999 | rust | Tracking Issue for `const_array_each_ref` | Feature gate: `#![feature(const_array_each_ref)]`
This is a tracking issue for supporting the `each_ref` and `each_mut` methods in `[T; N]` in constant expressions.
### Public API
```rust
impl<T, const N: usize> [T; N] {
pub const fn each_ref(&self) -> [&T; N];
pub const fn each_mut(&mut self) -> [&mut T; N];
}
```
- [x] Implementation: #133288
- [ ] Final comment period (FCP)
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
| T-libs-api,C-tracking-issue | low | Minor |
2,679,099,085 | excalidraw | Saving to the same file leads to data loss | **Reproduction**
1. save "old scene"
2. delete all elements (to start from scratch)
3. press ctrl + s
**Actual result**
- The previously saved "old scene" file is silently overwritten by the now-empty scene, losing its contents | discussion | low | Critical |
2,679,158,873 | pytorch | `torch.cond` example raises an AssertionError due to wrong return values from `true_fn` inside class | ### 📚 The doc issue
Hi folks,
I noticed that the example for data-dependent control flow raises an error when run (the `DynamicShapeCondPredicate(nn.Module)` class [here](https://pytorch.org/docs/stable/cond.html#examples)).
Should the `true_fn()` function inside `forward()` actually read as follows?
```
def forward(self, x: torch.Tensor) -> torch.Tensor:
def true_fn(x: torch.Tensor):
return x.cos() + x.sin()
```
Thanks in advance! (and thank you for this project).
### Suggest a potential alternative/fix
```
import torch
def true_fn(x: torch.Tensor):
return x.cos() + x.sin()
def false_fn(x: torch.Tensor):
return x.sin()
class DynamicShapeCondPredicate(torch.nn.Module):
"""
A basic usage of cond based on dynamic shape predicate.
"""
def __init__(self):
super().__init__()
def forward(self, x: torch.Tensor) -> torch.Tensor:
def true_fn(x: torch.Tensor):
return x.cos() + x.sin()
def false_fn(x: torch.Tensor):
return x.sin()
return torch.cond(x.shape[0] > 4, true_fn, false_fn, (x,))
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @zou3519 @bdhirsh @yf225 | oncall: pt2,oncall: export,module: pt2-dispatcher | low | Critical |
2,679,160,645 | flutter | Keyboard cover SearchAnchor list results | ### Steps to reproduce
- Run the code example bellow.
- Notice the suggestionsBuilder returns 30 `ListTile` widgets, but while the keyboard is visible you cannot scroll down to the last one (number 30)
### Expected results
The last element in the suggestionsBuilder list should be reachable by scrolling, even when the keyboard is shown.
In general, `resizeToAvoidBottomInset: true` has no effect in this case.
### Actual results
When the keyboard is visible, you can scroll only until the last element of the suggestionsBuilder list reaches the bottom of the screen, but the keyboard is drawn on top of it. Scrolling should continue until the last element reaches the top of the keyboard.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
/// Flutter code sample for pinned [SearchAnchor] while scrolling.
void main() {
runApp(const PinnedSearchBarApp());
}
class PinnedSearchBarApp extends StatefulWidget {
const PinnedSearchBarApp({super.key});
@override
State<PinnedSearchBarApp> createState() => _PinnedSearchBarAppState();
}
class _PinnedSearchBarAppState extends State<PinnedSearchBarApp> {
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData(
useMaterial3: true, colorSchemeSeed: const Color(0xff6750a4)),
home: Scaffold(
resizeToAvoidBottomInset: false,
body: SafeArea(
child: CustomScrollView(
slivers: <Widget>[
SliverAppBar(
clipBehavior: Clip.none,
shape: const StadiumBorder(),
scrolledUnderElevation: 0.0,
titleSpacing: 0.0,
backgroundColor: Colors.transparent,
floating:
true, // We can also uncomment this line and set `pinned` to true to see a pinned search bar.
title: SearchAnchor.bar(
suggestionsBuilder:
(BuildContext context, SearchController controller) {
return List<Widget>.generate(
30,
(int index) {
return ListTile(
titleAlignment: ListTileTitleAlignment.center,
title: Text('Initial list item $index ghgh'),
);
},
);
},
),
),
// The listed items below are just for filling the screen
// so we can see the scrolling effect.
SliverToBoxAdapter(
child: Padding(
padding: const EdgeInsets.all(20),
child: SizedBox(
height: 100.0,
child: ListView.builder(
scrollDirection: Axis.horizontal,
itemCount: 10,
itemBuilder: (BuildContext context, int index) {
return SizedBox(
width: 100.0,
child: Card(
child: Center(child: Text('Card $index')),
),
);
},
),
),
),
),
SliverToBoxAdapter(
child: Padding(
padding: const EdgeInsets.symmetric(horizontal: 20),
child: Container(
height: 1000,
color: Colors.deepPurple.withOpacity(0.5),
),
),
),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.1 24B83 darwin-arm64, locale en-MA)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Android Studio (version 2024.2)
[✓] VS Code (version 1.95.3)
[✓] Connected device (2 available)
[✓] Network resources
• No issues found!```
</details>
| a: text input,framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Major |
2,679,169,084 | PowerToys | C:\Users\DELL\AppData\Local\Microsoft\PowerToys\PowerToys Run\Logs\0.85.1.0\2024-11-21.txt | ### Microsoft PowerToys version
0.85.1.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Version: 0.85.1.0
OS Version: Microsoft Windows NT 10.0.22631.0
IntPtr Length: 8
x64: True
Date: 2024/11/21 19:33:41
Exception:
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.Collections.Generic.List`1.Enumerator.MoveNext()
at System.Windows.Shell.WindowChromeWorker._WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
at System.Windows.Interop.HwndSource.PublicHooksFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
at System.Windows.Threading.Dispatcher.LegacyInvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs)
at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Needs-Triage | low | Minor |
2,679,184,725 | go | cmd/compile: implement type-based alias analysis | ### Go version
go version devel go1.24-3ca78afb3b Mon Nov 18 04:56:52 2024 +0000 linux/arm64
### Output of `go env` in your module/workspace:
```shell
AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOARCH='arm64'
GOARM64='v8.0'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/home/abokhanko/.cache/go-build'
GODEBUG=''
GOENV='/home/abokhanko/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOGCCFLAGS='-fPIC -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build625133068=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='arm64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/dev/null'
GOMODCACHE='/home/abokhanko/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/abokhanko/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/abokhanko/goroot'
GOSUMDB='sum.golang.org'
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/abokhanko/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/abokhanko/goroot/pkg/tool/linux_arm64'
GOVCS=''
GOVERSION='devel go1.24-3ca78afb3b Mon Nov 18 04:56:52 2024 +0000'
GOWORK=''
PKG_CONFIG='pkg-config'
```
### What did you do?
Test case:
```
package test
func Foo(x **int, y *int) int {
*y = 10
*x = nil
return *y + 20
}
```
Compiled with `go build -a -gcflags='-d ssa/all/dump=Foo' test.go`.
### What did you see happen?
The load from the `*y` memory location, and thus the subsequent constant addition, is not eliminated:
```
$ cat Foo_51__trim.dump
...
(+6) v17 = MOVDload <int> v8 v15 : R1
(6) v19 = ADDconst <int> [20] v17 : R0
(6) v20 = MakeResult <int,mem> v19 v15 : <>
```
### What did you expect to see?
The load from `*y` eliminated, and the addition of the two constant values (`10`, the stored value of `*y`, and `20`) folded into a single constant. | Performance,NeedsInvestigation,compiler/runtime | low | Critical |
2,679,195,100 | tauri | [bug] tauri ios dev error | ### Describe the bug
I installed the Tauri 2.0 beta a few months ago and it worked great. This week I wanted to try the stable Tauri 2.0 release. I installed the latest Rust toolchain, created a demo with the `yarn create tauri-app` command, then created an iOS project with `yarn tauri ios init`, and added the `developmentTeam` configuration in tauri.conf.json as follows
```
"iOS": {
"minimumSystemVersion": "13.0",
"developmentTeam": "xxxxx"
}
```
But when I run `yarn tauri ios dev`, it reports an error, the log is as follows
```
Compiling tauri-plugin-shell v2.0.2
Compiling bitfunded v0.1.0 (/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri)
Compiling tauri-macros v2.0.3
error: failed to run custom build command for `tauri v2.1.1`
Caused by:
process didn't exit successfully: `/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/target/debug/build/tauri-5d479c0bbcafc4d9/build-script-build` (exit status: 101)
--- stdout
cargo:rustc-check-cfg=cfg(custom_protocol)
cargo:rustc-check-cfg=cfg(dev)
cargo:rustc-cfg=dev
cargo:dev=true
cargo:rustc-check-cfg=cfg(desktop)
cargo:rustc-check-cfg=cfg(mobile)
cargo:rustc-cfg=mobile
cargo:rustc-link-search=native=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphonesimulator
cargo:rustc-link-search=native=/usr/lib/swift
cargo:rustc-link-lib=clang_rt.iossim
cargo:rustc-link-search=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/16/lib/darwin
--- stderr
Fetching https://github.com/Brendonovich/swift-rs from cache
warning: 'swift-rs': skipping cache due to an error: Couldn’t fetch updates from remote repositories:
fatal: cannot use bare repository '/Users/adam/Library/Caches/org.swift.swiftpm/repositories/swift-rs-16819c90' (safe.bareRepository is 'explicit')
[1/1072] Fetching swift-rs
Fetched https://github.com/Brendonovich/swift-rs from cache (2.24s)
error: Couldn’t get the list of tags:
fatal: cannot use bare repository '/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/target/x86_64-apple-ios/debug/build/tauri-30986d7085224016/out/swift-rs/Tauri/repositories/swift-rs-16819c90' (safe.bareRepository is 'explicit')
thread 'main' panicked at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:281:17:
Failed to compile swift package Tauri
stack backtrace:
0: rust_begin_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:74:14
2: swift_rs::build::SwiftLinker::link
at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:281:17
3: tauri_utils::build::link_swift_library
at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-utils-2.1.0/src/build.rs:25:3
4: tauri_utils::build::link_apple_library
at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-utils-2.1.0/src/build.rs:11:5
5: build_script_build::main
at ./build.rs:323:7
6: core::ops::function::FnOnce::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Failed to run `cargo build`: command ["cargo", "build", "--package", "bitfunded", "--manifest-path", "/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/Cargo.toml", "--target", "x86_64-apple-ios", "--features", "tauri/rustls-tls", "--lib", "--no-default-features"] exited with code 101
Error Failed to run `cargo build`: command ["cargo", "build", "--package", "bitfunded", "--manifest-path", "/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/Cargo.toml", "--target", "x86_64-apple-ios", "--features", "tauri/rustls-tls", "--lib", "--no-default-features"] exited with code 101
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
note: Run script build phase 'Build Rust Code' will be run during every build because the option to run the script phase "Based on dependency analysis" is unchecked. (in target 'bitfunded_iOS' from project 'bitfunded')
** BUILD FAILED **
The following build commands failed:
PhaseScriptExecution Build\ Rust\ Code /Users/adam/Library/Developer/Xcode/DerivedData/bitfunded-etpzsrmfwcqiogavjyjdtpuvzkqo/Build/Intermediates.noindex/bitfunded.build/debug-iphonesimulator/bitfunded_iOS.build/Script-D61F0C1E9C49FED7F014D3FF.sh (in target 'bitfunded_iOS' from project 'bitfunded')
Building workspace bitfunded with scheme bitfunded_iOS and configuration debug
(2 failures)
command ["xcodebuild"] exited with code 65
Error command ["xcodebuild"] exited with code 65
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
### Reproduction
_No response_
### Expected behavior
Run the tauri project normally on the iOS simulator
### Full `tauri info` output
```text
yarn run v1.22.19
$ tauri info
[✔] Environment
- OS: Mac OS 15.1.0 x86_64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-apple-darwin (default)
- node: 20.11.1
- pnpm: 9.0.1
- yarn: 1.22.19
- npm: 10.2.4
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
```
### Stack trace
```
Compiling tauri-plugin-shell v2.0.2
Compiling bitfunded v0.1.0 (/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri)
Compiling tauri-macros v2.0.3
error: failed to run custom build command for `tauri v2.1.1`
Caused by:
process didn't exit successfully: `/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/target/debug/build/tauri-5d479c0bbcafc4d9/build-script-build` (exit status: 101)
--- stdout
cargo:rustc-check-cfg=cfg(custom_protocol)
cargo:rustc-check-cfg=cfg(dev)
cargo:rustc-cfg=dev
cargo:dev=true
cargo:rustc-check-cfg=cfg(desktop)
cargo:rustc-check-cfg=cfg(mobile)
cargo:rustc-cfg=mobile
cargo:rustc-link-search=native=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphonesimulator
cargo:rustc-link-search=native=/usr/lib/swift
cargo:rustc-link-lib=clang_rt.iossim
cargo:rustc-link-search=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/16/lib/darwin
--- stderr
Fetching https://github.com/Brendonovich/swift-rs from cache
warning: 'swift-rs': skipping cache due to an error: Couldn’t fetch updates from remote repositories:
fatal: cannot use bare repository '/Users/adam/Library/Caches/org.swift.swiftpm/repositories/swift-rs-16819c90' (safe.bareRepository is 'explicit')
[1/1072] Fetching swift-rs
Fetched https://github.com/Brendonovich/swift-rs from cache (2.24s)
error: Couldn’t get the list of tags:
fatal: cannot use bare repository '/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/target/x86_64-apple-ios/debug/build/tauri-30986d7085224016/out/swift-rs/Tauri/repositories/swift-rs-16819c90' (safe.bareRepository is 'explicit')
thread 'main' panicked at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:281:17:
Failed to compile swift package Tauri
stack backtrace:
0: rust_begin_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:74:14
2: swift_rs::build::SwiftLinker::link
at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:281:17
3: tauri_utils::build::link_swift_library
at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-utils-2.1.0/src/build.rs:25:3
4: tauri_utils::build::link_apple_library
at /Users/adam/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-utils-2.1.0/src/build.rs:11:5
5: build_script_build::main
at ./build.rs:323:7
6: core::ops::function::FnOnce::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Failed to run `cargo build`: command ["cargo", "build", "--package", "bitfunded", "--manifest-path", "/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/Cargo.toml", "--target", "x86_64-apple-ios", "--features", "tauri/rustls-tls", "--lib", "--no-default-features"] exited with code 101
Error Failed to run `cargo build`: command ["cargo", "build", "--package", "bitfunded", "--manifest-path", "/Users/adam/Desktop/TauriV2OfficialDemo/Bitfunded/src-tauri/Cargo.toml", "--target", "x86_64-apple-ios", "--features", "tauri/rustls-tls", "--lib", "--no-default-features"] exited with code 101
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
note: Run script build phase 'Build Rust Code' will be run during every build because the option to run the script phase "Based on dependency analysis" is unchecked. (in target 'bitfunded_iOS' from project 'bitfunded')
** BUILD FAILED **
The following build commands failed:
PhaseScriptExecution Build\ Rust\ Code /Users/adam/Library/Developer/Xcode/DerivedData/bitfunded-etpzsrmfwcqiogavjyjdtpuvzkqo/Build/Intermediates.noindex/bitfunded.build/debug-iphonesimulator/bitfunded_iOS.build/Script-D61F0C1E9C49FED7F014D3FF.sh (in target 'bitfunded_iOS' from project 'bitfunded')
Building workspace bitfunded with scheme bitfunded_iOS and configuration debug
(2 failures)
command ["xcodebuild"] exited with code 65
Error command ["xcodebuild"] exited with code 65
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
### Additional context
My development environment is MacBook Pro, Intel chip, MacOS15.1 Xcode version 16.1 | type: bug,status: upstream,platform: macOS,status: needs triage | low | Critical |
2,679,201,719 | storybook | "View external storybook" & "Read composition docs" controls are not accessible using keyboard in Windows and Mac. | ### Describe the bug
Title: "View external storybook" & "Read composition docs" controls are not accessible using keyboard in Windows and Mac.
User Impact:
Keyboard users, including those with motor disabilities or those who rely on assistive technologies, will be unable to interact with these controls. This prevents them from accessing important documentation and resources and from completing their intended task.
Actual Result:
"View external storybook" & "Read composition docs" controls are not accessible through keyboard in Windows and Mac.
Refer Attachment: MAC_View external storybook & Read composition docs controls are not accessible using keyboard

https://github.com/user-attachments/assets/ddfa40cb-e388-4a89-87d5-8f5cb05e93a8
Expected Result:
"View external storybook" & "Read composition docs" controls should be accessible through keyboard using tab key/up and down arrow keys in Windows and Mac.
### Reproduction link
About / Introduction - Page ⋅ Storybook (ambitious-cliff-0c8148010.2.azurestaticapps.net)
### Reproduction steps
1. Open URL: [About / Introduction - Page ⋅ Storybook (ambitious-cliff-0c8148010.2.azurestaticapps.net)](https://ambitious-cliff-0c8148010.2.azurestaticapps.net/?path=/story/about-introduction--page) in latest edge browser. Home page will be displayed
2. Press tab key to move to the icon beside 'React Wrappers' control and activate it using enter key.
3. Verify on pressing tab keys whether the "View external storybook" & "Read composition docs" controls are accessible through keyboard or not.
### System
```bash
Test Environment:
OS: Windows 11 Version 24H2 OS build 26100.1742
Browser: Microsoft New Edge Version129.0.2792.52(Official build) (64-bit)
Product: Horizon Framework
URL: About / Introduction - Page ⋅ Storybook (ambitious-cliff-0c8148010.2.azurestaticapps.net)
Mac Test Environment:
OS: macOS Sequoia 15.0.1
Browser: Microsoft New Edge Version130.0.2849.56 (Official build) (64-bit)
Product: Horizon Framework
URL: About / Introduction - Page ⋅ Storybook (ambitious-cliff-0c8148010.2.azurestaticapps.net)
```
### Additional context
_No response_ | bug,help wanted,accessibility | low | Critical |
2,679,211,504 | go | os: TestRootConsistencyLstat/dotdot_in_path_after_symlink failures [consistent failure] | ```
#!watchflakes
default <- pkg == "os" && test == "TestRootConsistencyLstat/dotdot_in_path_after_symlink"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730669498615587937)):
=== RUN TestRootConsistencyLstat/dotdot_in_path_after_symlink
root_test.go:793: with root: res=
root_test.go:794: err=statat a/../target: inappropriate file type or format
root_test.go:795: without root: res=name:"target" size:8 mode:-rw-r--r-- isdir:false
root_test.go:796: err=<nil>
root_test.go:797: want consistent results, got mismatch
root_test.go:807: without root, expected PathError; got: statat a/../target: inappropriate file type or format
--- FAIL: TestRootConsistencyLstat/dotdot_in_path_after_symlink (0.01s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,679,259,007 | react | Cannot read properties of undefined (reading 'getStackAddendum') | After building the application for production I get the error `Cannot read properties of undefined (reading 'getStackAddendum')`. I have split the application into microfrontends that are linked using Webpack Module Federation.
The error occurs because `ReactSharedInternals.ReactDebugCurrentFrame` is undefined.
The problem does not appear in the development build, only in the production build.
```
function printWarning(level, format, args) {
{
var ReactDebugCurrentFrame = ReactSharedInternals.ReactDebugCurrentFrame;
var stack = ReactDebugCurrentFrame.getStackAddendum();
if (stack !== '') {
format += '%s';
args = args.concat([stack]);
} // eslint-disable-next-line react-internal/safe-string-coercion
var argsWithFormat = args.map(function (item) {
return String(item);
}); // Careful: RN currently depends on this prefix
argsWithFormat.unshift('Warning: ' + format); // We intentionally don't use spread (or .apply) directly because it
// breaks IE9: https://github.com/facebook/react/issues/13610
// eslint-disable-next-line react-internal/no-production-logging
Function.prototype.apply.call(console[level], console, argsWithFormat);
}
}
```
package.json application shell
```
{
"name": "wes-shell-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"@cloudbeds/webpack-module-federation-types-plugin": "^1.18.0",
"@emotion/react": "^11.11.4",
"@emotion/styled": "^11.11.5",
"@joint/core": "^4.0.4",
"@mui/icons-material": "^5.11.6",
"@mui/lab": "^5.0.0-alpha.125",
"@mui/material": "^5.15.21",
"@mui/x-charts": "^7.8.0",
"@mui/x-date-pickers": "^5.0.18",
"@testing-library/jest-dom": "^5.16.5",
"@testing-library/react": "^13.4.0",
"@testing-library/user-event": "^14.4.3",
"@types/react": "^18.0.27",
"@types/react-dom": "^18.0.10",
"chart.js": "^4.4.3",
"color": "^4.2.3",
"date-fns": "^2.29.3",
"history": "^5.3.0",
"js-cookie": "^3.0.5",
"mobx": "^6.7.0",
"mobx-react": "^7.6.0",
"mobx-utils": "^6.0.5",
"notistack": "^3.0.1",
"path-to-regexp": "^6.2.1",
"react": "^18.2.0",
"react-chartjs-2": "^5.2.0",
"react-custom-scrollbars-2": "^4.5.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.13.0",
"react-scripts": "^5.0.1",
"typescript": "^4.9.4",
"web-vitals": "^3.0.4"
},
"scripts": {
"dev": "npm run make-types && npm run generate-env && craco start --verbose",
"make-types": "make-federated-types -o public/@types",
"generate-env": "cd .. && env.sh && cd wes-shell-app",
"serve": "serve -s build",
"mock-api": "node \"src/mocks/api.js\"",
"build": "craco build",
"test": "craco test",
"devcert": "set HTTPS=true&&set SSL_CRT_FILE=../wes-shell-app/cert.crt&&set SSL_KEY_FILE=../wes-shell-app/cert.key"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"@craco/craco": "^7.0.0",
"@types/color": "^3.0.6",
"@types/lodash": "^4.14.191",
"connect-api-mocker": "^1.10.0",
"express": "^4.18.2",
"i18next": "^22.5.1",
"i18next-browser-languagedetector": "^7.1.0",
"jwt-decode": "^3.1.2",
"lodash": "^4.17.21",
"react-barcode-reader": "^0.0.2",
"react-i18next": "^12.1.5",
"tailwindcss": "^3.3.3",
"zod": "^3.20.2"
}
}
```
package.json second aplication
```
{
"name": "wes-voice-picking-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"@cloudbeds/webpack-module-federation-types-plugin": "^1.18.0",
"@emotion/react": "^11.11.4",
"@emotion/styled": "^11.11.5",
"@mui/icons-material": "^5.11.16",
"@mui/material": "^5.15.21",
"@mui/x-charts": "^7.8.0",
"@testing-library/jest-dom": "^5.16.5",
"@testing-library/react": "^13.4.0",
"@testing-library/user-event": "^14.4.3",
"external-remotes-plugin": "^1.0.0",
"mobx": "^6.7.0",
"mobx-react": "^7.6.0",
"mobx-utils": "^6.0.5",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.13.0",
"react-scripts": "5.0.1",
"typescript": "^4.9.4",
"web-vitals": "^3.0.4"
},
"scripts": {
"dev": "npm run generate-env && craco start --verbose",
"generate-env": "cd .. && env.sh && cd wes-voice-picking-app",
"serve": "serve -s build -p 3003",
"build": "craco build",
"test": "craco test"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"@craco/craco": "^7.0.0",
"i18next": "^23.2.11",
"i18next-browser-languagedetector": "^7.1.0",
"react-i18next": "^13.0.2",
"tailwindcss": "^3.3.3"
}
}
```
| Status: Unconfirmed,Resolution: Needs More Information | low | Critical |
2,679,259,662 | next.js | The children of parallel routes page is displayed incorrectly when using browser navigation | ### Link to the code that reproduces this issue
https://github.com/vercel/next-app-router-playground
### To Reproduce
1. Open the url: https://app-router.vercel.app/patterns/breadcrumbs
2. The [Home] tab's children show 'Shared server-side UI that depends ....'
3. Click the [Electronics] tab; its children are a sub-breadcrumb and a box list
4. Click the browser's back navigation button
5. The [Home] tab's children now incorrectly show the box list from the [Electronics] tab
<img width="321" alt="截屏2024-11-21 21 10 02" src="https://github.com/user-attachments/assets/074102a6-9556-406f-a227-6605c7c26c7a">
<img width="318" alt="截屏2024-11-21 21 10 10" src="https://github.com/user-attachments/assets/cfe07bc0-9e7a-4d05-a666-863ccdf74d59">
<img width="418" alt="截屏2024-11-21 21 09 54" src="https://github.com/user-attachments/assets/f6f7e406-60b6-48d8-9a39-0ffb6bc78e5b">
### Current vs. Expected behavior
After clicking the browser's back button, the [Home] tab should show its children correctly.
This problem can only be reproduced in a production build; it cannot be reproduced in development.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: 9.4.0
Relevant Packages:
next: 15.0.4-canary.21 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 19.0.0-rc-65a56d0e-20241020
react-dom: 19.0.0-rc-65a56d0e-20241020
typescript: 5.5.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | bug | low | Minor |
2,679,269,613 | bitcoin | Avoid internet traffic from tests | ### Current behaviour
Tests should not try to open connections to the internet because:
* they may succeed or fail unpredictably, depending on the external environment
* they are slow
* they expose to the developer's ISP that they are running Bitcoin Core tests
### Expected behaviour
Tests should only open local connections (e.g. on the `lo` interface).
Enforce this in CI by detecting non-local traffic and failing accordingly.
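Bitcoin Core's functional tests are written in Python, so as an illustrative sketch (not the actual mechanism adopted by Bitcoin Core), the same property could be enforced at the test-framework level by monkeypatching `socket.socket.connect` to refuse anything that is not a loopback address:

```python
import ipaddress
import socket

_real_connect = socket.socket.connect

def _local_only_connect(self, address):
    """Refuse connections to anything that is not a loopback address.

    Only (host, port) tuples with literal IP addresses are handled;
    hostnames and AF_UNIX paths would need additional handling.
    """
    if isinstance(address, tuple):
        try:
            ip = ipaddress.ip_address(address[0])
        except ValueError:
            raise ConnectionRefusedError(
                f"blocked non-literal host: {address[0]!r}"
            )
        if not ip.is_loopback:
            raise ConnectionRefusedError(f"blocked non-local target: {ip}")
    return _real_connect(self, address)

# Install the guard for the whole test process.
socket.socket.connect = _local_only_connect
```

A CI-level check, as proposed here, would instead observe actual traffic (e.g. via packet capture), which also catches regressions outside Python.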
### Steps to reproduce
Run the functional tests and monitor the traffic.
### Relevant log output
_No response_
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
master@22ef95dbe3e467039e6cd18988e66557d94041d1
### Operating system and version
Ubuntu 28.04 LTS
---
This is being fixed in two parts:
* [x] Fix all tests as of Jan 15 2025: https://github.com/bitcoin/bitcoin/pull/31646
* [ ] Detect future regressions in CI: https://github.com/bitcoin/bitcoin/pull/31349 | Tests | low | Major |
2,679,309,310 | pytorch | Potential decode errors using the MSVC compiler in windows platforms | ### 🐛 Describe the bug
Problem description:
I am using an unofficially supported Triton build to enable the torch.compile() feature on Windows.
https://github.com/woct0rdho/triton-windows
As far as I know, it attempts to compile using MSVC on Windows.
I encountered the same error:https://github.com/pytorch/pytorch/issues/130668
I guess this is because MSVC uses the utf16 encoding format as the console output.
See:https://learn.microsoft.com/zh-cn/cpp/build/reference/unicode-support-in-the-compiler-and-linker?view=msvc-170
For example, here is what it prints when run on my personal computer:
```
PS C:\> cl
用于 x64 的 Microsoft (R) C/C++ 优化编译器 19.42.34433 版
版权所有(C) Microsoft Corporation。保留所有权利。
用法: cl [ 选项... ] 文件名... [ /link 链接选项... ]
```
A search on GitHub shows three places where this code is used:
`repo:pytorch/pytorch SUBPROCESS_DECODE_ARGS`
This is the main code that causes errors:
https://github.com/pytorch/pytorch/blob/ecf3bae40a6f2f0f3b237bde1fc4b2492765ab13/torch/_inductor/cpp_builder.py#L62
Changing utf-8 to cp936 could solve this problem, but more extensive testing is needed, and we need to start thinking about multi-encoding support at the code level in preparation for future Windows support.
I found a PR merged in 2020 that also addressed non-English decoding errors:
https://github.com/pytorch/pytorch/pull/49020
https://github.com/pytorch/pytorch/blob/ecf3bae40a6f2f0f3b237bde1fc4b2492765ab13/torch/utils/cpp_extension.py#L45
https://github.com/pytorch/pytorch/blob/ecf3bae40a6f2f0f3b237bde1fc4b2492765ab13/tools/dynamo/verify_dynamo.py#L46
According to this document, I think it is OK to set it to oem or ansi. I think "oem" is for supporting older hardware.
https://docs.python.org/zh-cn/3/library/codecs.html#text-encodings
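As an illustrative sketch of the kind of fallback decoding being suggested here (not the exact patch for `cpp_builder.py`, and the function name is made up), subprocess output can be decoded by trying UTF-8 first, then the locale's preferred encoding (e.g. cp936 on Simplified Chinese Windows), and finally replacing undecodable bytes so the call never raises:

```python
import locale
import subprocess
import sys

def decode_console_output(raw: bytes) -> str:
    """Decode compiler/console output without assuming it is UTF-8."""
    for enc in ("utf-8", locale.getpreferredencoding(False)):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    # Last resort: never raise, just substitute undecodable bytes.
    return raw.decode("utf-8", errors="replace")

raw = subprocess.run(
    [sys.executable, "-c", "print('hello')"], capture_output=True
).stdout
print(decode_console_output(raw).strip())
```

This keeps the happy path identical on UTF-8 systems while avoiding hard failures on cp936/OEM consoles.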
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 专业版 (10.0.22631 64 位)
GCC version: (x86_64-win32-seh-rev0, Built by MinGW-Builds project) 14.2.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: N/A
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i5-12500H
Manufacturer: GenuineIntel
Family: 205
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2500
MaxClockSpeed: 2500
L2CacheSize: 9216
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] lion-pytorch==0.2.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.20.1
[pip3] onnxscript==0.1.0.dev20241102
[pip3] onnxsim==0.4.36
[pip3] torch==2.5.1+cu124
[pip3] torch_tensorrt==2.5.0
[pip3] torchaudio==2.5.1+cu124
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] lion-pytorch 0.2.2 pypi_0 pypi
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.5.1+cu124 pypi_0 pypi
[conda] torch-tensorrt 2.5.0 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | module: windows,triaged,oncall: pt2,module: inductor | low | Critical |
2,679,317,687 | svelte | Add aria-description to HTML Attributes | ### Describe the bug
Using `aria-description` triggers the `unknown property` warning.
Aria description is global and should apply to all HTML tags: https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Attributes#global_aria_attributes
`aria-description` serves the same purpose as `aria-describedby`. However, the `aria-description` property should only be used when providing a visible description is not the desired user experience.
This is a regression since it should have been fixed here: https://github.com/sveltejs/svelte/issues/7301
### Reproduction
Just add `aria-description` to an HTML element and observe the compiler warning.
```
<div aria-description="hello">
World
</div>
```

### Logs
```shell
Object literal may only specify known properties, and '"aria-description"' does not exist in type 'HTMLProps<"div", HTMLAttributes<any>>'.js(2353)
```
### System Info
```shell
System:
OS: Linux 6.8 Ubuntu 24.04.1 LTS 24.04.1 LTS (Noble Numbat)
CPU: (32) x64 13th Gen Intel(R) Core(TM) i9-13900K
Memory: 24.94 GB / 31.05 GB
Container: Yes
Shell: 5.2.21 - /bin/bash
Binaries:
Node: 20.17.0 - ~/.nvm/versions/node/v20.17.0/bin/node
npm: 10.8.2 - ~/.nvm/versions/node/v20.17.0/bin/npm
pnpm: 9.8.0 - ~/.local/share/pnpm/pnpm
Browsers:
Chrome: 131.0.6778.85
npmPackages:
svelte: 5.0.0 => 5.0.0
```
### Severity
annoyance | types / typescript,a11y | low | Critical |
2,679,319,461 | next.js | Backport request: Server action that fails in middleware fails silently | ### Link to the code that reproduces this issue
https://github.com/nphmuller/next-server-action-middleware-error
### To Reproduce
1. `npm run dev`
2. Click button
Code from repro:
middleware.ts:
```
export default async function Middleware(request: NextRequest) {
if (request.method === "POST") {
return NextResponse.json("Example: token invalid", { status: 401 });
}
}
```
page.tsx:
```
export default function Home() {
return (
<button
onClick={async () => {
try {
await actionThatThrowsViaMiddleware();
} catch {
console.log("Caught error");
}
}}
>
Click for action that throws
</button>
);
}
```
actions.ts:
```
"use server";
export const actionThatThrowsViaMiddleware = async () => {};
```
### Current vs. Expected behavior
Expected: Console logs `Caught error`
Actual Next 14: No console log
Actual Next 15: Console logs `Caught error`
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.13.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.6.0
Relevant Packages:
next: 14.2.18 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: 15.0.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
⚠ An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Middleware, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
This has been fixed sometime during Next 15 development. Some of my apps are currently stuck on Next 14 (due to dependencies that are still incompatible with React 19), so it would be really nice to have this fix in Next 14. | bug,Middleware,Runtime | low | Critical |
2,679,344,531 | vscode | Disabling smooth scrolling doesn't work | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Linux x64 6.11.5-1-default
OS: opensuse tumbleweed
package: from zypper code-1.95.3-1731513157.el8.x86_64
Steps to Reproduce:
1. code --disable-extensions --disable-smooth-scrolling
2. configuration:
```
{
"workbench.colorTheme": "Solarized Dark",
"editor.unicodeHighlight.nonBasicASCII": false,
"[python]": {
"editor.formatOnType": true
},
"search.useGlobalIgnoreFiles": true,
"search.useIgnoreFiles": true,
"search.exclude": {
"**": true
},
"python.defaultInterpreterPath": "/home/ashaposhnikov/local.venv",
"[sql]": {
"editor.defaultFormatter": "adpyke.vscode-sql-formatter"
},
"files.watcherExclude": {
"**/.git/objects/**": true,
"**/.git/subtree-cache/**": true,
"**/node_modules/*/**": true
},
"editor.smoothScrolling": false,
"window.zoomLevel": -1
}
```
3. scroll in text editor using mouse wheel
Some time ago, disabling smooth scrolling worked as expected: scrolling was instant.
Now a scrolling animation occurs (which is smooth scrolling in my understanding) regardless of the setting's value.
I tried setting window.zoomLevel to -1, 0, 1, and 2 without success.
| bug,editor-scrollbar | low | Critical |
2,679,351,832 | go | x/tools/gopls/internal/test: spurious failures due to EBADF on netbsd/arm | ```
#!watchflakes
default <- pkg ~ `golang.org/x/tools/gopls/internal/test` && `bad file descriptor` && goos == "netbsd" && goarch == "arm"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730748669696124305)):
=== RUN TestPackageCompletion/package_completion_on_terminal_newline/default
completion_test.go:220: completion item mismatch (-want +got):
[]string{
- "package apple",
- "package apple_test",
"package fruits",
"package fruits_test",
"package main",
}
...
[Trace - 15:23:48.479 PM] Received notification 'textDocument/publishDiagnostics'.
Params: {"uri":"file:///home/swarming/.swarming/w/ir/x/t/gopls-test-1475936701/TestPackageCompletion/package_completion_on_terminal_newline/default/work/fruits/testfile6.go","version":1,"diagnostics":[]}
[Trace - 15:23:48.486 PM] Received notification '$/progress'.
Params: {"token":"8761737497789441884","value":{"kind":"end","message":"Done."}}
#### End Gopls Test Logs for "TestPackageCompletion/package_completion_on_terminal_newline/default"
--- FAIL: TestPackageCompletion/package_completion_on_terminal_newline/default (1.40s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| OS-NetBSD,NeedsInvestigation,Tools | low | Critical |
2,679,429,883 | rust | Nightly-only cfg gating like `#[cfg(target_has_atomic)]` is confusing | EDIT: this is *intended* that `cfg(target_has_atomic)` is nightly-only (not feature-gated), but I find it still very confusing.
---
Note that `#[cfg(target_has_atomic)]` is not to be confused with `#[cfg(target_has_atomic = "8")]`.
Given a snippet (just `rustc --crate-type="lib"`, no additional `--cfg` flags)
```rs
#[cfg(target_has_atomic)]
fn foo() {}
```
- On stable 1.82.0 this produces no warnings.
- On nightly 2024-11-20 this produces a `dead_code` warning indicating `target_has_atomic` cfg is enabled.
```
warning: function `foo` is never used
--> src/lib.rs:2:4
|
2 | fn foo() {}
| ^^^
|
= note: `#[warn(dead_code)]` on by default
```
AFAICT there is no test coverage for the intended behavior of *just* `#[cfg(target_has_atomic)]` alone; other test coverage is mostly for check-cfg.
cc @Urgau since you worked on builtin-cfg rejection do you know anything about if this is intended, and if so, what is the intended behavior here? | C-enhancement,A-stability,T-compiler,requires-nightly,A-cfg | low | Major |
2,679,463,220 | pytorch | Enable CI and Build Support for PyTorch on PPC64LE Architecture | **Description:**
We propose extending PyTorch support to the POWER/PPC64LE architecture to facilitate builds and CI workflows tailored to this platform.
**Background:**
1. We forked the pytorch/pytorch repository and successfully generated and tested wheels using a self-hosted CI runner on an OSU PPC64LE machine.
2. The changes in our forked repository include:
**Workflows:** Added a new job for PPC64LE in .github/workflows/ppc64le.yaml.
**Dockerfiles:** Created .github/workflows/dockerfile.ppc64le.
**Build Script:** Added .github/scripts/ppc64le-build.sh.
These changes were part of a proof-of-concept (POC) to demonstrate feasibility.
3. Currently, we are working on incorporating these modifications into common files such as .github/workflows/_linux-build.yml to align with existing workflows.
**Fork Information:**
**Repository:** [sandeepgupta12/pytorch](https://github.com/sandeepgupta12/pytorch/tree/using-dockerfile-buildscript-first-success).
**Request:**
We seek your input on:
- The overall feasibility and design of the proposed changes.
- Best practices for integrating architecture-specific workflows.
- Any additional considerations for upstreaming support for PPC64LE.
**Additional Information:**
We are ready to provide any further information or assistance required to facilitate this integration. Please let us know if there are additional steps or requirements to move forward.
cc @malfet @seemethere | module: build,triaged,enhancement,module: POWER | low | Minor |
2,679,478,244 | next.js | I can't run "next-translate" example. Page 404 error. | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
# This is a non-commercial version of StackBlitz.
# If you’re using this for business purposes, please purchase a license here.
~/projects/kkviljzkwp.github
❯ npm install && npx next dev
added 37 packages in 2s
8 packages are looking for funding
run `npm fund` for details
▲ Next.js 15.0.3
- Local: http://localhost:3000
✓ Starting...
Downloading swc package @next/swc-wasm-nodejs... to /home/.cache/next-swc
✓ Ready in 13.3s
○ Compiling /_not-found ...
<w> [webpack.cache.PackFileCacheStrategy] Caching failed for pack: Error: Can't resolve 'tunnel-agent' in '/home/projects/kkviljzkwp.github/node_modules/sharp'
<w> while resolving 'tunnel-agent' in /home/projects/kkviljzkwp.github/node_modules/sharp to a directory
✓ Compiled /_not-found in 7.4s (571 modules)
next-translate - compiled page: / - locale: en - namespaces: common - used loader: server /layout
GET / 404 in 7755ms
<w> [webpack.cache.PackFileCacheStrategy] Caching failed for pack: Error: Can't resolve 'tunnel-agent' in '/home/projects/kkviljzkwp.github/node_modules/sharp'
<w> while resolving 'tunnel-agent' in /home/projects/kkviljzkwp.github/node_modules/sharp to a directory
<w> [webpack.cache.PackFileCacheStrategy] Caching failed for pack: Error: Can't resolve 'tunnel-agent' in '/home/projects/kkviljzkwp.github/node_modules/sharp'
<w> while resolving 'tunnel-agent' in /home/projects/kkviljzkwp.github/node_modules/sharp to a directory
```
### Which example does this report relate to?
with-next-translate
### What browser are you using? (if relevant)
Chrome 131.0.6778.70
### How are you deploying your application? (if relevant)
preview live with StackBlitz
### Describe the Bug
I ran the sample program, but it showed a 404 error, which I don't think is expected.
### Expected Behavior
The sample screen is displayed and the translation function runs normally.
### To Reproduce
If you use the StackBlitz deployment in the README, a failure screen will appear. | Upstream,examples | low | Critical |
2,679,497,733 | vscode | Terminal suggest: Support resolving environment variables in directories | This is somewhat shell-dependent:
- We want `cd $HOME` to show the actual home directory on the right side. Currently our pwsh support doesn't do this:

But it does resolve the actual path and show it for `.` and `..`:

- We want `cd $HOME/` to provide the same completions as `cd ~/` for the resolved `$HOME` folder
- This is not just for `$HOME` but for any environment variable
- Different shells may have different syntax for this, but the above is the baseline which works in the majority of shells (including pwsh). Interestingly `cd ${env:HOME}` works when run in pwsh but `cd ${env:HOME}\source` and `cd "${env:HOME}\source"` do not | feature-request,terminal-suggest | low | Minor |
2,679,530,753 | transformers | Offline mode doesn't work with models that require `trust_remote_code=True` | ### System Info
Google Colab:
- `transformers` version: 4.46.3
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (False)
- Tensorflow version (GPU?): 2.17.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.5 (cpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
#### Issue
Models that require `trust_remote_code=True` can't be fully saved & loaded with `save_pretrained()` + `from_pretrained()`.
In [offline mode](https://huggingface.co/docs/transformers/v4.46.2/en/installation#offline-mode) on a new machine, calling `from_pretrained()` doesn't locate all the required files in the saved local dir and tries to reach out to the HF Hub for the remote-code part.
#### How to reproduce
[Colab](https://colab.research.google.com/drive/1_yX94aJngMChp6pANWg6Z6F_9G8Kyb3k?usp=sharing) | [Kaggle](https://www.kaggle.com/code/vladyslavkha/hf-offline-mode-trust-remote-code-models-issue)
Tested with popular [`jinaai/jina-embeddings-v3`](https://huggingface.co/jinaai/jina-embeddings-v3)
Includes step by step reproduction + results
---
## Additional context
- Stumbled on this in Kaggle Notebook env for competition.
Some Kaggle competitions require submitting code in Kaggle Notebooks, which are run later on private data and **don't allow internet access**.
Practically, this means you must prepare all models in advance and upload them as dependencies to the submission notebook.
So having transformers trying to reach out to hf-hub (when the model is already pre-downloaded) is not an option and disqualifies a group of models from usage.
- Originally raised in https://github.com/UKPLab/sentence-transformers/issues/2613.
Received guidance by @tomaarsen in https://github.com/UKPLab/sentence-transformers/issues/2613#issuecomment-2076964416 to overcome the issue with `sentence-transformers` (code snippet included)
- Raising as a bug report, LMK if better to reraise as FR. Would be happy to try to contribute if confirmed
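The workaround linked above amounts to bundling the dynamically downloaded remote-code modules alongside the saved model. A minimal sketch of that idea (the `bundle_remote_code` name is made up, and the `HF_MODULES_CACHE` variable and `~/.cache/huggingface/modules` default are assumptions about transformers' current cache layout):

```python
import os
import pathlib
import shutil

def bundle_remote_code(save_dir, modules_cache=None):
    """Copy Hugging Face's dynamic-module cache next to a saved model.

    After `save_pretrained(save_dir)`, the remote code fetched via
    `trust_remote_code=True` still lives only in the modules cache;
    copying it into the save dir lets a fully offline machine load it.
    """
    cache = pathlib.Path(
        modules_cache
        or os.environ.get("HF_MODULES_CACHE")
        or pathlib.Path.home() / ".cache" / "huggingface" / "modules"
    )
    target = pathlib.Path(save_dir) / "hf_modules"
    if cache.exists():
        shutil.copytree(cache, target, dirs_exist_ok=True)
    return target
```

On the offline machine, one could then point `HF_MODULES_CACHE` at the bundled `hf_modules` directory before calling `from_pretrained()`.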
### Expected behavior
Model is fully saved in local dir with `save_pretrained()` and can be fully loaded from a local path with `from_pretrained()` in offline mode | bug | low | Critical |
2,679,532,284 | vscode | [Remote-SSH] Extension Host with Copilot Fails on VS Code 1.95.3 but Works on 1.93.0 |
Type: <b>Bug</b>
### Description
We are encountering an issue with the GitHub Copilot extension when using Visual Studio Code with Remote-SSH on an HPC login node. The extension works flawlessly with VS Code 1.93.0 (4849ca9bdf9666755eb463db297b69e5385090e3) but fails on VS Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813). Interestingly, VS Code 1.95.3 works fine on the computing node of the same cluster, accessed via SSH forwarding from the login node.
To eliminate variables, we have tested the setup thoroughly:
- Two personal computers were used:
- One running VS Code 1.93.0. (We didn't record the version of the extensions when we were testing, and VS Code 1.93.0 auto-updated to 1.95.3 when we tried to check the versions. Please tell us if this information is really needed. **Update:** Might be `1.245.0` / `0.20.3` from the log file `~/.vscode-server/data/logs/20241121T175642/remoteagent.log`)
- The other running VS Code 1.95.3, Copilot 1.245.0 and Copilot Chat 0.22.4.
- Both connect to the same HPC login node and same account using Remote-SSH.
- Before each login, the `.vscode-server` directory on the remote machine was cleaned up to ensure a fresh environment.
- On the remote machine, GitHub Copilot and GitHub Copilot Chat were the only extensions installed.
### Expected Behavior
The GitHub Copilot extension should initialize and function correctly on both VS Code versions.
### Actual Behavior
- On VS Code 1.95.3, the Copilot extension causes the Extension Host process to crash only on the login node.
- The issue does not occur on the computing node or when using VS Code 1.93.0 on the login node.
- Disabling the GitHub Copilot extensions eliminates the issue.
### Test by VS Code Bisect
Done on our login node with VS Code 1.95.3. Result:
`Extension Bisect is done and has identified github.copilot as the extension causing the problem.`
At first, we did not know that it was possible to disable GitHub Copilot alone while keeping GitHub Copilot Chat enabled.
Later, we found that **when VS Code Bisect disables GitHub Copilot 1.245.0 alone, GitHub Copilot Chat 0.22.4 works normally** on our login node with VS Code 1.95.3.
### Logs
Below are logs captured from the Extension Host (remote) process on both the login and computing nodes running VS Code 1.95.3. These logs highlight differences observed during initialization:
#### Login Node (Fails)
```
2024-11-21 16:34:43.523 [trace] ExtHostCommands#registerCommand github.copilot.openLogs
2024-11-21 16:34:43.523 [trace] ExtHostCommands#registerCommand github.copilot.signIn
2024-11-21 16:34:43.530 [trace] ExtensionService#_callActivateOptional GitHub.copilot-chat
2024-11-21 16:34:43.555 [trace] extHostWorkspace#findFiles2: fileSearch, extension: GitHub.copilot-chat, entryPoint: findFiles2
2024-11-21 16:34:43.556 [trace] ProxyResolver#tls.connect [{"highWaterMark":16384,"servername":"default.exp-tas.com","session":"null","localAddress":"null","ALPNProtocols":"http/1.1","port":443,"host":"default.exp-tas.com"}]
2024-11-21 16:34:43.562 [debug] ProxyResolver#resolveProxy unconfigured http://169.254.169.254/metadata/instance/compute DIRECT
2024-11-21 16:34:43.693 [trace] ProxyResolver#tls.connect [443, "default.exp-tas.com", {"servername":"default.exp-tas.com","ALPNProtocols":"h2,http/1.1,http/1.0","signal":"[object AbortSignal]","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:34:43.723 [trace] ProxyResolver#tls.connect [443, "api.github.com", {"servername":"api.github.com","ALPNProtocols":"h2,http/1.1,http/1.0","signal":"[object AbortSignal]","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:34:43.760 [debug] ExtHostSearch /work1/user141421/.vscode-server/cli/servers/Stable-f1a4fb101478ce6ec82fe9627c43efbf9e98c813/server/node_modules/@vscode/ripgrep/bin/rg --files --hidden --case-sensitive --no-require-git -g '!**/.git' -g '!**/.svn' -g '!**/.hg' -g '!**/CVS' -g '!**/.DS_Store' -g '!**/Thumbs.db' -g '!**/node_modules' -g '!**/bower_components' -g '!**/*.code-search' --no-ignore-parent --follow --no-config --no-ignore-global
- cwd: /home/user141421
- Sibling clauses: {}
2024-11-21 16:34:43.964 [trace] ProxyResolver#tls.connect [443, "api.github.com", {"servername":"api.github.com","ALPNProtocols":"h2,http/1.1,http/1.0","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:34:43.998 [trace] ExtHostCommands#registerCommand github.copilotChat.signIn
[skip some lines]
2024-11-21 16:34:44.005 [trace] ExtHostCommands#registerCommand github.copilot.buildLocalWorkspaceIndex
2024-11-21 16:34:44.020 [debug] ExtHostSearch Search finished. Stats: {"cmdTime":266,"fileWalkTime":266,"directoriesWalked":0,"filesWalked":0,"cmdResultCount":28852}
2024-11-21 16:34:44.020 [debug] Ext host file search time: 266ms
2024-11-21 16:34:44.174 [trace] ExtHostCommands#registerCommand codereferencing.showOutputPane
2024-11-21 16:34:44.176 [trace] ProxyResolver#tls.connect [443, "copilot-telemet[39 chars]", {"servername":"copilot-telemet[39 chars]","ALPNProtocols":"h2,http/1.1,http/1.0","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:34:44.215 [trace] ExtHostCommands#executeCommand setContext
2024-11-21 16:34:44.215 [trace] ExtHostCommands#executeCommand _setContext
2024-11-21 16:34:44.215 [trace] ExtHostCommands#registerCommand github.copilot.generate
2024-11-21 16:34:44.215 [trace] ExtHostCommands#registerCommand github.copilot.acceptCursorPanelSolution
2024-11-21 16:34:44.215 [trace] ExtHostCommands#registerCommand github.copilot.previousPanelSolution
2024-11-21 16:34:44.215 [trace] ExtHostCommands#registerCommand github.copilot.nextPanelSolution
2024-11-21 16:34:44.216 [trace] ExtHostCommands#registerCommand _github.copilot.ghostTextPostInsert
[Process Crash]
```
#### Computing Node (Works)
```
2024-11-21 16:30:49.433 [trace] ExtHostCommands#registerCommand github.copilot.openLogs
2024-11-21 16:30:49.433 [trace] ExtHostCommands#registerCommand github.copilot.signIn
[Compare to above, no extra message here.]
2024-11-21 16:30:49.454 [trace] ExtHostCommands#registerCommand github.copilotChat.signIn
[skip the same lines as above]
2024-11-21 16:30:49.459 [trace] ExtHostCommands#registerCommand github.copilot.buildLocalWorkspaceIndex
2024-11-21 16:30:49.664 [trace] ProxyResolver#tls.connect [443, "default.exp-tas.com", {"servername":"default.exp-tas.com","ALPNProtocols":"h2,http/1.1,http/1.0","signal":"[object AbortSignal]","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:30:49.694 [trace] ProxyResolver#tls.connect [443, "api.github.com", {"servername":"api.github.com","ALPNProtocols":"h2,http/1.1,http/1.0","signal":"[object AbortSignal]","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:30:49.981 [trace] ProxyResolver#tls.connect [443, "api.github.com", {"servername":"api.github.com","ALPNProtocols":"h2,http/1.1,http/1.0","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:30:50.116 [trace] ExtHostCommands#registerCommand codereferencing.showOutputPane
2024-11-21 16:30:50.118 [trace] ProxyResolver#tls.connect [443, "copilot-telemet[39 chars]", {"servername":"copilot-telemet[39 chars]","ALPNProtocols":"h2,http/1.1,http/1.0","rejectUnauthorized":true,"ca":"[281 certs]"}]
2024-11-21 16:30:50.156 [trace] ExtHostCommands#executeCommand setContext
2024-11-21 16:30:50.156 [trace] ExtHostCommands#executeCommand _setContext
2024-11-21 16:30:50.156 [trace] ExtHostCommands#registerCommand github.copilot.generate
2024-11-21 16:30:50.156 [trace] ExtHostCommands#registerCommand github.copilot.acceptCursorPanelSolution
2024-11-21 16:30:50.156 [trace] ExtHostCommands#registerCommand github.copilot.previousPanelSolution
2024-11-21 16:30:50.156 [trace] ExtHostCommands#registerCommand github.copilot.nextPanelSolution
2024-11-21 16:30:50.156 [trace] ExtHostCommands#registerCommand _github.copilot.ghostTextPostInsert
2024-11-21 16:30:50.176 [debug] ProxyResolver#loadSystemCertificates count 137
2024-11-21 16:30:50.191 [debug] ProxyResolver#loadSystemCertificates count filtered 134
2024-11-21 16:30:50.191 [debug] ProxyResolver#resolveProxy unconfigured https://mobile.events.data.microsoft.com/OneCollector/1.0?cors=true&content-type=application/x-json-stream DIRECT
2024-11-21 16:30:50.192 [trace] ProxyResolver#tls.connect [{"protocol":"https:","hostname":"mobile.events.d[32 chars]","port":443,"path":"null","method":"POST","headers":"[object Object]","agent":"[object Object]","_defaultAgent":"[object Object]","host":"mobile.events.d[32 chars]","lookupProxyAuthorization":"[Function: bound dz]","noDelay":true,"servername":"mobile.events.d[32 chars]","secureEndpoint":true,"_vscodeAdditionalCaCerts":"[134 certs]","keepAlive":true,"scheduling":"lifo","timeout":5000,"_agentKey":"mobile.events.d[57 chars]","encoding":"null","keepAliveInitialDelay":1000}]
[Process keeps running without crashing]
```
### Additional Notes
- Network configuration remains the same across tests on both the login node and the computing node. Firewalls are disabled on all nodes.
- The system administrator has confirmed that no OS-level rules (e.g., process termination) are in effect on the login node.
- All of the hardware is exactly the same for the login node and the computing nodes in our cluster.
### Key questions
- **Why does it work on VS Code 1.93.0 but not on VS Code 1.95.3?**
- Why does VS Code 1.95.3 work on our computing node but not on the login node?
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Remote OS version: Linux x64 5.15.0-78-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 x 2808)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.89GB (1.49GB free)|
|Process Argv|--log info --crash-reporter-id edf07c68-6f96-49c4-a777-c6a188cd4542|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: Spock|
|OS|Linux x64 5.15.0-78-generic|
|CPUs|AMD Ryzen Threadripper PRO 5975WX 32-Cores (64 x 1793)|
|Memory (System)|251.70GB (244.89GB free)|
|VM|0%|
</details><details><summary>Extensions (7)</summary>
Extension|Author (truncated)|Version
---|---|---
copilot|Git|1.245.0
copilot-chat|Git|0.22.4
jupyter-keymap|ms-|1.1.2
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
remote-explorer|ms-|0.4.3
vscode-speech|ms-|0.12.1
(1 theme extensions excluded)
</details>
| upstream,remote,nodejs,mitigated | medium | Critical |
2,679,534,045 | PowerToys | Magnifier | ### Description of the new feature / enhancement
With a magnifier, users can view tiny text written on image files in a PDF.
### Scenario when this would be used?
There's text on an image, but I can't zoom into the image because it is in a PDF file. If you zoom in on the browser, the image doesn't zoom in along with it. The only option is to copy the image out as a JPG file and then zoom into that, which is tedious.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,679,647,364 | vscode | Terminal suggest: Fork fig into our own repository | We've been experimenting with https://github.com/withfig/autocomplete for a while and the way we're thinking of going forward is to fork the fig autocompletions into our own repo and serve our own npm module. The reasons for this approach are:
- Fig has become not very active since the company behind it was acquired, with the majority of commits being automated bumps to aws specs and CLA signing. No regular accounts got anything merged in the past month despite 10 new PRs coming in:

- The npm module shipped for fig is not very friendly to be consumed by a TS project, it doesn't include the TS typings for example signalling it hasn't had much external use.
- To add proper support for localization, which seems like it would be a very big challenge unless we actually owned the repo.
- To replace emoji icons with a completion kind, it's then up to the client how those kinds map to icons.
- Better support for lazy loading the specs
- To get rid of things we don't want and simplifying the spec as needed (pending further investigation).
- Invest in generation tooling to reduce the maintenance burden.
- The Windows Terminal team has signalled interest in using/helping to maintain it (cc @zadjii-msft, @DHowett).
We've started this exploration by forking just the [code.ts](https://github.com/microsoft/vscode/blob/9508be851891834c4036da28461824c664dfa2c0/extensions/terminal-suggest/src/completions/code.ts) and [code-insiders.ts](https://github.com/microsoft/vscode/blob/9508be851891834c4036da28461824c664dfa2c0/extensions/terminal-suggest/src/completions/code-insiders.ts) files. These will be used as our test bed for further investigating capabilities, polishing those particular specs as they've been neglected, figuring out how to proceed, etc. | feature-request,terminal-suggest | low | Minor |
2,679,657,754 | pytorch | test_fsdp_tp_integration fails for non-power-of 2 GPUs | ### 🐛 Describe the bug
Running the `test_fsdp_tp_integration` with a number of GPUs that is (likely) not a power of 2 fails with e.g.:
```
torch.testing._internal.common_distributed: [ERROR] File "/dev/shm//pytorch/test/distributed/fsdp/test_fsdp_tp_integration.py", line 157, in _sync_tp_grads
torch.testing._internal.common_distributed: [ERROR] per_param_masks = unsharded_zeros.split(splits)
torch.testing._internal.common_distributed: [ERROR] File "/tmp/easybuild-install/lib/python3.10/site-packages/torch/_tensor.py", line 864, in split
torch.testing._internal.common_distributed: [ERROR] return torch._VF.split_with_sizes(self, split_size, dim)
torch.testing._internal.common_distributed: [ERROR] RuntimeError: split_with_sizes expects split_sizes to sum exactly to 105 (input tensor's size at dimension 0), but got split_sizes=[20, 4, 16, 4, 48, 12]
```
I investigated a bit: The module is the `SimpleModel` with
```
self.net1 = torch.nn.Linear(5, 8)
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(8, 4)
self.net3 = torch.nn.Linear(4, 12)
```
Only `"net1.weight", "net1.bias", "net2.weight"` are sharded leading to a [name-size mapping](https://github.com/pytorch/pytorch/blob/dcf7728fd6c0cee55f060c2afb73c6a9b2176f41/test/distributed/fsdp/test_fsdp_tp_integration.py#L152) of `OrderedDict([('net1.weight', 20), ('net1.bias', 4), ('net2.weight', 16), ('net2.bias', 4), ('net3.weight', 48), ('net3.bias', 12)])` which is the `splits` parameter with a total of 104 elements.
Then we have `unsharded_size = flat_param.numel() * fsdp_world_size = flat_param.numel() * self.world_size // tp_world_size = flat_param.numel() * 6 // 2 = 35 * 3 = 105`:
https://github.com/pytorch/pytorch/blob/dcf7728fd6c0cee55f060c2afb73c6a9b2176f41/test/distributed/fsdp/test_fsdp_tp_integration.py#L154
I imagine there is some rounding going on when distributing the 104 parameters:
- 104 parameters for each of the tensor-parallel shards
- 2 tensor-parallel shards distributed on 6 GPUs -> 1 per 3 GPUs
- 104 / 3 = 34 + 2/3 -> ~35
And the decomposition seems to not take that properly into account.
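The rounding described above can be sketched in a few lines (a hypothetical illustration using the numbers from this issue, not the test's actual code):

```python
import math

# Per-parameter sizes on one tensor-parallel shard (from the error message).
splits = [20, 4, 16, 4, 48, 12]
per_tp_shard = sum(splits)                             # 104 real elements
world_size, tp_world_size = 6, 2
fsdp_world_size = world_size // tp_world_size          # 3 FSDP ranks per TP shard
# FSDP pads the flat parameter so it splits evenly across the 3 ranks:
flat_param_numel = math.ceil(per_tp_shard / fsdp_world_size)  # 35 (rounded up)
unsharded_size = flat_param_numel * fsdp_world_size    # 105
print(unsharded_size - per_tp_shard)                   # 1
```

The single extra padding element is exactly the mismatch `split_with_sizes` complains about (105 vs. 104).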
### Versions
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: 11.7
OS: AlmaLinux release 8.7 (Stone Smilodon) (ppc64le)
GCC version: (GCC) 11.3.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.28
Python version: 3.10.4 (main, Feb 11 2023, 06:30:51) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-425.19.2.el8_7.ppc64le-ppc64le-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,679,657,839 | react | Bug: Problem with StrictMode in development (useEffect + useRef/Memo). useEffect ignores first value of useRef/Memo and uses second twice | React version 18.3.1
StrictMode renders everything twice, and if the useRef value changes on the second render, useEffect uses the second version of the ref twice.
That can cause memory leaks during development and unexpected behavior in useEffect.
```ts
export function useTestHook() {
const value = useRef(Math.random());
console.log(value, "Given");
useEffect(() => {
console.log("Used in effect", value);
return () => {
console.log(value, "Cleared");
};
}, []);
}
```
output:
```
Object { current: 0.26757063192266095 } Given
Object { current: 0.3384111276512609 } Given
Used in effect Object { current: 0.3384111276512609 }
Object { current: 0.3384111276512609 } Cleared
Used in effect Object { current: 0.3384111276512609 }
```
The problem with that is that the reference to the first object (0.2675...) was lost,
and the second object (0.3384...) was cleared and then used again.
That breaks the program if the useRef/useMemo initializer is supposed to return a cleanup callback along with the object,
e.g.:
```ts
const [object, cleanup] = useRef(someFunc());
const [state, SetState] = useState(object.Get());
useEffect(() => {
object.OnChanged(SetState);
return cleanup;
}, []);
```
because it's going to run the cleanup for the newest object immediately after connecting and lose the reference to the object created during the first render. | Status: Unconfirmed | medium | Critical |
2,679,662,041 | go | x/mobile: go mobile bind, SIGSYS crash on older Android versions | ### Go version
go version go1.23.2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/root/.cache/go-build'
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/opt/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/opt/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/go_dist/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/go_dist/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/root/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/opt/go/src/github.com/BitBoxSwiss/bitbox-wallet-app/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2246426898=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Built an Android app that includes a go module compiled with
```
gomobile bind -x -a -glflags="-mod=readonly" -ldflags="-s -w" -target android -androidapi 21 .
```
And included it with a build.gradle dependency:
```
dependencies {
[...]
implementation project(path: ':myGoModule')
}
```
using
```
compileSdk 34
minSdkVersion 21
targetSdkVersion 34
```
### What did you see happen?
Since our last upgrade to go1.23, the compiled Android app crashes at startup on older Android versions (tested 9.0 and 10.0, i.e. API 28 and 29 on Android studio emulator) with the following error:
```
2024-11-21 15:13:36.524 17389-17472 GoLog ch.shiftcrypto.bitboxapp.debug E [WARNING:viz_main_impl.cc(85)] VizNullHypothesis is disabled (not a warning)
2024-11-21 15:13:36.780 17389-17472 GoLog ch.shiftcrypto.bitboxapp.debug E [1121/151336.779929:ERROR:elf_dynamic_array_reader.h(64)] tag not found
2024-11-21 15:13:36.796 17389-17470 libc ch.shiftcrypto.bitboxapp.debug A Fatal signal 31 (SIGSYS), code 1 (SYS_SECCOMP) in tid 17470 (bitboxapp.debug), pid 17389 (bitboxapp.debug)
2024-11-21 15:13:36.829 17489-17489 DEBUG pid-17489 A pid: 17389, tid: 17470, name: bitboxapp.debug >>> ch.shiftcrypto.bitboxapp.debug <<<
2024-11-21 15:13:36.829 17489-17489 DEBUG pid-17489 A #00 pc 00124bee /data/app/ch.shiftcrypto.bitboxapp.debug-E5gwW0mWuw-zSOShuc4uzA==/lib/x86/libgojni.so
```
Everything works fine for newer Android versions (e.g. 13.0).
Upgrading to Go 1.23.3 doesn't solve the issue, but reverting to Go 1.22.4 does, on all the Android versions tested.
### What did you expect to see?
The app should not crash on startup. | NeedsInvestigation,mobile | low | Critical |
2,679,663,243 | deno | Add support to test remote scripts, similar to "deno run https://examples.deno.land/hello-world.ts", but "deno test". | It would be really nice to run remote tests, like: `deno test -A https://example.com/my_test.ts`
Could this be added to Deno?
Thanks. | suggestion,help wanted,testing | low | Minor |
2,679,682,334 | flutter | [google_maps_flutter_web] Ability to animate camera with duration | ### Use case
Users should be able to configure the camera update duration on the Web platform.
PR https://github.com/flutter/packages/pull/7648 adds support to control camera animation duration to google_maps_flutter_platform_interface and adds implementation for Android and iOS.
Related to #39810
### Proposal
The Maps JavaScript API does not support setting an animation duration for camera movements by default.
The Maps JavaScript API documentation has an example of how to implement a custom animation duration with the Tween.js library:
https://developers.google.com/maps/documentation/javascript/examples/move-camera-ease
Similar behaviour could be achieved using a TickerProvider and stepping the camera position between the start (current) and end (target) positions. | c: new feature,p: maps,platform-web,package,c: proposal,P1,team-web,triaged-web | medium | Minor |
2,679,786,009 | flutter | [go_router_builder] while canPop hangs in infinite loop starting from v14.0.0 | ### Steps to reproduce
Hi @ValentinVignal
Unfortunately I do not have repro steps.
I tried to reproduce but failed to create small demo.
2 things I know for sure
1. The bug happens starting exactly from v14.
2. If I comment this line in `route_data.dart` the bug disappears.
```
return GoRoute(
path: path,
name: name,
builder: builder,
pageBuilder: pageBuilder,
redirect: redirect,
routes: routes,
parentNavigatorKey: parentNavigatorKey,
// onExit: onExit,
);
```
### Expected results
No hang
### Actual results
App freezes
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| platform-web,package,has reproducible steps,P1,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | medium | Critical |
2,679,822,680 | pytorch | Updating from torch 2.0 to 2.1/2.2/2.3/2.4/2.5 results in gradient explosion with SDPA and bf16 | ### 🐛 Describe the bug
We are training a huggingface transformer using pytorch native SDPA and the huggingface Trainer. We recently upgraded from torch 2.0, and found that **this change alone** caused our training runs to diverge.
We cannot upload any images and unfortunately are unable to share code since it is internal/proprietary. The training loss is almost identical across torch versions for over half the run, and then the runs diverge. The divergences look similar to this issue: https://github.com/pytorch/pytorch/issues/139298; however, grad_norm can sometimes reach values near 100.
Here are some comparisons before and after different changes (all with grad clipping and max grad norm of 1.0):
Torch 2.0 + bf16 + SDPA + lr=5e-5 + gradient accum = 2--> clean loss curve :white_check_mark:
Torch **2.1/2.2/2.3/2.4/2.5** + bf16 + SDPA + lr=5e-5 + gradient accum = 2 --> exploding gradients :x:
Torch 2.4 + bf16 + SDPA + lr=5e-5 + **gradient accum = 1** --> exploding gradients :x:
Torch 2.4 + **fp16** + SDPA + lr=5e-5 + gradient accum = 2 --> clean loss curve :white_check_mark:
Torch 2.5 + fp16 + SDPA + **lr=1e-4** + gradient accum = 2 -->exploding gradients :x:
Torch 2.4 + bf16 + No SPDA (eager) + lr=5e-5 + gradient accum = 2 --> clean loss curve :white_check_mark:
We also tried what [this issue](https://github.com/pytorch/pytorch/issues/139298) reported as a solution (`torch.backends.cuda.enable_cudnn_sdp(False)`), but this also resulted in exploding gradients.
We are unsure whether this is an issue with SDPA after 2.0 or just finicky pre-training. Is there anything that changed post torch 2.0 that could be related to this issue? Seems odd that upgrading torch alone would cause this shift in behavior. Any help would be appreciated!
### Versions
This is our environment after upgrading to torch 2.4
```
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.118
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0+cu121 pypi_0 pypi
[conda] torchvision 0.19.0+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | triaged,module: sdpa | low | Critical |
2,679,831,941 | godot | Behavior is different for Move, Rotate and Scale when trying to use them on a selection that only includes controls in a container. | ### Tested versions
Godot v4.4.dev4
### System information
Fedora Linux 40 (KDE Plasma) on Wayland - X11 display driver, Multi-window
### Issue description
Behavior is different for Move, Rotate and Scale when trying to use them on a selection that only includes controls in a container.
**Move mode** displays a toast warning. **This feels like the expected behavior.**
**Rotate mode** displays the same warning but changes the selection, so you end up moving things that are where you clicked.
**Scale mode** doesn't display any toast warning and changes the selection, so you end up moving things that are where you clicked.
Using Select mode with modifiers here:
https://github.com/user-attachments/assets/2d275abf-6629-40ee-bf80-3a0ca1c74b5c
### Steps to reproduce
^
### Minimal reproduction project (MRP)
[toast_and_spam (1).zip](https://github.com/user-attachments/files/17847722/toast_and_spam.1.zip)
| bug,topic:editor,topic:gui | low | Minor |
2,679,834,686 | PowerToys | Calculator stopped supporting scientific notation as lower case (2e9 vs 2E9) | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store, PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Scientific notation only works with uppercase E now. Lowercase used to work.
I could swear "1e9/20" would give me 50000000 until a few months ago, but now it offers to open a URL (https://1e9/20).
**Steps To Reproduce**
Alt-tab
Type "1e9/20" or "=1e9/20"
### ✔️ Expected Behavior
50000000
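As a point of reference (illustration only, not PowerToys code), most numeric parsers treat the exponent marker case-insensitively; Python's float parser, for example, matches the expected behavior above:

```python
# Both exponent cases parse to the same value.
assert float("1e9") == float("1E9") == 1_000_000_000.0
result = eval("1e9/20")
print(result)  # 50000000.0
```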
### ❌ Actual Behavior
**Screenshot**

### Other Software
_No response_ | Issue-Bug,Status-In progress,Run-Plugin | low | Minor |
2,679,837,164 | vscode | Task API for `ShellExecutionOptions.env` type does not support `undefined` values |
Type: <b>Bug</b>
The `TerminalOptions.env` API supports environment variable values that include both `string | undefined`.
The ask is for this to match the TerminalOptions API.
Usage:
We plan on using this in the Python extension where users can contribute environment variables via `.env` files. We want to make sure the experience is same for the env variable merge in both cases.
VS Code version: Code - Insiders 1.96.0-insider (69acde7458f428f0e6869de8915c9dd995cdda1a, 2024-11-21T05:04:38.064Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz (8 x 1498)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.60GB (12.74GB free)|
|Process Argv|--log trace --log ms-python.python=info --crash-reporter-id 4fb1ebc1-cf4c-4880-a88a-47738ec3768d|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (18)</summary>
Extension|Author (truncated)|Version
---|---|---
tsl-problem-matcher|amo|0.6.2
ruff|cha|2024.54.0
esbuild-problem-matchers|con|0.0.3
vscode-eslint|dba|3.0.10
gitlens|eam|16.0.2
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
copilot|Git|1.245.1221
copilot-chat|Git|0.23.2024112103
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.101.2024112104
debugpy|ms-|2024.13.2024111901
python|ms-|2024.21.0-dev
vscode-pylance|ms-|2024.11.101
remote-containers|ms-|0.388.0
remote-wsl|ms-|0.88.5
extension-test-runner|ms-|0.0.12
code-spell-checker|str|4.0.21
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vsc_aacf:30263846
pythonvspyt551:31179976
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
vscrpc:30624061
a9j8j154:30646983
962ge761:30841072
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
0ee40948:31013168
dvdeprecation:31040973
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
notreesitter:31116712
e80f6927:31120813
i21gd607:31141543
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
5b1c1929:31184661
```
</details>
| bug,api,tasks | low | Critical |
2,679,856,276 | flutter | Unable to change Radio() widget fillColor with ThemeData.radioTheme | ### Steps to reproduce
One cannot define a Theme for a Radio button widget which changes the `fillColor` :
```dart
Theme(
data: ThemeData(
radioTheme: RadioThemeData(
fillColor: MaterialStateProperty.resolveWith<Color>((Set<MaterialState> states) {
if (states.contains(MaterialState.disabled)) {
return Colors.blueAccent.withOpacity(.32);
}
return Colors.blueAccent;
}),
),
),
child: Radio(
value: deliveryOptions[0],
groupValue: currentDeliveryMethod,
onChanged: _radioChanged,
),
)
```
I believe it's because of the way Radio() widget is defined in
`./packages/flutter/lib/src/material/radio.dart`
especially:
line: 485
```dart
final Color? activeColor = widget.fillColor?.resolve(activeStates)
?? _widgetFillColor.resolve(activeStates)
?? radioTheme.fillColor?.resolve(activeStates);
```
where `_widgetFillColor` is always defined to be:
```dart
MaterialStateProperty<Color?> get _widgetFillColor {
return MaterialStateProperty.resolveWith((Set<MaterialState> states) {
if (states.contains(MaterialState.disabled)) {
return null;
}
if (states.contains(MaterialState.selected)) {
return widget.activeColor;
}
return null;
});
}
```
A hotfix for that would be to define the `fillColor` directly in the Radio widget:
```dart
Radio(
fillColor: MaterialStateProperty.resolveWith<Color>((Set<MaterialState> states) {
if (states.contains(MaterialState.disabled)) {
return Colors.blueAccent.withOpacity(.32);
}
return Colors.blueAccent;
}),
value: deliveryOptions[0],
groupValue: currentDeliveryMethod,
onChanged: _radioChanged,
),
```
### Expected results
Expected result would be to respect context.theme.themeData.radioTheme.fillColor data in Radio() widget
```dart
Theme(
data: ThemeData(
radioTheme: RadioThemeData(
fillColor: MaterialStateProperty.resolveWith<Color>((Set<MaterialState> states) {
if (states.contains(MaterialState.disabled)) {
return Colors.blueAccent.withOpacity(.32);
}
return Colors.blueAccent;
}),
),
),
child: Radio(
value: deliveryOptions[0],
groupValue: currentDeliveryMethod,
onChanged: _radioChanged,
),
)
```
### Actual results
Radio() widget ignores context.theme.themeData.radioTheme.fillColor and is always the default color unless one overrides fillColor directly in the Widget
### Code sample
<details open><summary>Code sample</summary>
```dart
Theme(
data: ThemeData(
radioTheme: RadioThemeData(
fillColor: MaterialStateProperty.resolveWith<Color>((Set<MaterialState> states) {
if (states.contains(MaterialState.disabled)) {
return Colors.blueAccent.withOpacity(.32);
}
return Colors.blueAccent;
}),
),
),
child: Radio(
value: deliveryOptions[0],
groupValue: currentDeliveryMethod,
onChanged: _radioChanged,
),
)
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.5 23F79 darwin-arm64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.3)
[✓] Android Studio (version 2024.2)
[✓] Android Studio (version 2023.1)
[✓] Android Studio (version 2022.3)
[✓] IntelliJ IDEA Community Edition (version 2023.2)
[✓] VS Code (version 1.89.0)
[✓] Connected device (4 available)
[✓] Network resources
```
</details>
| framework,f: material design,good first issue,P2,team-design,triaged-design,f: theming | low | Major |
2,679,869,416 | pytorch | Draft mode export coloring is unreadable on white terminal | ### 🐛 Describe the bug
Sample:
<img width="1420" alt="image" src="https://github.com/user-attachments/assets/1a1c0654-5b08-4298-a4c3-3d7b1af76f90">
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,679,877,208 | pytorch | ExportedModule default print of graph signature is unreadable | ### 🐛 Describe the bug
It looks like this:
```
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_pos_embed_height'), target='pos_embed_height',
persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_pos_embed_width'), target='pos_embed_width', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_attn_pool_queries'), target='attn_pool_queries', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_to_patch_embedding_0_gamma'), target='to_patch_embedding.0.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_to_patch_embedding_1_weight'), target='to_patch_embedding.1.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_to_patch_embedding_1_bias'), target='to_patch_embedding.1.bias', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_to_patch_embedding_2_gamma'), target='to_patch_embedding.2.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_0_norm_gamma'), target='transformer.layers.0.0.norm.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_0_q_norm_gamma'), target='transformer.layers.0.0.q_norm.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_0_k_norm_gamma'), target='transformer.layers.0.0.k_norm.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_0_to_q_weight'), target='transformer.layers.0.0.to_q.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_0_to_kv_weight'), target='transformer.layers.0.0.to_kv.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_0_to_out_0_weight'), target='transformer.layers.0.0.to_out.0.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_1_0_gamma'), target='transformer.layers.0.1.0.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_1_1_weight'), target='transformer.layers.0.1.1.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_1_1_bias'), target='transformer.layers.0.1.1.bias', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_1_4_weight'), target='transformer.layers.0.1.4.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_0_1_4_bias'), target='transformer.layers.0.1.4.bias', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_1_0_norm_gamma'), target='transformer.layers.1.0.norm.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_1_0_q_norm_gamma'), target='transformer.layers.1.0.q_norm.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_1_0_k_norm_gamma'), target='transformer.layers.1.0.k_norm.gamma', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_transformer_layers_1_0_to_q_weight'), target='transformer.layers.1.0.to_q.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>,
```
There's no way anyone is going to understand this. Remove it from the default print or design a properly indented / more compact representation for this.
As a reference, I recently designed a compact format for automatic dynamic shapes decisions that I think is very readable:
```
L['previous_seq_summarizations'][11]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][10]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][9]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][8]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][7]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][6]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][5]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][4]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][3]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][2]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][1]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['previous_seq_summarizations'][0]: tensor size=[128, 64, 192] stride=[S(1), S(2), 1]
L['non_seq_embs']: tensor size=[128, 385, 192] stride=[S(1), S(2), 1]
```
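The formatter behind such a compact line is simple; here is a minimal sketch that reproduces one line of the format above (`compact_line` and its arguments are illustrative names, not an existing PyTorch API):

```python
# Hypothetical sketch of a one-line-per-tensor formatter in the spirit of
# the compact format above; compact_line is an illustrative name, not an
# existing PyTorch API.
def compact_line(source: str, size, stride) -> str:
    fmt = lambda xs: "[" + ", ".join(str(x) for x in xs) + "]"
    return f"{source}: tensor size={fmt(size)} stride={fmt(stride)}"

print(compact_line("L['non_seq_embs']", [128, 385, 192], ["S(1)", "S(2)", 1]))
# L['non_seq_embs']: tensor size=[128, 385, 192] stride=[S(1), S(2), 1]
```

The point is that one short line per tensor carries everything a reader needs, unlike the nested `InputSpec` repr.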
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,679,907,650 | vscode | Feature Request: Ability to Set Different Zoom Levels for Each Editor Group or Window | Hello,
I’d like to request a feature that allows users to set different zoom levels (or scaling) for each editor group or window in VS Code. Currently, the zoom level applies globally to the entire editor, which can be limiting when working with split editors or multiple groups, especially for users with different visual needs for different file types or layouts.
Proposed Feature:
- Allow individual zoom levels for each editor group in split editor mode.
- Add a setting (or command) to adjust zoom levels for specific groups or tabs independently.
- (Optional) Make this configurable via `settings.json` for persistent customization.
Use Case: This feature would benefit developers working with different file types (e.g., Python and HTML) or those who need enhanced visibility for certain sections of their workspace. It would also provide a better experience for users who frequently work with split editors and need tailored visual preferences for each section.
Thank you for considering this request!
Best regards,
[Sahib Alizada]
| feature-request,zoom,workbench-editors | medium | Critical |
2,680,023,700 | terminal | docker-desktop added to the drop-down menu | ### Windows Terminal version
1.22.3232
### Windows build number
10.0.19045.0
### Other Software
_No response_
### Steps to reproduce
- check menu
### Expected Behavior
- `docker-desktop` not added to the menu
### Actual Behavior
- `docker-desktop` in the drop-down menu | Product-WSL,Resolution-External,Needs-Tag-Fix | low | Minor |
2,680,037,314 | vscode | Terminal suggest: The `.` builtin should explain what it is in the description | When we show `.` as a command like this:

We should explain that it's an alias for `source`. This small nicety would make the terminal a little easier to use. | bug,terminal-suggest | low | Minor |
2,680,041,900 | next.js | Renaming the distDir and using "output: export" seems to render "npm run start" unusable | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/npm-run-start-broken-t9z2fd
### To Reproduce
1. I start clean by deleting both the `.next` and `build` folders if present.
2. Run the "build" task; this runs `npm run build` which runs `next build`
3. `.next` and `build` folders are created successfully. Run the "prod" task; this runs `npm run start` which runs `next start`
4. Error propagates
### Current vs. Expected behavior
I am trying to test my production build locally. We are using static exports for our application and we have renamed the typical "out" folder to "build" in `next.config.mjs`. Everything builds successfully (and I have deployed it on production just fine), but running `npm run start` (which is really `next start`) cannot seem to find the production build.
I get this error:
```
Error: Could not find a production build in the 'build' directory.
Try building your app with 'next build' before starting the production server.
https://nextjs.org/docs/messages/production-start-no-build-id
```
This is my `next.config.mjs` file:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
  distDir: 'build',
  reactStrictMode: false,
  images: {
    unoptimized: true,
  },
}
export default nextConfig
```
The `.next` folder seems to contain the `BUILD_ID`, but the `build` folder, which contains all the static exports, does not, and based on the URL the error message points to, I think that is what `npm run start` is looking for. Any suggestions on how to run the production build with `npm run start`?
I understand my environment info below indicates things are out of date, but my CodeSandbox above should be using the latest Next.js and it still doesn't work there.
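For local testing, one possible workaround (a sketch, not part of the reporter's setup) is to skip `next start` entirely: an `output: 'export'` build is just static files, so any static file server pointed at the configured `distDir` will do, e.g. Python's built-in one:

```python
# Sketch of a local workaround (assumption: distDir 'build' as configured
# above): serve the static export with any static file server instead of
# `next start`, e.g. Python's built-in http.server.
import functools
import http.server

Handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory="build"
)
# Uncomment to serve the export on http://localhost:3000
# http.server.ThreadingHTTPServer(("", 3000), Handler).serve_forever()
```

Any equivalent static server works the same way; the key point is that `next start` has no `BUILD_ID` to find in a static export directory.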
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Thu Nov 7 15:41:49 EST 2024
Available memory (MB): 15736
Available CPU cores: 4
Binaries:
Node: 20.5.1
npm: 9.8.1
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.16 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: 14.2.16
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.3
Next.js Config:
output: export
⚠ An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
```
### Which area(s) are affected? (Select all that apply)
Developer Experience
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
_No response_ | bug | low | Critical |
2,680,070,715 | pytorch | Need to also assert on replacements for unbacked symbols | ### 🐛 Describe the bug
```python
for i0 in new_unbacked_defs:
    ras = self.ras_by_symbol.pop(i0, [])
    # NB: size-like not needed, we won't retrace
    vr = shape_env.var_to_range[i0]
    if not shape_env._default_unspecified_value_range().issubset(vr):

        def is_convertible(s: Expr) -> bool:
            if s in (int_oo, -int_oo):
                return False
            try:
                int(s)
                return True
            except TypeError:
                return False

        if is_convertible(vr.lower):
            make_assert(i0 >= vr.lower, f"{i0} >= {vr.lower}")
        if is_convertible(vr.upper):
            make_assert(i0 <= vr.upper, f"{i0} <= {vr.upper}")

    for ra in ras:
        fvs = free_unbacked_symbols(ra.expr)
        missing = fvs - self.bound_unbacked_symbols
        if missing:
            i1 = min(missing, key=str)
            self.ras_by_symbol.setdefault(i1, []).append(ra)
        else:
            make_assert(ra.expr, f"{ra.expr}")
```
This does value range and deferred runtime asserts, but replacement asserts are missing.
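A minimal, self-contained sketch of what the missing piece might look like (all names here are toy stand-ins, not the real ShapeEnv surface): for each unbacked symbol that has a replacement, emit one runtime assert tying the symbol to its replacement expression, mirroring the value-range asserts above.

```python
# Toy stand-ins (hypothetical names, not the actual ShapeEnv API): walk a
# replacement table and emit one assert per replaced unbacked symbol.
replacements = {"u0": "2*u1"}  # unbacked symbol -> replacement expression
emitted = []

def make_assert(expr: str, msg: str) -> None:
    emitted.append(msg)

for i0, rep in replacements.items():
    make_assert(f"Eq({i0}, {rep})", f"{i0} == {rep}")

print(emitted)  # ['u0 == 2*u1']
```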
### Versions
main
cc @chauhang @penguinwu @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,680,074,261 | PowerToys | Make MouseWithoutBorders connections dialog useful | ### Description of the new feature / enhancement
Currently the connections dialog is just a few boxes with seemingly random colour borders.
When it doesn't connect, the colours change - that tells me nothing.
Give me an error message that I can do something about.
Make the "Refresh Connections" button actually connect to the other machines.
### Scenario when this would be used?
At least once per day when MWB fails to connect
### Supporting information
OK, this is a bit tongue-in-cheek, but for real: connection problems never used to be a thing before Mouse Without Borders was picked up and added to PowerToys.
The connections screen is no use to end users at all - and making it actually useful would be amazing | Needs-Triage | low | Critical |
2,680,093,214 | flutter | RenderParagraph should use an aliased/hard-edge clip rect for overflow removal. | Context: https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/rendering/paragraph.dart#L899-L909
Even with Impeller, a hard-edge clip rect is slightly faster than an anti-aliased one, and fidelity will still be OK for overflow handling via a hard-edge clip.
This should change to:
```dart
context.canvas.clipRect(bounds, doAntiAlias: false);
```
Though I'm not sure what qualifies as a test for this. | P2,team-framework,triaged-framework | low | Minor |
2,680,102,470 | TypeScript | Generic function passed to generic function inferred correctly only with spread | ### 🔎 Search Terms
generic function spread regression inference
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about generics
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241121#code/KYDwDg9gTgLgBDAnmYcByEAmqC8cDecAvgNwBQZ2AxgDYCGUwAkAGYCuAdlTAJYQdwWEYSIgAeACoAaOACVgAZzY14oGMA6YFcAIJQodRABkeAa2BjOpjhADuHAHwyAogDceNOGo1a4jOpj8NIhwVjb2ANoAug4AFGRMPBxgbDAAXHDSCVR0NDQARnRUphmxSSnpmTJgjK4Z8koqMgB0rcDuNBluHgCUcDgOcorKMFJkPfXDKhTU9IysnNx8AoV0a2tQkjINI14g6praegbGZhZhdo5xCeWpGVlMOXmFxaW3ldJwNe2TjTB9AyGfzGEyBIxmwFoDFQ7C4vH4cCeBSKpkk1zgGLg73uY0xX1qGQUMCgSQA5tEQYTiWTouQyKBILBBIt4QJQHQALZgGgWCRxbGZPr4Ch4gD0orgAEkOCxgFBtEgUF8GJzgOooGCVIjoIxuMEEkJRMIyslUjIkS9TD1yGKJQA9AD8Isx4rgADE6B4FRAsTK5QhkKgwCqOWr-TstVQdZCYMFfUTgAFfbL5QgABaoKMcIkGJIwBKrdYME0Vc25ZHFa1kW1wR3OjGugBCyDoCgUZOTco0VGAMhgGeZcOWiPLWO0Uf0MermML602nxzNJiJbNI+eKKr04bEulKYVgeVBlD6s18AnutjiC3cFnGxXozXFdRfKrNbrRDIQA
### 💻 Code
```ts
export type Node = { };

declare function fooooooo<T, Result extends ArrayLike<unknown>, Evil extends readonly unknown[]>(
  input: T,
  callback: (input: T, prev: Result, ...evil: Evil) => Result,
): Result

declare function baaaaaar<T, Result extends ArrayLike<unknown>>(
  input: T,
  callback: (input: T, prev: Result) => Result,
): Result

declare function callback<T>(
  input: T,
  prev: string[],
): string[];

export function example<T>(input: T) {
  // Infers type parameter Result correctly
  fooooooo(input, callback);
  // ^? function fooooooo<T, string[], []>(input: T…

  // Fails to infer type parameter Result correctly; instead it infers the constraint
  baaaaaar(input, callback);
  // ^? function baaaaaar<T, ArrayLike<unknown>>(in…

  // Bypassing inference, the function call is correct
  baaaaaar<T, string[]>(input, callback);

  // Infers type parameter Result correctly
  baaaaaar(input, callback<T>);
  // ^? function baaaaaar<T, string[]>(input: T, ca…
}
```
### 🙁 Actual behavior
`baaaaaar(input, callback)` infers the constraint of `Result = ArrayLike<unknown>`
### 🙂 Expected behavior
`baaaaaar(input, callback)` should infer `Result = string[]` from the return type or parameter of `callback`.
### Additional information about the issue
_No response_ | Help Wanted,Possible Improvement | low | Minor |
2,680,115,126 | angular | Cyclic imports of components need better error messages | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
This was already reported before, but not with the intent of making the error message better:
https://github.com/angular/angular/issues/54092
Below, I posted a reproduction that wasn't provided back then.
Note that uncommenting the @defer block will "fix" this.
It's not necessarily trivial to fix these errors. In our case, after migrating from modules to full standalone components, we had such a cycle with a scenario similar to this one:
<user-list> Component renders several <user-name> components.
--> Clicking a <user-name> opens a dialog (via MatDialog) containing user-related data.
Inside that dialog, we have some sub-component <friend-list> that uses <user-name>, hence the cycle. We had to fix this by importing the dialog only when it's clicked, but finding which of the sub-components was the culprit took quite some time. It would be good if there was at least some indication of what to investigate, since the error message isn't exactly simple.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/8pfsxa?file=src%2Fapp%2Fa.component.ts,src%2Fapp%2Fb.component.ts,src%2Fmain.ts
### Please provide the exception or error you saw
This isn't easy to debug in bigger components as the cycles might be non-obvious.
`forwardRef()` or `defer {}` or delayed imports can all help once you found the culprit, but finding the culprit is painful.
If component A relies on component B, and component B on component A again, we currently get a very cryptic and hard to debug error:
```text
core.mjs:6637 ERROR TypeError: Cannot read properties of undefined (reading 'ɵcmp')
at getComponentDef (core.mjs:1664:10)
at extractDirectiveDef (core.mjs:16449:10)
at core.mjs:16585:96
at Array.map (<anonymous>)
at core.mjs:16585:85
at createTView (core.mjs:12916:59)
at getOrCreateComponentTView (core.mjs:12865:24)
at addComponentLogic (core.mjs:13602:17)
at instantiateAllDirectives (core.mjs:13399:5)
at createDirectivesInstances (core.mjs:12830:3)
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
Angular CLI: 19.0.0
Node: 20.13.1
Package Manager: npm 9.6.4
OS: win32 x64
Angular: 19.0.0
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.0
@angular-devkit/build-angular 19.0.0
@angular-devkit/core 19.0.0
@angular-devkit/schematics 19.0.0
@schematics/angular 19.0.0
rxjs 7.8.1
typescript 5.5.4
zone.js 0.15.0
```
### Anything else?
It's also interesting to note that WebStorm has a diagnostic for this:
 | area: core | low | Critical |
2,680,283,244 | flutter | [Impeller] Uneven letter spacing when text is scaled | ### Steps to reproduce
Scaling using textScaler parameter in a Text widget looks nice, but a Text widget inside a Transform widget does not.
Tested on macOS.
### Expected results
How it looks in Skia:
<img width="250" alt="textscaling-skia" src="https://github.com/user-attachments/assets/91fd1db3-38d1-4db8-8db2-8b0d0fd65319">
### Actual results
How it looks in Impeller:
<img width="250" alt="textscaling-impeller" src="https://github.com/user-attachments/assets/cc8df5e2-d341-4f97-9eac-f16301e022d3">
Here's a GIF for easier comparison:

(It's also interesting that the font weight appears to be thinner in Impeller, but that's not why I'm here)
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:google_fonts/google_fonts.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  runApp(const MainApp2());
}

class MainApp2 extends StatefulWidget {
  const MainApp2({super.key});

  @override
  State<MainApp2> createState() => _MainApp2State();
}

class _MainApp2State extends State<MainApp2> {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: DefaultTextStyle(
        style: TextStyle(fontFamily: GoogleFonts.roboto().fontFamily, fontSize: 12),
        child: Column(mainAxisAlignment: MainAxisAlignment.center, children: [
          Transform.scale(
            scale: 0.7,
            child: const Text(
              """With Transform widget:
The quick brown fox jumps over the lazy dog
lllllllllllllllllllllllllllllllllllllllllll
iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii"""
            )
          ),
          const Text(
            textScaler: TextScaler.linear(0.7),
            """With TextScaler:
The quick brown fox jumps over the lazy dog
lllllllllllllllllllllllllllllllllllllllllll
iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii"""
          )
        ])
      )
    );
  }
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.26.0-1.0.pre.367, on macOS 13.6.7 22G720 darwin-arm64, locale en-US)
• Flutter version 3.26.0-1.0.pre.367 on channel master at /Users/weirdhat/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 572b1d7b08 (7 weeks ago), 2024-10-04 22:04:06 -0400
• Engine revision 92b5b31819
• Dart version 3.6.0 (build 3.6.0-324.0.dev)
• DevTools version 2.40.0
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/weirdhat/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[!] Xcode - develop for iOS and macOS (Xcode 15.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A240d
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (4 available)
• JK iPad 2024 (mobile) • 00008103-001659302668801E • ios • iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 13.6.7 22G720 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 13.6.7 22G720 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.71
! Error: Browsing on the local area network for JACOB’s iPad (2). Ensure the device is unlocked and attached with a cable or associated with the
same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| engine,c: rendering,has reproducible steps,P1,e: impeller,team-engine,triaged-engine,found in release: 3.24,found in release: 3.27 | medium | Critical |