| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,650,065,753 | TypeScript | moduleResolution Bundler and importModuleSpecifierEnding |
Does this issue occur when all extensions are disabled?: Yes
Version: 1.95.1 (Universal)
Commit: 65edc4939843c90c34d61f4ce11704f09d3e5cb6
Date: 2024-10-31T05:14:54.222Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 23.4.0
## My project is a Node/Fastify app written in TypeScript.
Steps to Reproduce:
1. Set the compiler options in a tsconfig:
```json
{
"module": "ESNext",
"target": "ESNext",
"moduleResolution": "Bundler"
}
```
The suggested paths to import, for example, are:

2. Change to "moduleResolution": "Node" and then reload VSCode.
The suggested paths to import are now:

I want the Quick Fix suggestions from step 2 while using `"moduleResolution": "Bundler"`.
I have these settings:
```json
{
"javascript.preferences.importModuleSpecifierEnding": "minimal",
"javascript.preferences.importModuleSpecifier": "non-relative",
"typescript.preferences.importModuleSpecifierEnding": "minimal",
"typescript.preferences.importModuleSpecifier": "non-relative"
}
``` | Needs Investigation | low | Critical |
2,650,069,613 | kubernetes | Add statusz endpoint for kube-scheduler | ### What would you like to be added?
Part of
- https://github.com/kubernetes/enhancements/issues/4827
Add the `/statusz` endpoint for kube-scheduler.
Sample response:
```
Started: Fri Sep 6 06:19:51 UTC 2024
Up: 0 hr 00 min 30 sec
Go version: go1.23.0
Binary version: 1.31.0-beta.0.981+c6be932655a03b-dirty
Emulation version: 1.31.0-beta.0.981
Minimum Compatibility version: 1.30.0
List of useful endpoints
--------------
configz:/configz
healthz:/healthz
livez:/livez
metrics:/metrics
readyz:/readyz
sli metrics:/metrics/slis
```
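The `Up:` line in the sample response can be produced with a tiny formatter. This is a hypothetical helper (not kube-scheduler source, and shown in Python rather than Go for brevity), matching the sample's rendering:

```python
# Hypothetical helper: render an uptime in seconds the way the sample's
# "Up:" line formats it, e.g. 30 -> "0 hr 00 min 30 sec".
def format_uptime(seconds):
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours} hr {minutes:02d} min {secs:02d} sec"

print(format_uptime(30))  # prints: 0 hr 00 min 30 sec
```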
/sig instrumentation
### Why is this needed?
Refer to the [Motivation section](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/4827-component-statusz/README.md#motivation) in the KEP.
```[tasklist]
### Tasks
```
| kind/feature,sig/instrumentation,needs-triage | low | Major |
2,650,117,494 | opencv | Documentation states that cvtColor() supports CV_16U, but seems to actually only support CV_8U and CV_32F. | ### Describe the doc issue
The cvtColor documentation:
https://docs.opencv.org/4.x/d8/d01/group__imgproc__color__conversions.html#gaf86c09fe702ed037c03c2bc603ceab14
states:
"src input image: 8-bit unsigned, 16-bit unsigned ( CV_16UC... ), or single-precision floating-point. "
but it looks like only CV_8U and CV_32F are actually supported:
```
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.1.1) /home/bbogart/src/opencv-4.1.1/modules/imgproc/src/color.simd_helpers.hpp:94: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3>; VDepth = cv::impl::{anonymous}::**Set<0, 5>**; cv::impl::{anonymous}::SizePolicy sizePolicy = (cv::impl::<unnamed>::SizePolicy)2; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Unsupported depth of input image:
> 'VDepth::contains(depth)'
> where
> **'depth' is 2 (CV_16U)**
```
### Fix suggestion
Strike any references to CV_16U in cvtColor docs.
I don't know when this change happened, but I found it using OpenCV 4.1.1, so changes should be made retroactively. | category: documentation | low | Critical |
2,650,124,188 | pytorch | asynchronous copies from accelerator to cpu: what should be the expected behaviour? | ### 🐛 Describe the bug
## Problem
If you run
```python
import torch
x = torch.arange(1000, device="cuda")
y = torch.arange(1000, device="cuda")
x = x.to("cpu", non_blocking=True)
y = y.to("cpu", non_blocking=True)
# torch.cuda.synchronize()
assert (x == y).all()
```
you'll need to un-comment the `torch.cuda.synchronize()` call for this test to pass.
The same goes for tensors on an MPS device (i.e., replace "cuda" with "mps" in the script).
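As a stdlib-only analogy (not the PyTorch API), the hazard in the snippet above is an ordinary join-before-read race: the non-blocking copy is work handed to a background worker, and the synchronize call plays the role of the join.

```python
# Stdlib analogy: reading the destination before "synchronizing" (joining)
# races with the still-in-flight copy; fut.result() is the analogue of
# torch.cuda.synchronize().
import time
from concurrent.futures import ThreadPoolExecutor

dst = []

def async_copy(src, out):
    time.sleep(0.05)  # the transfer is still in flight
    out.extend(src)

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(async_copy, [1, 2, 3], dst)
    # reading `dst` here would race with the copy above
    fut.result()      # "synchronize" before reading

assert dst == [1, 2, 3]
```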
However, sending cpu tensors to cuda and then performing a blocking operation like this will not (should not?) require an explicit synchronization (see #139550 for context in the mps case):
```python
import torch
x = torch.arange(1000, device="cpu")
y = torch.arange(1000, device="cpu")
x = x.to("cuda", non_blocking=True)
y = y.to("cuda", non_blocking=True)
# no need to sync!
assert (x == y).all()
```
## Solutions
My personal take here is that we should have either that
a. in all cases (cpu -> cuda, cuda -> cpu, cpu -> mps, mps -> cpu, cuda -> cuda...) using `non_blocking` is safe, or
b. it is only safe in the case of <whatever> -> cuda. Then we should document in `torch.Tensor.to` that `non_blocking` may lead to corrupted data if no sync is called.
Now, there is no concept of a _cpu_ stream, but we kinda expect that there is only one type of accelerator per machine AFAICT, so there is only one synchronization method to be called per machine (it could even just be `torch.synchronize()` and work across all H2D / D2H transfers?). That doesn't hold if we plan on supporting multiple accelerators per machine, of course.
I can also see that sending to cpu is a bit more special, so we could fix the H2D transfers first and then deal with D2H, but IMO for a consistent UX we should have maximum uniformity across async data transfer behaviors.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD @malfet
### Versions
nightlies | module: docs,triaged,module: python frontend | low | Critical |
2,650,153,190 | next.js | Bug/inconsistency with `dynamicIO` and dynamic imports | ### Link to the code that reproduces this issue
https://github.com/amannn/nextjs-bug-repro-json-import-turbo/commit/011dc47eddfb59aaea7e5461dcde85139b908c2c
### To Reproduce
1. Clone https://github.com/amannn/nextjs-bug-repro-json-import-turbo/
2. `pnpm i`
3. `pnpm dev --turbo`
4. See error
### Current vs. Expected behavior
When running with Turbo, this error message is printed:

When running with webpack, there's no error.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.12.3
Relevant Packages:
next: 15.0.4-canary.5 // Latest available version is detected (15.0.4-canary.5).
eslint-config-next: 15.0.4-canary.5
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,linear: next,dynamicIO | low | Critical |
2,650,195,093 | vscode | pytest parametrize test results are hard to read |
Type: <b>Bug</b>
# Behaviour
With parametrized tests, the results are hard to read in the "running tests for ..." right side window of test results.
For non-parametrized tests, the test name is listed.
For parametrized tests, the parameters are listed without the test name.
I think it would be easier to read if it was "test_name[param_value]", similar to how `pytest -v` shows the results.
## Steps to reproduce:
Run a test with parametrizations.
Here's a small example:
```python
import pytest
def test_a_1():
...
def test_a_2():
...
@pytest.mark.parametrize('x', [1, 2])
def test_b(x):
...
```
I'm attaching a screen shot of the output to demo the issue.
In this example I think "test_b[1]" and "test_b[2]" would be easier to read than "[1]" and "[2]".

The problem is more pronounced with a parametrized fixture.
Example:
```python
import pytest
@pytest.fixture(params=[1, 2])
def x(request):
return request.param
def test_a(x):
...
def test_b(x):
...
```
Results in:

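The requested display format can be sketched as follows (hypothetical helper, not the Python extension's actual code): prefix the parametrization ID with the test name, the way `pytest -v` renders node IDs.

```python
# Hypothetical sketch of the requested label format.
def display_label(test_name, param_id=None):
    """Render "test_b" with param "1" as "test_b[1]"; plain tests keep their name."""
    return f"{test_name}[{param_id}]" if param_id is not None else test_name

assert display_label("test_b", "1") == "test_b[1]"
assert display_label("test_a_1") == "test_a_1"
```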
# Diagnostic data
<details>
<summary>Output for <code>Python</code> in the <code>Output</code> panel (<code>View</code> → <code>Output</code>, change the drop-down in the upper-right of the <code>Output</code> panel to <code>Python</code>)
</summary>
<p>
```
XXX
```
</p>
</details>
Extension version: 2024.18.0
VS Code version: Code 1.95.1 (65edc4939843c90c34d61f4ce11704f09d3e5cb6, 2024-10-31T05:14:54.222Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | bug,polish,testing | low | Critical |
2,650,202,502 | kubernetes | Make NoteLengthLimit for events configurable | ### What would you like to be added?
Make [NoteLengthLimit](https://github.com/carlory/kubernetes/blob/8fe10dc378b7cc3b077b83aef86622e1019302d5/pkg/apis/core/validation/events.go#L38) for events a configurable value, rather than a hard limit of 1 KB, so that cluster operators can choose what size to limit event messages to.
### Why is this needed?
More than once I've encountered scheduler events in our environment which contain messages which are significantly longer than the NoteLengthLimit and are thus truncated. I understand that this data could create a significant bloat in the size of events and that a sane default would be preferred, but allowing for cluster admins to configure this value if desired would be great.
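The difference between today's hard limit and the requested knob can be sketched like this (the `limit` parameter is the hypothetical configurable part; upstream's actual truncation logic differs in detail, and is shown here in Python rather than Go for brevity):

```python
DEFAULT_NOTE_LENGTH_LIMIT = 1024  # today's hard-coded 1 KB limit

def truncate_note(note, limit=DEFAULT_NOTE_LENGTH_LIMIT):
    # today: always clipped at 1 KB; the request is to let operators raise `limit`
    return note if len(note) <= limit else note[:limit]

assert len(truncate_note("x" * 4096)) == 1024
assert len(truncate_note("x" * 4096, limit=4096)) == 4096
```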
I did see that something related was discussed in an issue [here](https://github.com/kubernetes/kubernetes/issues/98175) and a fix was implemented [here](https://github.com/kubernetes/kubernetes/pull/98715), but it looks like the solution was to truncate events rather than one of the other options surfaced, like a configurable size or producing multiple events. I understand it was likely the simplest solution to resolve the bug. | sig/scheduling,kind/feature,needs-triage | low | Critical |
2,650,205,188 | godot | When the `ui_text_completion_accept` shortcut is the same as `ui_indent`, `ui_indent` takes precedence | ### Tested versions
- Reproducible in v4.4.dev4.official [36e6207bb]
### System information
Godot v4.4.dev4 - Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3080 - 12th Gen Intel(R) Core(TM) i5-12400F (12 threads)
### Issue description
I prefer the behavior of `ui_text_completion_accept` compared to `ui_text_completion_replace`, so I unbound `replace` and bound `accept` to the tab key. This worked fine on all previous versions, but in 4.4dev4, it started inserting tab characters instead of accepting the completion request.
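The dispatch order being asked for can be sketched as follows (hypothetical pseudologic in Python, not Godot's actual input handling): while the completion popup is open, `ui_text_completion_accept` should be checked before `ui_indent` for the same keystroke.

```python
# Hypothetical sketch: completion actions win over indent while the popup is open.
def resolve_action(pressed, completion_open, bindings):
    """bindings maps action name -> key; returns the action that should fire."""
    order = (["ui_text_completion_accept", "ui_indent"]
             if completion_open else ["ui_indent"])
    for action in order:
        if bindings.get(action) == pressed:
            return action
    return None

tab_bound = {"ui_text_completion_accept": "Tab", "ui_indent": "Tab"}
assert resolve_action("Tab", True, tab_bound) == "ui_text_completion_accept"
assert resolve_action("Tab", False, tab_bound) == "ui_indent"
```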
### Steps to reproduce
Change the `ui_text_completion_accept` shortcut to be the same as `ui_indent` and unbind `ui_text_completion_replace`, then try to accept a completion request
### Minimal reproduction project (MRP)
N/A | discussion,topic:input,topic:gui | low | Minor |
2,650,208,034 | transformers | Changes required to `save_model` for certain models (e.g., Phi 3.5 Vision) | ### Feature request
This request proposes one of three changes (see **Motivation** for background, and **Your contribution** more thoughts on possible solutions) in order to allow saving of a certain class of models, including but not limited to Phi 3.5 Vision.
1. Accept a `state_dict` argument in the `Trainer` class's `save_model()` method (https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3719-L3768). This `state_dict` parameter should then be passed down to the call to the private `_save()` method (https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3842), which _does_ accept a `state_dict` argument.
2. Rather than accepting `state_dict` as an argument to `save_model()`, determine the appropriate heuristic such that we can successfully save Phi 3.5 Vision and other architecturally similar models.
3. Some change to the way `transformers` handles shared tensors...?
### Motivation
I encountered an issue while trying to fine-tune Phi 3.5 Vision using the `Trainer` class from `transformers`. In particular, when trying to call `save()` or `save_pretrained()`, transformers throws the following error:
```
RuntimeError: The weights trying to be saved contained shared tensors [{'model.vision_embed_tokens.wte.weight',
'model.embed_tokens.weight'}] that are mismatching the transformers base configuration.
Try saving using `safe_serialization=False` or remove this tensor sharing.
```
Below are two minimal reproducible examples:
_Example #1_
```python
from transformers import AutoModelForCausalLM
model_id = "microsoft/Phi-3.5-vision-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto"
)
model.save_pretrained("out")
```
_Example #2_
```python
from transformers import (
Trainer,
TrainingArguments,
)
training_args = TrainingArguments(
save_only_model=True,
output_dir='./out/',
save_strategy='no',
)
trainer = Trainer(
model=model,
args=training_args
)
trainer.save_model()
```
It looks like others have also encountered this issue. See the list of reference issues below in "Issues".
A contributor to the Phi 3 Vision cookbook suggested the following solution, stating "You need to remove the wte weight. It's okay because when the model is loaded from the checkpoint, it will automatically copy the weight from the embedding weight."
```python
state_dict = model.state_dict()
state_dict = {k:v for k, v in state_dict.items() if "wte" not in k}
model.save_pretrained(args.save_model_path, state_dict=state_dict, safe_serialization=True)
processor.save_pretrained(args.save_model_path)
```
This does indeed seem to work. However, it doesn't exactly fit into a use case that relies on the `Trainer` abstraction. The call to the `Trainer` class's `save_model()` method doesn't accommodate a state_dict argument (see https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3719-L3768).
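In the meantime, the filtering step itself is plain dict manipulation and could be wrapped in a `Trainer` subclass. Both the helper and the subclass below are hypothetical sketches, assuming the private `_save(output_dir, state_dict=...)` signature linked above:

```python
def strip_wte(state_dict):
    """Drop the tied 'wte' tensors so safetensors serialization succeeds."""
    return {k: v for k, v in state_dict.items() if "wte" not in k}

# Hypothetical subclass sketch (not the proposed upstream change):
# class WteFilteredTrainer(Trainer):
#     def save_model(self, output_dir=None, _internal_call=False):
#         self._save(output_dir or self.args.output_dir,
#                    state_dict=strip_wte(self.model.state_dict()))

assert strip_wte({"model.vision_embed_tokens.wte.weight": 0,
                  "model.embed_tokens.weight": 1}) == {"model.embed_tokens.weight": 1}
```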
**Issues**
1. https://github.com/kazuar/Phi3-Vision-ft/issues/2
2. https://discuss.huggingface.co/t/runtimeerror-when-saving-phi-3-5-vision-due-to-shared-tensors/116457
3. https://github.com/huggingface/transformers/issues/32354
4. https://discuss.huggingface.co/t/using-trainer-to-save-a-bartforsequenceclassification-model/81606
### Your contribution
I'd be glad to submit a PR, but I think some discussion is needed from the appropriate `transformers` stakeholders.
It's not clear to me whether the most appropriate change here is to modify the function signature.
Alternatively, maybe there's a heuristic by which we could determine whether the architecture is such that one needs to save everything but the `wte` weights. I don't know the answer to that off-hand. It may require a deep dive from Phi 3/3.5 Vision SMEs.
Or more broadly, perhaps there's some change to the way `transformers` handles shared tensors in the base configuration that would be most appropriate. | trainer,Feature request,Safetensors | low | Critical |
2,650,213,379 | PowerToys | Sometimes Mouse Jump doesn't claim focus and pressing numbers won't work (with some other suggestions in the issue) | ### Microsoft PowerToys version
0.86.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
- focus on any window
- call the function
- try to press a number (only tested on the main number row, not the numpad keys on the right, since my current keyboard doesn't have them)
This happens about 98% of the time on my machine.
### ✔️ Expected Behavior
pressing the number should jump the mouse to the target immediately
btw:
currently the UI of the jumper is kind of a mess (apologies, but that's how I see it)
my suggestion is to make it work like the `alt + tab` window switcher, and make each screen's thumbnail equal in size (I don't think that mimicking the physical screen positions helps, unless you have more than 3 screens). I also think that marking an ordinal number (first, second... in order) on each thumbnail would be better (especially for those who have fewer screens).
Meanwhile, centering the UI (rather than placing it where the mouse is located) is also an option that makes sense.
If you think your current UI design makes sense, then please still consider my suggestions and give us something to choose from, thanks.
#35859
### ❌ Actual Behavior
no reaction.
### Other Software
any kind of virtual screen app (parsecVDD, IDD...) regardless of the version | Issue-Bug,Needs-Triage | low | Minor |
2,650,220,218 | tauri | [feat] Expose raw window procedure messages on Windows | ### Describe the problem
On Windows, there isn't a way to inspect raw incoming wndproc messages. This is useful in a lot of advanced Win32 API use cases, and is necessary for e.g. the [appbar APIs](https://learn.microsoft.com/en-us/windows/win32/shell/application-desktop-toolbars). An appbar is a special window that reserves a portion of the monitor screen space, like the native Windows taskbar. To be properly registered as an appbar, there are several window messages that need to be responded to in a certain way - [example of appbar wndproc](https://github.com/mgaffigan/WpfAppBar/blob/master/WpfAppBar/AppBarWindow.cs#L269).
This would also avoid the need for spawning supplementary message windows (e.g. for listening to window events, hardware/device events, etc.).
### Describe the solution you'd like
Perhaps it'd be feasible to expose a new enum member `RunEvent::WindowEvent::Raw(u32, usize, isize)` (corresponding to `msg, wParam, lParam`) that could then be used with `run_iteration`.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Major |
2,650,222,087 | pytorch | [RFC] Distributed manual_seed API for N-D parallelism | ### Background
In distributed training scenarios, RNG initialization matters for ensuring a correct model initialization, and in some cases also controlling random ops during training (e.g. dropout). For model initialization, the objective is to ensure that when different ranks initialize different shards of a layer (TP, FSDP) or different layers (PP), they get different values, and when different ranks initialize replicas of the same layer or the same shard (DDP or HSDP), they get the same value.
### DTensor RNG Tracker
Currently, DTensor supports RNG management. There are some issues with this.
- It does not account for pipeline parallelism, and needs to be extended
- the design goals are over-ambitious (trying to enable matching the behavior with single-gpu models when this is not generally possible), leading to overly complex implementation
- the user-set RNG seeds are not (always) respected, and may be overwritten/ignored, depending on which RNG Tracker implementation DTensor selects (which is not user-visible, but depends on the degrees of parallelism used)
### Proposed API
`torch.distributed.manual_seed(seed, sharded_groups=[pp, tp, fsdp])`
An api like this could solve several needs:
- initialize ranks in 'sharded_groups' with different seeds by adding different offset values to the user-provided seed per rank, while seeding ranks of other groups with the same seed (to handle replica cases)
- additionally, a `sharded_dims` arg would be accepted for device-mesh users, with only one of the two used at a time
- order of sharded groups specifies the order of incrementing the seed-offset, giving control to the user, and enabling things like trying to match the seeding behavior of megatron or some existing trainer
To enable this, we should change a few things about DTensor
- avoid overwriting user-provided seed. Require the user to set the seed, but don't do it automatically
- don't broadcast the seed. This can lead to collective hangs in some cases or just unexpected / wrong seeding behavior, since DTensor broadcasts to the whole world rather than knowing which groups are sharded.
- Use one RNG tracker consistently. Avoid switching it depending on the parallelism. Rely on this new API for goals like matching megatron RNG behavior.
For the 'ordering' of seed offsets, this is what I had in mind:
<details><summary>ordering of sharded groups</summary>
```
8 gpu: DDP, PP, TP,
DDP 0
PP0 PP1
TP0 TP1 TP0 TP1
[0, 1], [2, 3],
DDP 1
[4, 5], [6, 7]
manual_seed(123, sharded_groups=[PP, TP])
Rank Seed formula = 123 + (pp_rank * tp_size=2 + tp_rank)
0 123 + 0 0 0
1 123 + 1 0 1
2 123 + 2 1 0
3 123 + 3 1 1
4 123 + 0
5 123 + 1
6 123 + 2
7 123 + 3
manual_seed(123, sharded_groups=[TP, PP])
Rank Seed formula = 123 + (tp_rank * pp_size=2 + pp_rank)
0 123 + 0 0 0
1 123 + 2 1 0
2 123 + 1 0 1
3 123 + 3 1 1
4 123 + 0
5 123 + 2
6 123 + 1
7 123 + 3
```
</details>
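The offset formulas in the table above reduce to a mixed-radix index over the ordered sharded groups. A pure-Python sketch (hypothetical helper name, matching the proposed semantics, not an existing API):

```python
def seed_for_rank(seed, group_sizes, group_ranks):
    """group_sizes/group_ranks are ordered as in `sharded_groups`;
    earlier groups vary the offset more slowly, as in the table above."""
    offset = 0
    for size, rank in zip(group_sizes, group_ranks):
        offset = offset * size + rank
    return seed + offset

# manual_seed(123, sharded_groups=[PP, TP]) with pp_size = tp_size = 2:
assert seed_for_rank(123, [2, 2], [1, 0]) == 123 + 2  # pp_rank=1, tp_rank=0
# manual_seed(123, sharded_groups=[TP, PP]):
assert seed_for_rank(123, [2, 2], [0, 1]) == 123 + 1  # tp_rank=0, pp_rank=1
```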
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | oncall: distributed,triaged,module: c10d | low | Major |
2,650,278,861 | PowerToys | FancyZones Issues with Multiple Adobe Apps | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
This has been an ongoing issue with FancyZones and Adobe applications for a very long time.
Specifically:
Adobe Illustrator:
Problem: Project files will not display on initial launch of Illustrator if Illustrator is snapped to a zone.
Steps to reproduce:
Open Adobe Illustrator. Snap to a zone. Click on a recently opened file in the initial Illustrator welcome screen. File will attempt to open, but it will not display the file contents in Illustrator (work canvas will appear to be blank). If multiple attempts to open the file are performed, usually the file will eventually open.
Adobe Audition:
Problem: Certain UI interfaces will not open if Audition is snapped to a zone
Steps to reproduce:
Open Adobe Audition. Snap to a zone. Open an audio file. Go to Effects>Time and Pitch>Stretch and Pitch. UI interface for Stretch and Pitch will open blacked-out and unusable (NOTE: Sometimes UI panels will open on first try while window is snapped--- this is very intermittent, and it will eventually go back to opening the panels as black boxes if repeated multiple times). Unsnap from zone, repeat. UI interface will now open.
### ✔️ Expected Behavior
Illustrator: Expected behavior is for projects to load in Illustrator when opened.
Audition: Expected behavior is for all UI interfaces to open when selected.
### ❌ Actual Behavior
Illustrator: Project files will not load if Illustrator is snapped to a zone on initial Illustrator launch
Audition: Some UI interfaces will not open if Audition is snapped to a zone.
### Other Software
Illustrator 2024, 2025
Audition 2024, 2025 | Issue-Bug,Needs-Triage | low | Minor |
2,650,299,146 | next.js | Failed to parse source map: TypeError: Cannot read properties of undefined (reading 'bold') for deno | ### Link to the code that reproduces this issue
https://github.com/howdoicomputer/issue-reproduction
### To Reproduce
1. Create a new project and add
```
import OpenAI from 'openai';
const client = new OpenAI();
```
To the top of `app/page.tsx`
2. `deno run dev`
3. Watch app crash when the index page is loaded
### Current vs. Expected behavior
You can create an openai client without the entire thing falling apart.
### Provide environment information
```bash
Platform: darwin
Arch: arm64
Version: 24.1.0
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.11.1
npm: 10.8.2
Yarn: N/A
❯ deno --version
deno 2.0.6 (stable, release, aarch64-apple-darwin)
v8 12.9.202.13-rusty
typescript 5.6.2
```
```
❯ cat package.json
{
"name": "polarstomps",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"openai": "^4.71.1",
"react": "19.0.0-rc-66855b96-20241106",
"react-dom": "19.0.0-rc-66855b96-20241106",
"next": "^15.0.3",
"sharp": "^0.33.5",
"zod": "^3.23.8",
"chalk": "^5.3.0"
},
"devDependencies": {
"typescript": "^5",
"@types/node": "^20",
"@types/react": "^18",
"@types/react-dom": "^18",
"postcss": "^8",
"tailwindcss": "^3.4.1",
"eslint": "^8",
"eslint-config-next": "15.0.3"
},
"overrides": {
"chalk": "^5.3.0"
}
}
```
```
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I've been having this error pop up quite a bit, and it has been hard for me to track down.
Full stack trace:
```
Failed to parse source map: TypeError: Cannot read properties of undefined (reading 'bold')
at getDefs (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/compiled/babel/bundle.js:1848:5496)
at highlight (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/compiled/babel/bundle.js:1848:6631)
at codeFrameColumns (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/compiled/babel/bundle.js:1:77498)
at getOriginalCodeFrame (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/client/components/react-dev-overlay/server/shared.js:70:44)
at createOriginalStackFrame (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/client/components/react-dev-overlay/server/middleware.js:146:61)
at eventLoopTick (ext:core/01_core.js:175:7)
at async (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/client/components/react-dev-overlay/server/middleware.js:286:52)
at async HotReloaderWebpack.run (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/dev/hot-reloader-webpack.js:317:13)
at async handleRequest (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/lib/router-server.js:214:43)
at async requestHandlerImpl (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/lib/router-server.js:384:13)
at async ServerImpl.requestListener (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/lib/start-server.js:142:13)
Failed to parse source map: TypeError: Cannot read properties of undefined (reading 'bold')
at getDefs (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/compiled/babel/bundle.js:1848:5496)
at highlight (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/compiled/babel/bundle.js:1848:6631)
at codeFrameColumns (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/compiled/babel/bundle.js:1:77498)
at getOriginalCodeFrame (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/client/components/react-dev-overlay/server/shared.js:70:44)
at createOriginalStackFrame (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/client/components/react-dev-overlay/server/middleware.js:146:61)
at eventLoopTick (ext:core/01_core.js:175:7)
at async (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/client/components/react-dev-overlay/server/middleware.js:286:52)
at async HotReloaderWebpack.run (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/dev/hot-reloader-webpack.js:317:13)
at async handleRequest (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/lib/router-server.js:214:43)
at async requestHandlerImpl (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/lib/router-server.js:384:13)
at async ServerImpl.requestListener (/Users/tylerhampton/workspace/polarstomps/node_modules/.deno/next@15.0.3/node_modules/next/dist/server/lib/start-server.js:142:13)
```
I initially had this problem when I was trying to create an API route that wrapped some openai calls. Whenever I would instantiate a new openai client, I would get a stacktrace that led with `Failed to parse source map: TypeError: Cannot read properties of undefined (reading 'bold')`. I ended up moving that functionality to a dedicated FastAPI backend, but now I'm getting the error again and I'm not sure what its origin is.
The only Google reference I have for that error points to chalk. Really confusing and hard to debug. | bug,linear: next | low | Critical |
2,650,355,573 | animate.css | [Docs] Svelte-Animate | ### Is your feature request related to a problem? Please describe.
I just created [Svelte-Animate](https://svelte-animate.codewithshin.com/) based on animate.css. It would be great if you could mention it.
### Describe the solution you'd like.
Svelte-Animate on the doc page.
### Describe alternatives you've considered.
Svelte-Animate on the README.md
### Additional Context
Thank you for the library. | feature request | low | Minor |
2,650,368,717 | three.js | Haptics example crashes after switching from hands to controllers (or vice versa) | ### Description
While investigating a separate issue (#29861), I found that the haptics example crashes after switching between hands and controllers followed by trying to touch one of the music notes. Error log below
```
webxr_xr_haptics.html:250 Uncaught TypeError: Cannot read properties of undefined (reading 'frequency')
at handleCollisions (webxr_xr_haptics.html:250:25)
at animate (webxr_xr_haptics.html:299:5)
at onAnimationFrame (three.module.js:27918:36)
at XRSession.onAnimationFrame (three.module.js:13614:3)
```
**Cause of error**:
Only 2 oscillators are created and added to the oscillators list, but more than 2 controllers can be added to the controllers list and the same index is used to reference both lists
**Proposed solution**:
Extract `createOscillator()` out from `initAudio()` and use it to create a new oscillator whenever one is not found for the given controller. Increases complexity of code a tiny bit, but not in any way that makes it harder to understand
**Alternative solution**:
Add `oscillators.push( createOscillator() );` 2 more times. Note that things will still break if there are ever more than 4 input sources
More than happy to open a pull request with the changes
### Reproduction steps
1. Go to the haptics example with an XR device
2. Click the Enter XR button using hand tracking
3. Pick up your controllers
4. Try to touch one of the boxes
### Code
```js
// Proposed Solution
// Replaces https://github.com/mrdoob/three.js/blob/b276fb5669193b91a9fd5df2330ed6d76924ccfd/examples/webxr_xr_haptics.html#L47-L70
function initAudio() {

    if ( audioCtx !== null ) {

        return;

    }

    audioCtx = new ( window.AudioContext || window.webkitAudioContext )();

}

function createOscillator() {

    if ( audioCtx === null ) initAudio();

    // creates oscillator
    const oscillator = audioCtx.createOscillator();
    oscillator.type = 'sine'; // possible values: sine, triangle, square
    oscillator.start();
    return oscillator;

}

// Replaces https://github.com/mrdoob/three.js/blob/b276fb5669193b91a9fd5df2330ed6d76924ccfd/examples/webxr_xr_haptics.html#L250
if ( oscillators[ g ] === undefined ) oscillators[ g ] = createOscillator();
oscillators[ g ].frequency.value = 110 * Math.pow( 2, musicInterval / 12 );
```
```js
// Alternate Solution
// Replaces https://github.com/mrdoob/three.js/blob/beab9e845f9e5ae11d648f55b24a0e910b56a85a/examples/webxr_xr_haptics.html#L66-L67
oscillators.push( createOscillator() );
oscillators.push( createOscillator() );
oscillators.push( createOscillator() );
oscillators.push( createOscillator() );
```
### Live example
* [Haptics Example](https://threejs.org/examples/?q=haptic#webxr_xr_haptics)
### Screenshots
_No response_
### Version
r170
### Device
Headset
### Browser
Chrome
### OS
Android | WebXR | low | Critical |
2,650,389,266 | flutter | [a11y] When focusing Widget created by showModalBottomSheet, focus is not immediately on contained actions | ### Steps to reproduce
1. Enable voiceover on your platform.
2. Launch a modal bottom sheet via `showModalBottomSheet`.
3. Navigate to it via voiceover.
4. Observe that the dialog itself is focused instead of the contents of the dialog.
### Expected results
The interactive contents of the dialog should be focused first.
### Actual results
Instead of the interactive contents being focused, the dialog itself is focused. This is at odds with the behavior seen in most apps (e.g. the Google Home app).
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
/// Flutter code sample for [showModalBottomSheet].
void main() => runApp(const BottomSheetApp());
class BottomSheetApp extends StatelessWidget {
const BottomSheetApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('Bottom Sheet Sample')),
body: const BottomSheetExample(),
),
);
}
}
class BottomSheetExample extends StatelessWidget {
const BottomSheetExample({super.key});
@override
Widget build(BuildContext context) {
return Center(
child: ElevatedButton(
child: const Text('showModalBottomSheet'),
onPressed: () {
showModalBottomSheet<void>(
context: context,
builder: (BuildContext context) {
return Container(
height: 200,
color: Colors.amber,
child: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
mainAxisSize: MainAxisSize.min,
children: <Widget>[
const Text('Modal BottomSheet'),
ElevatedButton(
child: const Text('Close BottomSheet'),
onPressed: () => Navigator.pop(context),
),
],
),
),
);
},
);
},
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/0353183a-bf53-4353-9ccc-e6a52ef8a9a1
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Flutter (Channel google3, on macOS 14.7
• Framework revision 461ad0d3e0 (0 days ago), 2024-11-11T00:00:00.000
• Engine revision d90e9f4718
• Dart version ed9a5b1110
```
</details>
| framework,f: material design,a: accessibility,has reproducible steps,P2,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | low | Minor |
2,650,398,292 | vscode | VSCode trying to update WSL on launch for literally no reason |
Type: <b>Bug</b>
Every time I launch VS Code after the latest update, it opens a Terminal.exe window that tries to update WSL. I don't have WSL installed at all on my system, and I don't have any extensions that require or assume WSL to be installed. This is really annoying. Please fix.
```
Windows Subsystem for Linux must be updated to the latest version to proceed. You can update by running 'wsl.exe --update'.
For more information please visit https://aka.ms/wslinstall
Press any key to install Windows Subsystem for Linux.
Press CTRL-C or close this window to cancel.
This prompt will time out in 60 seconds.
```
VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i9-12900K (24 x 3187)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.70GB (46.93GB free)|
|Process Argv|--crash-reporter-id 21bd840c-17fe-4408-9d72-8de41b7f8c01|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (3)</summary>
Extension|Author (truncated)|Version
---|---|---
css-nesting-syntax-highlighting|jac|0.1.1
material-icon-theme|PKi|5.13.0
pico8-ls|Pol|0.5.7
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
h409b430:31179529
```
</details>
<!-- generated by issue reporter --> | bug,WSL,terminal-profiles | low | Critical |
2,650,404,289 | go | cmd/compile: consider giving a warning or an error if a PGO profile looks mismatched | For example, if I run `go build` on a Go codebase with a `default.pgo` file, and the PGO logic can see that the majority of the symbols mentioned in the profile are not present in the source code at all, it could give a warning or an error to the user because one of two things might have happened:
1) The profile was obtained for a different program with a fairly different codebase.
2) The profile is old and the codebase has changed enough where the profile is no longer representative.
In both cases, it's possible that PGO may still give some minor benefit, but it's hard to say. Most importantly, I think we should surface this to the user so that they can grab a new profile.
(this idea came up during a brief PGO discussion at the Go contributor summit at golab.io) | Thinking,NeedsInvestigation,compiler/runtime | low | Critical |
2,650,453,783 | electron | Allow disabling GPU usage completely, preferably via an ENV variable | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Related Discussions
* https://github.com/microsoft/vscode-test-cli/issues/61#issuecomment-2468514202
### Problem Description
I'm working on a VS Code extension and currently cleaning up CI pipelines, and there is one ERROR I'd like to get rid of:
```
… ⬢ [Docker] ❯ xvfb-run -a npm test
...
✔ Validated version: 1.95.2
✔ Found existing install in /workspaces/partcad-vscode-extension/.vscode-test/vscode-linux-x64-1.95.2
✔ Validated version: 1.95.2
✔ Found existing install in /workspaces/partcad-vscode-extension/.vscode-test/vscode-linux-x64-1.95.2
[13350:1110/065637.709894:ERROR:viz_main_impl.cc(166)] Exiting GPU process due to errors during initialization
[13443:1110/065638.091817:ERROR:viz_main_impl.cc(166)] Exiting GPU process due to errors during initialization
[main 2024-11-10T06:56:38.116Z] update#setState disabled
[main 2024-11-10T06:56:38.156Z] update#ctor - updates are disabled by the environment
[13638:1110/065638.657892:ERROR:viz_main_impl.cc(166)] Exiting GPU process due to errors during initialization
[13726:1110/065639.121195:ERROR:viz_main_impl.cc(166)] Exiting GPU process due to errors during initialization
Started local extension host with pid 13932.
...
```
If someone can point me to the related part of the code where I could introduce such a feature flag, that would also help a lot!
### Proposed Solution
* Introduce feature flag which would explicitly disable GPU usage.
* Allow setting feature flag via ENV variable.
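A minimal sketch of what such a gate could look like. Everything here is an assumption: `ELECTRON_DISABLE_GPU` is a hypothetical variable name invented for this proposal, not something Electron currently reads; `app.disableHardwareAcceleration()` is the existing API the flag could map to:

```javascript
// Sketch only: ELECTRON_DISABLE_GPU is a hypothetical variable name for this
// proposal, not an environment variable Electron currently honors.
function shouldDisableGpu(env) {
  const value = env.ELECTRON_DISABLE_GPU;
  return value === '1' || value === 'true';
}

// In an Electron main process, the check would have to run before app.whenReady():
//
//   const { app } = require('electron');
//   if (shouldDisableGpu(process.env)) {
//     app.disableHardwareAcceleration(); // existing API: disables GPU compositing
//   }
```

Today, launching with the `--disable-gpu` command-line switch (or calling `app.disableHardwareAcceleration()` from the main process) is the closest existing workaround; the ask here is an environment-variable equivalent usable without modifying the app.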
### Alternatives Considered
* For local Dev Containers run I tried to mount gpus from host via docker run options but that didn't help with the error.
* For CI... I simply do not have access to GPU in current CI environments.
### Additional Information
Additional information on how to make Electron use the host GPU in a Dev Container locally would also help a lot! | enhancement :sparkles: | low | Critical |
2,650,480,488 | kubernetes | Add statusz endpoint for kube-controller-manager | ### What would you like to be added?
Part of
https://github.com/kubernetes/enhancements/issues/4827
Add the /statusz endpoint for kube-controller-manager.
Sample response:
Started: Fri Sep 6 06:19:51 UTC 2024
Up: 0 hr 00 min 30 sec
Go version: go1.23.0
Binary version: 1.31.0-beta.0.981+c6be932655a03b-dirty
Emulation version: 1.31.0-beta.0.981
Minimum Compatibility version: 1.30.0
List of useful endpoints
--------------
configz:/configz
healthz:/healthz
livez:/livez
metrics:/metrics
readyz:/readyz
sli metrics:/metrics/slis
/sig instrumentation
### Why is this needed?
Refer to the [Motivation section](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/4827-component-statusz/README.md#motivation) in the KEP. | kind/feature,sig/instrumentation,needs-triage | low | Minor |
2,650,497,665 | kubernetes | Add statusz endpoint for kube-proxy | ### What would you like to be added?
Part of
https://github.com/kubernetes/enhancements/issues/4827
Add the /statusz endpoint for kube-proxy.
Sample response:
Started: Fri Sep 6 06:19:51 UTC 2024
Up: 0 hr 00 min 30 sec
Go version: go1.23.0
Binary version: 1.31.0-beta.0.981+c6be932655a03b-dirty
Emulation version: 1.31.0-beta.0.981
Minimum Compatibility version: 1.30.0
List of useful endpoints
--------------
configz:/configz
healthz:/healthz
livez:/livez
metrics:/metrics
readyz:/readyz
sli metrics:/metrics/slis
/sig instrumentation
### Why is this needed?
Refer to the [Motivation section](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/4827-component-statusz/README.md#motivation) in the KEP. | kind/feature,sig/instrumentation,needs-triage | low | Minor |
2,650,504,736 | langchain | langchain template list - 404 and never loads package | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```zsh
# langchain templates list
```
### Error Message and Stack Trace (if applicable)
```bash
Traceback (most recent call last):

  /Users/mlosey/.local/pipx/venvs/langchain-cli/lib/python3.13/site-packages/langchain_cli/namespaces/template.py:152 in list

    149   """
    150   from langchain_cli.utils.github import list_packages
    151
  ❱ 152   packages = list_packages(contains=contains)
    153   for package in packages:
    154       typer.echo(package)
    155

    locals:
      contains = None

  /Users/mlosey/.local/pipx/venvs/langchain-cli/lib/python3.13/site-packages/langchain_cli/utils/github.py:24 in list_packages

     21
     22   data = json.loads(res_str)
     23   package_names = [
  ❱  24       p["name"] for p in data if p["type"] == "dir" and p["name"] != "docs"
     25   ]
     26   package_names_filtered = (
     27       [p for p in package_names if contains in p] if contains else package_names

    locals:
      conn     = <http.client.HTTPSConnection object at 0x1067daf90>
      contains = None
      data     = {
                   'message': 'Not Found',
                   'documentation_url': 'https://docs.github.com/rest/repos/contents#get-repository-content',
                   'status': '404'
                 }
      headers  = {
                   'Accept': 'application/vnd.github+json',
                   'X-GitHub-Api-Version': '2022-11-28',
                   'User-Agent': 'langchain-cli'
                 }
      res      = <http.client.HTTPResponse object at 0x1067c8550>
      res_str  = b'{"message":"Not Found","documentation_url":"https://docs.github.com/rest/repos/c'+47
TypeError: string indices must be integers, not 'str'
```
### Description
* I'm trying to use langchain-cli to list the available template packages.
* I expect to see some sort of output that is a list of packages.
* Instead it fails: the GitHub contents API returns a 404, and iterating over the error object as if it were a list of directories raises the `TypeError` above.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
> Python Version: 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.2.43
> langsmith: 0.1.142
> langchain_cli: 0.0.31
> langserve: 0.2.3
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> fastapi: 0.115.4
> gitpython: 3.1.43
> gritql: 0.1.5
> httpx: 0.27.2
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 1.10.19
> pyproject-toml: 0.0.10
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> tomlkit: 0.12.5
> typer[all]: Installed. No version info available.
> typing-extensions: 4.12.2
> uvicorn: 0.23.2
| ๐ค:bug | low | Critical |
2,650,506,044 | flutter | iOS Voice Control number label shown on parent widget with Semantic label when child widget is actionable | ### Steps to reproduce
1. Create a parent widget and wrap it in `Semantics(label: 'Hello world')`
2. Add a child widget with an action, like TextButton
3. Activate Voice Control on iOS
4. Say "Show numbers"
5. Notice a number is shown on the parent, and it has no actions
### Expected results
Only the child widget should show a number label, not the parent.
### Actual results
Both the parent AND the child widget show a number label, even though the parent has no actions.
### Code sample
```dart
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Repro examples'),
),
body: Semantics(
label: 'Example',
container: true,
child: Center(
child: TextButton(
child: const Text('Hello World'),
onPressed: () {},
),
),
),
),
);
}
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="400" src="https://github.com/user-attachments/assets/c8b44293-ad49-4512-9d6a-1caa9a5186c6">
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| platform-ios,framework,a: accessibility,has reproducible steps,P1,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | medium | Major |
2,650,559,285 | go | proposal: net/http: optionally disable recover on panic | ### Proposal Details
Similar to the previous proposals in #16542 and #25245,
I propose a per `Server` opt out of panic recovering and printing.
While the automatic panic recovery was previously deemed regrettable, the above proposals declined for being somewhat backwards incompatible, especially when `ErrAbortHandler` was used. Therefore, I also propose a new method for `ResponseController` to cleanly close the connection, without interrupting the control flow. This allows the handler to perform any clean up necessary, which is more in line with normal Go error handling.
Glancing through the uses of `panic(http.ErrAbortHandler)` on Github: [search link](https://github.com/search?q=language%3AGo+panic%28http.ErrAbortHandler%29&type=code&p=1), I assert that these are almost all in modules where the handlers and server are developed together, so it should be relatively easy for users that wish to set `DisablePanicHandler = true` to also audit the handlers for any uses of `panic(http.ErrAbortHandler)`.
API additions:
```go
type Server struct {
	// DisablePanicHandler disables the automatic recovery of panics
	// and the printing of stack traces in handlers.
	// To close a connection, use [ResponseController.Abort].
	DisablePanicHandler bool
}

// Abort terminates the current request without sending a response.
// For HTTP/1 it closes the network connection;
// for HTTP/2 it sends a RST_STREAM. Handlers must not write to the
// ResponseWriter after calling Abort.
func (c *ResponseController) Abort()
```
cc @neild | Proposal | low | Critical |
2,650,567,715 | three.js | hapticActuators not working in XR sessions started with hand tracking used first | ### Description
If an XR session is started with hand tracking being used, and then in the middle of the session you pick up your controllers, the hapticActuators won't work. I believe the issue is specific to three.js as it does not occur here: https://cabanier.github.io/webxr-samples-1/input-selection-2.html
Unfortunately, I find myself at a bit of a loss when trying to find the source of the issue.
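For anyone hitting this in the meantime, a defensive lookup along these lines avoids crashing when the actuator list is absent. This is a workaround sketch, not three.js internals, and `getHapticActuator`/`tryPulse` are names made up here:

```javascript
// Defensive lookup for an input source's haptic actuator (workaround sketch,
// not three.js code). Hand-tracking input sources have no gamepad /
// hapticActuators, so we return null instead of letting a property lookup throw.
function getHapticActuator(inputSource) {
  const gamepad = inputSource && inputSource.gamepad;
  if (!gamepad || !gamepad.hapticActuators || gamepad.hapticActuators.length === 0) {
    return null;
  }
  return gamepad.hapticActuators[0];
}

// Usage: only pulse when an actuator is actually present.
function tryPulse(inputSource, intensity, durationMs) {
  const actuator = getHapticActuator(inputSource);
  if (actuator) actuator.pulse(intensity, durationMs);
  return actuator !== null;
}
```

Guarding like this sidesteps the crash, but it does not explain why the actuator stays unavailable after switching from hands to controllers, which is the bug reported here.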
### Reproduction steps
1. Start an immersive experience using hand tracking (make sure that the controllers are not also being tracked at this moment)
2. Pick up your controllers
3. Trigger some code that uses the inputSource's hapticActuators
### Code
N/A
### Live example
* [Discord User Moog's example](https://handtracking-no-haptic.glitch.me)
* [Three's Haptics Example](https://threejs.org/examples/?q=haptic#webxr_xr_haptics) (Testing here is pending #29860)
### Screenshots
_No response_
### Version
r170
### Device
Headset
### Browser
Chrome
### OS
Android | WebXR | low | Minor |
2,650,568,110 | pytorch | torch_dispatch x HOP handling | From discussion with @Chillee and @bdhirsh.
1. torch_dispatch mode should have a flag for if it accepts HOPs or not. If it does, then the HOP should get passed into the `__torch_dispatch__`.
2. We should give HOPs similar properties to ops. That is, they should have a .tag, the .opoverload field should probably point back at itself, etc.
3. HOPs should take in a flag for if they're "passthrough" or "operator like". Maybe we can give generic handling for the passthrough HOPs x torch_dispatch stuff (but open question as to how this actually works).
cc @ezyang @chauhang @penguinwu @ydwu4 @bdhirsh @yf225 | triaged,module: dispatch,oncall: pt2,module: higher order operators,module: pt2-dispatcher | low | Minor |
2,650,609,894 | angular | resource signal: have a way to make value() persist while loading, like when we use reload() | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
My typical use case is a resource loading data for a sortable table like this:
```typescript
readonly tableResource = resource({
request: () => ({
sort: this.sort(),
}),
loader: async ({ request }) => {
const { sort } = request;
const response = await fetch(`http://localhost/api/test?sort=${sort}`);
return await response.json() as TableData[];
}
});
```
On the first execution, when the data is loaded, _tableResource.value()_ contains my initial data.
When the user sorts the table, the _sort_ signal changes, so the tableResource reloads the data.
While the loader function is running, _tableResource.value()_ is set to **undefined**; then, when the data is loaded, _tableResource.value()_ contains my new sorted data.
I think this is the expected behavior, but I have 2 problems with it:
1/ If the fetch function is slow to return a value, the user will see all the table content disappear (because the resource value changes to undefined) before it shows again.
I think it's best if the "old" data stays on the screen while it's loading (even if there is a loading indicator).
2/ If I want to use a linkedSignal with the resource value, it will be called twice for only one reload.
### Proposed solution
I think it would be nice if there were a way to force the value to persist while it's loading (like when we call the reload() function).
Maybe by using an option?
Or by making the behavior of reload() the default behavior, since we can test _status_ to know the validity of _value_.
### Alternatives considered
Testing status() === ResourceStatus.Loading in a linkedSignal and using it in the template instead of _tableResource.value_:
```typescript
data: WritableSignal<TableData[]> = linkedSignal({
source: () => ({
value: this.tableResource.value(),
status: this.tableResource.status(),
}),
computation: (source, previous) => {
if (previous && source.status === ResourceStatus.Loading) {
return previous.value;
}
return source.value ?? [];
}
});
``` | area: core,core: reactivity,cross-cutting: signals | medium | Major |
2,650,611,399 | rust | Tracking issue for RFC 3458: Unsafe fields | This is a tracking issue for:
- https://github.com/rust-lang/rfcs/pull/3458
The feature gate for the issue is `#![feature(unsafe_fields)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Approve as lang experiment.
- [ ] Accept an RFC.
- https://github.com/rust-lang/rfcs/pull/3458
- [ ] Implement in nightly.
- https://github.com/rust-lang/rust/pull/132915
- https://github.com/rust-lang/rust/pull/133934
- https://github.com/rust-lang/rust/pull/134008
- [ ] Add rustdoc support
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Related
TODO.
cc @jswrenn @veluca93 @jhpratt @rust-lang/lang
| T-lang,C-tracking-issue,B-experimental,F-unsafe_fields | low | Critical |
2,650,617,708 | TypeScript | Change the behavior of `tsc` on a tsconfig solution | ### 🔍 Search Terms
The Vite starters now use a "solution" tsconfig, as documented in the docs.
The issue is that a lot of people are used to running `tsc` or `tsc --noEmit` at the root of the project, and now this silently does nothing.
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
I think in that specific case, it should do one of those two things:
- make it an error in that case with "You probably want tsc -b"
- default to build mode (when the tsconfig contains `files: []`)
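For reference, the solution-style root tsconfig in those starters looks roughly like this (file names per the Vite templates; details may vary):

```json
{
  "files": [],
  "references": [
    { "path": "./tsconfig.app.json" },
    { "path": "./tsconfig.node.json" }
  ]
}
```

Running plain `tsc` on such a config type-checks the empty `files` list and exits successfully, which is why it appears to do nothing; `tsc -b` is what actually walks the `references`.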
### 📃 Motivating Example
You have three fewer chars to type?
More seriously, it's about making project references more widely used without a familiar command becoming useless.
### 💻 Use Cases
2. What shortcomings exist with current approaches?
We get some reports like this: https://github.com/vitejs/vite/issues/17585
I suspect a lot more people fall for this but find the cause before opening an issue.
| Suggestion,In Discussion | low | Critical |
2,650,620,917 | three.js | GLTFExporter: Provide a way to export a mesh with multiple primitives | ### Description
Currently when exporting a model with GLTFExporter, all geometry-material pairs are separated into different "mesh" nodes that each have a single primitive in the final GLTF file. This sacrifices a lot of control over the final file, though, since GLTF can support multiple primitives (with a unique material per primitive) per mesh. When just working in three.js this may be okay, but other tools interpret these multi-primitive meshes differently. Unreal, for example, will load them as a single object with multiple materials assigned, which is significantly easier for an artist to work with.
My current situation is that I'm trying to simplify a complex model and merge mesh components into semantic components with multiple materials for use in Unreal using three.js, but the exporter doesn't allow for control over primitives. The reason I'm using three.js is that merging geometry and reasoning about & adjusting the hierarchy is simpler than in other tools I've used.
cc @donmccurdy
### Solution
**Provide GLTFPrimitive, GLTFMesh Subclasses**
When importing a GLTF, two subclasses, `GLTFPrimitive` and `GLTFMesh`, could be provided to better represent the original structure of the imported GLTF. The GLTFExporter could then use these classes to export the structure they reflect. This would also allow for more user control of the structure, since they could be created as needed. Alternatively, a flag in `userData` could be added to support the same thing.
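To illustrate the `userData`-flag variant, the grouping step an exporter could perform might look like this (a sketch only; `userData.gltfMesh` is a key made up here, and `GLTFExporter` reads no such flag today):

```javascript
// Hypothetical grouping pass: meshes that share a userData.gltfMesh tag are
// collected into one glTF "mesh", each contributing one primitive with its
// own material. Untagged meshes fall back to their own uuid (one mesh each).
function groupByGltfMesh(meshes) {
  const groups = new Map();
  for (const mesh of meshes) {
    const key = (mesh.userData && mesh.userData.gltfMesh) || mesh.uuid;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(mesh);
  }
  return groups;
}
```

Each resulting group would then be emitted as a single glTF mesh whose primitives are the group's geometry-material pairs, instead of one mesh node per pair as today.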
### Alternatives
I've looked into gltf-transform for this but managing the hierarchy is not as clear or easy to visualize.
### Additional context
_No response_ | Enhancement | low | Major |
2,650,643,233 | terminal | Add options to control the new Builtin Box Drawing Glyphs | ### Description of the new feature
Since the new Builtin Box Drawing Glyph feature is parametric, exposing some of it to the user could be beneficial.
Specifically, Line Width and Roundness would be most useful if a user wants to emulate the look of the original box drawing characters.
### Proposed technical implementation details
_No response_ | Help Wanted,Area-Settings,Product-Terminal,Issue-Task | low | Minor |
2,650,670,068 | godot | ENet Node not found errors | ### Tested versions
I'm pretty sure it can be reproduced in any version.
### System information
Windows 11 64bit, Godot 4.3 stable win64
### Issue description
RPC calls can arrive at clients before the node is replicated and after it's freed, causing 'get_node: Node not found: NodePath' errors.
### Steps to reproduce
1. Create a basic server/client template
2. Create a simple test node which calls an rpc function every frame
3. Instantiate the test node during the game
4. Observe pre-replication Node not found errors
5. Wait a few seconds
6. Free the test node
7. Observe post-freedom Node not found errors
Alternative steps for the MRP:
1. Run the project
2. Observe errors
### Minimal reproduction project (MRP)
[node-not-found-errors.zip](https://github.com/user-attachments/files/17709175/node-not-found-errors.zip)
| bug,topic:network | low | Critical |
2,650,670,287 | pytorch | AWS A100 runners reliability issue | The compiler team reports that there are still some reliability issues with AWS A100 where some runners have started crashing since last weekend. For example:
* https://github.com/pytorch/pytorch/actions/runs/11763098772
* https://github.com/pytorch/pytorch/actions/runs/11773760797/attempts/1
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @pytorch/pytorch-dev-infra @jeanschmidt @ZainRizvi @desertfire
Also @yangw-dev as this might be an interesting case for utilization monitoring because:
* Runner lost communication with server usually indicates OOM or a botched runner update. Monitoring memory usage would help with OOM
* The hardware is shared, but each runner runs independently in its own cgroups / docker container https://fb.workplace.com/groups/pytorch.dev.perf.infra.teams/permalink/9137566856262424. We probably want to monitor each of them independently | high priority,module: ci,triaged | low | Critical |
2,650,680,348 | next.js | Conditionally Rendering Form Component Causes Server Action Error | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/elastic-raman-mdp554
### To Reproduce
I attached the CodeSandbox for a minimal reproduction. I am purposely calling the action from onSubmit so that I can do both client-side and server-side validation. I removed the explicit validation parts, as the issue reproduces without them.
```
<form
ref={formRef}
action={formAction}
onSubmit={() => {
formRef.current?.submit();
}}
>
<div>
<label htmlFor="email">Your email</label>
<input id="email" name="email" type="email" required />
</div>
<button type="submit">Reset password</button>
</form>
```
In my reproduction, I have 2 forms. If I use the `useState()` toggle to turn on/off the other form, I will get the following error:
<img width="1233" alt="image" src="https://github.com/user-attachments/assets/fd0d39eb-2493-4165-9e11-785aa806a8b4">
If I don't have the toggle, I won't get the error and I will see the server action complete:
<img width="258" alt="image" src="https://github.com/user-attachments/assets/00481aad-e208-4e62-aa22-e300db0dd6d5">
### Current vs. Expected behavior
I expect server actions to work regardless of whether there is conditional rendering of the component or not.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.6 // Latest available version is detected (15.0.4-canary.6).
eslint-config-next: N/A
react: 19.0.0-rc-5c56b873-20241107
react-dom: 19.0.0-rc-5c56b873-20241107
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Someone else encountered the same scenario here: https://github.com/vercel/next.js/discussions/56234#discussioncomment-9554752 | bug,Runtime | low | Critical |
2,650,686,043 | rust | non-camel-case-types lint does not flag `repr(C)` types | ### Code
```Rust
#![forbid(non_camel_case_types)]
#[repr(C)]
pub struct foo_bar {}
```
### Current output
```Shell
<no error>
```
### Desired output
```Shell
Compiling playground v0.0.1 (/playground)
error: type `foo_bar` should have an upper camel case name
--> src/lib.rs:4:12
|
4 | pub struct foo_bar {}
| ^^^^^^^ help: convert the identifier to upper camel case: `FooBar`
|
note: the lint level is defined here
--> src/lib.rs:1:11
|
1 | #![forbid(non_camel_case_types)]
| ^^^^^^^^^^^^^^^^^^^^
error: could not compile `playground` (lib) due to 1 previous error
```
### Rationale and extra context
This was clearly an explicit design choice at one point: <https://github.com/rust-lang/rust/blob/81eef2d362a6f03db6f8928f82d94298d31eb81b/compiler/rustc_lint/src/nonstandard_style.rs#L170>
Presumably, the idea was that you'd have a bunch of FFI types with their native C names, and that's what `repr(C)` is for. You wouldn't want this lint firing for such types.
But I don't think it's one that makes sense today. `repr(C)` is used any time you want a stable layout, not just when you're defining a C type for FFI. The lint should fire in all cases, and crates that define a bunch of C types should silence the lint where appropriate.
### Other cases
```Rust
```
### Rust Version
```Shell
> rustc --version --verbose
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
### Anything else?
_No response_ | A-lints,T-lang,T-compiler,C-discussion,A-repr,L-non_camel_case_types | low | Critical |
2,650,691,649 | langchain | Neo4jVector.from_existing_index() doesn't error if given an invalid index_name | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code attempts to create a Neo4jVector from an existing index which doesn't exist (but other indexes do).
It fails silently, returning a random index.
```python
try:
self.neo4j_vector = Neo4jVector.from_existing_index(
graph=self.neo4j_graph,
embedding=embeddings,
index_name="this-index-does-not-exist",
)
except Exception as e:
print(f"Neo4jVector.from_existing_index failed:{e}")
raise e
print(f"index returned: {self.neo4j_vector.index_name}")
```
yields output
```console
index returned: vector
```
(I have another index in the DB called 'vector')
No exception is raised, nor is the return value from `from_existing_index()` NULL.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The best place to resolve this seems to be in `neo4j_vector.py:sort_by_index_name()`
Currently:
```python
def sort_by_index_name(
lst: List[Dict[str, Any]], index_name: str
) -> List[Dict[str, Any]]:
"""Sort first element to match the index_name if exists"""
return sorted(lst, key=lambda x: x.get("name") != index_name)
```
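The silent fallback is easy to see with plain data; this minimal sketch (using hypothetical index dicts) shows why a nonexistent `index_name` still yields a non-empty result headed by an unrelated index:

```python
indexes = [{"name": "vector"}, {"name": "keyword"}]

def sort_by_index_name(lst, index_name):
    # Entries whose name matches sort first (False < True); when nothing
    # matches, every key is True, so the stable sort returns the list
    # unchanged -- non-empty, headed by an unrelated index.
    return sorted(lst, key=lambda x: x.get("name") != index_name)

print(sort_by_index_name(indexes, "this-index-does-not-exist")[0]["name"])  # vector
```

Callers then take element 0 of this list, which is how the wrong index gets picked without any error.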
If we return an empty list in the case where `index_name` isn't there, the error handling logic in caller functions works as expected, and the issue can be detected at the user application level:
```python
def sort_by_index_name(
    lst: List[Dict[str, Any]], index_name: str
) -> List[Dict[str, Any]]:
    """Sort first element to match the index_name if exists"""
    if index_name not in [el.get("name") for el in lst]:
        return []
    return sorted(lst, key=lambda x: x.get("name") != index_name)
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.3.5
> langsmith: 0.1.142
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.3.0
> langchain_together: 0.2.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.46.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,650,705,824 | rust | build and target aren't distinguished during llvm building | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
Cross-compiling rust 1.82.0 itself fails with a custom LLVM used on the build triple and a vendored LLVM on the host.
```
$ ./x.py build -j30 -v
...
-- Installing: /root/la64_rust/rustc-1.82.0-src/build/loongarch64-unknown-linux-musl/llvm/lib/cmake/llvm/LLVMConfigExtensions.cmake
cargo:root=/root/la64_rust/rustc-1.82.0-src/build/loongarch64-unknown-linux-musl/llvm
running: "/root/la64_rust/rustc-1.82.0-src/build/x86_64-unknown-linux-musl/llvm/bin/llvm-config" "--version" (failure_mode=Exit) (created at src/core/build_steps/llvm.rs:508:17, executed at src/core/build_steps/llvm.rs:508:60)
Command "/root/la64_rust/rustc-1.82.0-src/build/x86_64-unknown-linux-musl/llvm/bin/llvm-config" "--version" (failure_mode=Exit) did not execute successfully.
It was not possible to execute the command: Os { code: 2, kind: NotFound, message: "No such file or directory" }
Traceback (most recent call last):
File "/root/la64_rust/rustc-1.82.0-src/./x.py", line 50, in <module>
bootstrap.main()
File "/root/la64_rust/rustc-1.82.0-src/src/bootstrap/bootstrap.py", line 1208, in main
bootstrap(args)
File "/root/la64_rust/rustc-1.82.0-src/src/bootstrap/bootstrap.py", line 1184, in bootstrap
run(args, env=env, verbose=build.verbose, is_bootstrap=True)
File "/root/la64_rust/rustc-1.82.0-src/src/bootstrap/bootstrap.py", line 195, in run
raise RuntimeError(err)
RuntimeError: failed to run: /root/la64_rust/rustc-1.82.0-src/build/bootstrap/debug/bootstrap build -j30 -v
```
`config.toml` is configured as
```toml
change-id = 129295
profile = "user"
[llvm]
link-shared = true
static-libstdcpp = false
use-libcxx = true
targets = "X86;LoongArch;WebAssembly"
[build]
build = "x86_64-unknown-linux-musl"
host = ["loongarch64-unknown-linux-musl"]
target = ["loongarch64-unknown-linux-musl"]
cargo = "/usr/bin/cargo"
rustc = "/usr/bin/rustc"
rustfmt = "/usr/bin/rustfmt"
locked-deps = true
vendor = true
tools = ["cargo"]
sanitizers = false
profiler = true
# Generating docs fails with the wasm32-* targets
docs = false
[install]
prefix = "/usr"
[rust]
debuginfo-level-std = 2
channel = "stable"
description = "eweOS rust %RUSTVER%"
rpath = false
backtrace-on-ice = true
remap-debuginfo = true
jemalloc = false
llvm-libunwind = "system"
codegen-units-std = 1
deny-warnings = false
lld = false
musl-root = "/usr"
[target.x86_64-unknown-linux-musl]
crt-static = false
llvm-config = "/usr/bin/llvm-config"
[target.loongarch64-unknown-linux-musl]
crt-static = false
cc="clang"
cxx="clang++"
linker="/usr/bin/loongarch64-unknown-linux-musl-clang"
musl-root="/opt/sysroot-loongarch64"
musl-libdir="/opt/sysroot-loongarch64/usr/lib"
```
The error occurs in `src/bootstrap/src/core/build_steps/llvm.rs: find_llvm_lib_name()`[1],
```rust
let find_llvm_lib_name = |extension| {
let version =
command(&res.llvm_config).arg("--version").run_capture_stdout(builder).stdout();
let major = version.split('.').next().unwrap();
match &llvm_version_suffix {
Some(version_suffix) => format!("libLLVM-{major}{version_suffix}.{extension}"),
None => format!("libLLVM-{major}.{extension}"),
}
};
```
which tries to exec the `llvm-config` of the LLVM built for the build triple, but we don't actually build it according to the configuration.
This problematic path is generated in `prebuilt_llvm_config()`[2]
```rust
let mut llvm_config_ret_dir = builder.llvm_out(builder.config.build);
llvm_config_ret_dir.push("bin");
let build_llvm_config = llvm_config_ret_dir.join(exe("llvm-config", builder.config.build));
```
[1]:
https://github.com/rust-lang/rust/blob/1.82.0/src/bootstrap/src/core/build_steps/llvm.rs#L521-L530
[2]: https://github.com/rust-lang/rust/blob/1.82.0/src/bootstrap/src/core/build_steps/llvm.rs#L118-L120 | A-cross,T-bootstrap,C-bug | low | Critical |
2,650,719,349 | go | cmd/go: TestScript/fileline failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/fileline"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731521742206735809)):
=== RUN TestScript/fileline
=== PAUSE TestScript/fileline
=== CONT TestScript/fileline
script_test.go:139: 2024-11-11T23:59:59Z
script_test.go:141: $WORK=/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-2262759160/tmpdir2484956214/fileline2287564947
script_test.go:163:
PATH=/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-2262759160/tmpdir2484956214/testbin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/cache/tools/bin:/Users/swarming/.swarming/w/ir/bbagent_utility_packages:/Users/swarming/.swarming/w/ir/bbagent_utility_packages/bin:/Users/swarming/.swarming/w/ir/cipd_bin_packages:/Users/swarming/.swarming/w/ir/cipd_bin_packages/bin:/Users/swarming/.swarming/w/ir/cipd_bin_packages/cpython3:/Users/swarming/.swarming/w/ir/cipd_bin_packages/cpython3/bin:/Users/swarming/.swarming/w/ir/cache/cipd_client:/Users/swarming/.swarming/w/ir/cache/cipd_client/bin:/Users/swarming/.swarming/cipd_cache/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
HOME=/no-home
CCACHE_DISABLE=1
GOARCH=amd64
...
> ! go run ../../gopath/x/y/z/err.go
[stderr]
../x/y/z/err.go:1:22: cannot find package "bar" in any of:
/Users/swarming/.swarming/w/ir/x/w/goroot/src/bar (from $GOROOT)
/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-2262759160/tmpdir2484956214/fileline2287564947/gopath/src/bar (from $GOPATH)
[exit status 1]
> stderr ^..[\\/]x[\\/]y[\\/]z[\\/]err.go:
matched: ../x/y/z/err.go:1:22: cannot find package "bar" in any of:
script_test.go:405: go was invoked but no counters were incremented
--- FAIL: TestScript/fileline (0.60s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,650,724,905 | tauri | [feat] Push Notifications | ### Describe the problem
Several platforms supported by Tauri - macOS, iOS, and Android, to name a few - have some kind of built-in push notifications system. For Apple systems, of course there is [APNS](https://developer.apple.com/documentation/usernotifications/registering-your-app-with-apns), and for Android and Firebase-enabled apps, there is [FCM](https://firebase.google.com/docs/cloud-messaging).
Tauri isn't yet able to support these APIs, as far as I can tell. On Apple platforms, the developer must [register a method with the `AppDelegate` and call a method on `NSApplication`](https://developer.apple.com/documentation/usernotifications/registering-your-app-with-apns), which isn't easy to do from Rust. iOS uses a similar flow, but with `UIApplication` instead.
Push Notifications are likely to be a common need for Tauri developers, especially on mobile platforms.
### Describe the solution you'd like
I think Tauri should add calls to the `UIApplication` and/or `NSApplication` to register for push notification support on behalf of the developer; or, Tauri should make available such calls so the developer can implement support themselves.
### Alternatives considered
1) I looked for plugins and found none
2) I looked for easy Rust bindings to register for APNS, and found none because these APIs use `{NS,UI}Application`
3) I looked into alternative push systems (3rd-party), but these aren't integrated well with underlying operating systems like Android and iOS, as compared to native push infrastructure
### Additional context
### General support
In most cases, "registering" for these push systems consists of a few relatively simple steps:
1) "Registering" or "requesting" a token from the application, using system APIs, potentially within the app entrypoint
2) "Receiving" the token via some callback or delegate mechanism (the token is typically just some raw bytes or a string)
3) Making the received token available to the developer
- Usually the developer will want to send this token to a server
- Callbacks or even just a method to retrieve the latest push token (if any) would work fine for callers, probably
### Apple Platforms
**macOS**
It looks like [`tao` already sets up an `AppDelegate`](https://github.com/tauri-apps/tao/blob/88f0b4190f77d215f13e88f826c6b5bf06cafe57/src/platform_impl/macos/app_delegate.rs#L103-L107), where we would need to call [`registerForRemoteNotifications`](https://developer.apple.com/documentation/appkit/nsapplication/2967172-registerforremotenotifications), and then receive the token at the delegate method [`application(_:didRegisterForRemoteNotificationsWithDeviceToken:)`](https://developer.apple.com/documentation/appkit/nsapplicationdelegate/1428766-application).
**iOS**
Nearly identical to macOS, but with [`UIApplication.registerForRemoteNotifications()`](https://developer.apple.com/documentation/uikit/uiapplication/1623078-registerforremotenotifications) and [the equivalent delegate method](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1622958-application).
### Android
Firebase uses a `Service` on Android to implement messaging within an app. This service is declared in the `AndroidManifest.xml` and then implemented within app code; the service:
- Requests a token
- Receives the token/receives updated tokens
- Receives pushed messages directed to the app
This would be implemented in Kotlin as part of an Android mobile plugin for Tauri. Luckily there are no changes anticipated for `tao` or `tauri` itself; Android push can happen through the plugin alone.
### Windows
Windows apparently has [Windows Notifications System (WNS)](https://learn.microsoft.com/en-us/windows/apps/windows-app-sdk/notifications/push-notifications/push-quickstart), which is accessible via the [`windows`](https://microsoft.github.io/windows-docs-rs/doc/windows/?search=PushNotifications) crate. So far it appears to function similar to Android, in that a library can obtain a push channel without special code in the app entrypoint.
### Linux
I'm not aware of native push infrastructure for Linux. Other SDKs can be used on Linux which do not require instrumentation in the app entrypoint.
| type: feature request | medium | Critical |
2,650,729,697 | PowerToys | Mouse zoom utility. | ### Description of the new feature / enhancement
A mouse utility that lets users zoom in on small text and content, improving readability and focus for better visualization.
### Scenario when this would be used?
A mouse utility that enables users to quickly zoom in on small text and content, improving readability for tasks that involve fine details, like inspecting console log messages or object fields in web development. This eliminates the need to constantly zoom the browser or strain to read small text.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,650,770,585 | pytorch | Halide CPU backend numerical issue for inplace add | ### ๐ Describe the bug
Found this bug when debugging a test failure in https://github.com/pytorch/pytorch/pull/140249
```
import torch
from torch._inductor import config
config.cpu_backend = "halide"
@torch.compile
def f(x, y):
return x.add_(y)
x = torch.ones([2, 12, 13, 17]).transpose(1, 2)
y = torch.ones([2, 13, 1, 17])
f(x, y)
print(f"{x.numel()} {x.sum()}")
```
Correct result should be:
```
5304 10608.0
```
But the halide CPU backend generates:
```
5304 10680.0
```
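As a sanity check of the expected number (without running torch): `y` has shape `[2, 13, 1, 17]` and broadcasts over dim 2 of the transposed `x` (shape `[2, 13, 12, 17]`), so every element ends up `1.0 + 1.0 = 2.0`:

```python
# Hypothetical check: x.transpose(1, 2) has shape [2, 13, 12, 17]
numel = 2 * 13 * 12 * 17              # 5304 elements
expected_sum = numel * (1.0 + 1.0)    # each 1.0 gets +1.0 from the broadcast add
print(numel, expected_sum)            # 5304 10608.0
```

The Halide result of 10680.0 (72 too high) is consistent with 72 elements being added twice, suggesting the in-place store indexing on the transposed layout is wrong.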
### Error logs
.
### Versions
.
cc @ezyang @chauhang @penguinwu @jansel | triaged,oncall: pt2,topic: inductor halide backend | low | Critical |
2,650,776,768 | pytorch | Flaky bot might be missing some data when making a judgement | ### ๐ Describe the bug
See https://github.com/pytorch/pytorch/issues/139057 and https://github.com/pytorch/pytorch/issues/137771 that were closed today as no longer reproduced, only to be reopened soon afterwards as still happening
Which hints at some gaps in the logic that tracks whether disabled tests are actually healthy
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,650,782,791 | neovim | Terminal cleanup broken when resizing text in tmux, In WSL2 | ### Problem
**Description**:
Found an issue with how neovim cleans up the terminal screen when exiting after screen resize, but only happens inside tmux (or possibly other buffer based scroll-restricted environments).
**Current behavior**:
Neovim only partially cleans the screen, showing some of what was previously displayed instead of properly restoring the terminal state, when the terminal was resized inside `nvim`.
**Theory**:
This may occur because `tmux` stores output in a buffer and only shows the visible portion. `neovim` likely uses only the displayed content for cleanup and reprints, causing it to miss parts of the buffer outside the viewport. This discrepancy results in incomplete screen restoration when exiting `neovim` within `tmux` if the screen was resized inside `neovim`.
**Possible Solution**:
If `neovim` could account for the full buffer `tmux` holds (not just the visible portion), it might better handle cleanup and restore the terminal state consistently. I believe vim already does that, because this doesn't happen with vim.
**Environment**:
- Only tested on WSL ( but I belive this would be same on actual linux distros too )
- Happens consistently in tmux
- Might affect other scroll-restricted environments
**Screenshots**:

> Let me know if you need any other info or want me to test something specific!
### Steps to reproduce
1. Start a tmux session
2. Have some terminal history (like 4-5 commands) above for better visibility
3. Open neovim (can be either empty buffer or existing file)
4. Resize the screen size (using ctrl-minus or equivalent zoom out)
5. Fill the screen with content (even empty lines work)
6. Exit neovim with :q!
### Expected behavior
**Expected behavior**:
Neovim should properly clean up the entire screen and restore the terminal state correctly, regardless of screen resize operations.
### Nvim version (nvim -v)
NVIM v0.11.0-dev
### Vim (not Nvim) behaves the same?
no, vim 8.2 ( only nvim failed even with `nvim -u DEFAULTS` )
### Operating system/version
Windows 11 ( WSL2 - Ubuntu )
### Terminal name/version
Windows terminal
### $TERM environment variable
xterm-256color
### Installation
System Package Manager (APT) | bug,tui | low | Critical |
2,650,786,446 | svelte | Svelte 5 and CSP | @eltigerchino please find a minimal reproduction attempt [here](https://github.com/mulder999/svelte-bug-12879)
This is basically the standard sveltekit demo, and adjustments were made in commit `92df14f` (static adapter + CSP).
To reproduce:
1. `npm run build`
2. `npm run preview`
3. Open project home page and start the browser console
4. Reload page -> csp style issue
5. Click on a button -> more csp style issue
On my large project, I also have inline script issues, but this doesn't seem visible with this minimalistic project.
With svelte4, using `"self"` and `"blob:"` for `script-src` was enough.
_Originally posted by @mulder999 in https://github.com/sveltejs/kit/discussions/12879#discussioncomment-11055550_ | bug | low | Critical |
2,650,790,769 | pytorch | inconsistency in ```torch.special.i0e``` on CPU and GPU | ### ๐ Describe the bug
Testing the function `torch::special::i0e()` with a bfloat16 tensor for consistency between CPU and GPU.
```cpp
#include <iostream>
#include <torch/torch.h>
int main() {
torch::Tensor self = torch::tensor(
{
{
{
{{-0.1064, 0.7500}, {-1.7109, -2.1562}},
{{-0.1641, 1.4062}, {-0.1582, 2.5312}},
{{0.5391, 1.3203}, {-0.6133, -0.2676}}
},
{
{{-0.5820, -0.8281}, {-0.1216, 0.3105}},
{{-1.3672, 0.2715}, {1.2656, -0.6094}},
{{2.4688, 0.5000}, {1.5156, 0.0405}}
}
},
{
{
{{1.1250, 0.7031}, {-0.5469, -1.0547}},
{{2.2344, -0.4668}, {-0.2412, -1.0781}},
{{-2.2031, -0.8320}, {0.0942, 0.6484}}
},
{
{{0.7500, -2.4531}, {-1.1875, 0.7422}},
{{0.2793, 0.1196}, {0.3809, 0.4785}},
{{1.3203, -0.3047}, {0.3262, 0.1484}}
}
}
},
torch::kBFloat16);
auto result_cpu = torch::special::i0e(self);
torch::Tensor self_cuda = self.cuda();
auto result_gpu = torch::special::i0e(self_cuda);
std::cout << "initialized tensor (CPU):\n" << self << std::endl;
std::cout << "CPU result: \n" << result_cpu << std::endl;
std::cout << "GPU result: \n" << result_gpu << std::endl;
bool inconsistent = !torch::allclose(result_cpu, result_gpu.cpu(), 1e-03, 1e-02);
std::cout << "inconsistency with atol=1e-02 and rtol=1e-03: " << std::boolalpha << inconsistent << std::endl;
return 0;
}
```
outputs (the values in matrices (2,2,1) and (2,2,2) show notable differences in results between CPU and GPU):
```
initialized tensor (CPU):
(1,1,1,.,.) =
-0.1064 0.7500
-1.7109 -2.1562
(2,1,1,.,.) =
1.1250 0.7031
-0.5469 -1.0547
(1,2,1,.,.) =
-0.5820 -0.8281
-0.1216 0.3105
(2,2,1,.,.) =
0.7500 -2.4531
-1.1875 0.7422
(1,1,2,.,.) =
-0.1641 1.4062
-0.1582 2.5312
(2,1,2,.,.) =
2.2344 -0.4668
-0.2412 -1.0781
(1,2,2,.,.) =
-1.3672 0.2715
1.2656 -0.6094
(2,2,2,.,.) =
0.2793 0.1196
0.3809 0.4785
(1,1,3,.,.) =
0.5391 1.3203
-0.6133 -0.2676
(2,1,3,.,.) =
-2.2031 -0.8320
0.0942 0.6484
(1,2,3,.,.) =
2.4688 0.5000
1.5156 0.0405
(2,2,3,.,.) =
1.3203 -0.3047
0.3262 0.1484
[ CPUBFloat16Type{2,2,3,2,2} ]
CPU result:
(1,1,1,.,.) =
0.9023 0.5430
0.3398 0.2949
(2,1,1,.,.) =
0.4355 0.5586
0.6211 0.4512
(1,2,1,.,.) =
0.6055 0.5156
0.8906 0.7500
(2,2,1,.,.) =
0.5391 0.2734
0.4180 0.5391
(1,1,2,.,.) =
0.8555 0.3828
0.8594 0.2676
(2,1,2,.,.) =
0.2891 0.6602
0.7969 0.4473
(1,2,2,.,.) =
0.3887 0.7773
0.4062 0.5938
(2,2,2,.,.) =
0.7656 0.8750
0.7148 0.6484
(1,1,3,.,.) =
0.6250 0.3965
0.5938 0.7773
(2,1,3,.,.) =
0.2891 0.5156
0.9102 0.5781
(1,2,3,.,.) =
0.2715 0.6445
0.3652 0.9609
(2,2,3,.,.) =
0.3984 0.7578
0.7383 0.8594
[ CPUBFloat16Type{2,2,3,2,2} ]
GPU result:
(1,1,1,.,.) =
0.9023 0.5430
0.3398 0.2949
(2,1,1,.,.) =
0.4355 0.5586
0.6211 0.4512
(1,2,1,.,.) =
0.6055 0.5156
0.8906 0.7500
(2,2,1,.,.) =
0.5430 0.2734
0.4219 0.5430
(1,1,2,.,.) =
0.8555 0.3828
0.8594 0.2676
(2,1,2,.,.) =
0.2891 0.6602
0.7969 0.4473
(1,2,2,.,.) =
0.3887 0.7773
0.4062 0.5938
(2,2,2,.,.) =
0.7695 0.8906
0.7070 0.6562
(1,1,3,.,.) =
0.6250 0.3965
0.5938 0.7773
(2,1,3,.,.) =
0.2910 0.5156
0.9102 0.5781
(1,2,3,.,.) =
0.2715 0.6445
0.3652 0.9609
(2,2,3,.,.) =
0.3965 0.7539
0.7422 0.8672
[ CUDABFloat16Type{2,2,3,2,2} ]
inconsistency with atol=1e-02 and rtol=1e-03: true
```
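For the bfloat16 inputs where CPU and GPU disagree (e.g. `0.7500` in the (2,2,1) block: CPU 0.5391 vs GPU 0.5430), a stdlib reference value helps judge which result is closer. This is a rough sketch using the series definition i0e(x) = exp(-|x|) * sum_k (x^2/4)^k / (k!)^2, not PyTorch's actual kernel:

```python
import math

def i0e_ref(x: float, terms: int = 30) -> float:
    """Reference exponentially scaled modified Bessel function: exp(-|x|) * I0(x)."""
    q = (x * x) / 4.0
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= q / (k * k)   # builds (x^2/4)^k / (k!)^2 incrementally
        s += term
    return math.exp(-abs(x)) * s

print(round(i0e_ref(0.75), 4))  # ~0.5412, between the CPU (0.5391) and GPU (0.5430) values
```

Both backend results are within one bfloat16 ulp of the reference, so this looks like rounding-mode divergence rather than an outright wrong kernel.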
### Versions
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 16.0.4 (https://github.com/llvm/llvm-project ae42196bc493ffe877a7e3dff8be32035dea4d07)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.10
Is CUDA available: N/A
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.78
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==3.8.4
[pip3] numpy==1.19.2
[pip3] numpydoc==1.1.0
[pip3] torch==2.2.0a0+git9fa3350
[conda] blas 1.0 mkl
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] torch 2.2.0a0+git9fa3350 dev_0
cc @ptrblck @msaroufim @mruberry @kshitij12345 | module: numerical-stability,module: cuda,triaged,module: bfloat16,module: special | low | Critical |
2,650,825,791 | tauri | [bug] It still doesn't display the window borders correctly on Windows 10. | ### Describe the bug
When I set decorations to false, the initial window has no left or bottom border. However, when I resize the window, they appear. But it's noticeable that their color is different: slightly darker.


### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
โ WebView2: 130.0.2849.80
โ MSVC: Visual Studio Community 2022
โ rustc: 1.82.0 (f6e511eec 2024-10-15)
โ cargo: 1.82.0 (8f40fc59f 2024-08-21)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.13.1
- pnpm: 9.12.3
- yarn: 1.22.22
- npm: 10.8.1
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.0.2
- @tauri-apps/api: 2.1.1
- @tauri-apps/cli: 2.0.2 (outdated, latest: 2.1.0)
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell: 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: SolidJS
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: Windows,status: needs triage | low | Critical |
2,650,829,829 | neovim | 'listchars': distinguish CR/LF; don't show EOL char if newline missing at EOF | ### Problem
Currently, Neovim only has the `eol` option. `eol` can't express CR and LF. Even if a text file doesn't have an end-of-file LF, it still shows the `eol` character.
Related issue for Vim: https://github.com/vim/vim/issues/6981
### Expected behavior
In Unix-like systems which use LF as a new line,
* if there is no LF in the end of the file, no eol character is shown.
* listchars has an option to set the character for the hidden CR. | enhancement,needs:vim-patch,options | low | Minor |
2,650,917,787 | node | Add a way to get the enabled conditions | ### What is the problem this feature will solve?
Currently there is no way to get the enabled conditions easily. It requires checking `process.env.NODE_OPTIONS` and `process.execArgv` which is not robust. It should be noted that although it is an edge case, the `process.env.NODE_OPTIONS` check should be done before any code that modifies that variable.
Some concrete cases:
1. Give custom resolvers to imitate Node.js's resolution (this one can be solved if `parent` parameter of `import.meta.resolve` is out of experimental, but I guess exposing `conditions` is less controversial than graduating that API from experimental)
- Vite throws a friendly error when an optional dependency is not installed. To do that, Vite wants to check whether a package is installed without using `require` or `import`, so it uses the custom resolver. If Vite used `require` or `import`, the resolve result would get cached and require the program to be restarted.
- Vite bundles the config to detect the files used by the config, so it can restart when those files are modified. When bundling the config, Vite rewrites the specifiers to absolute paths so that they point to the same paths as before bundling.
- yarn seems to parse them on their own: https://github.com/yarnpkg/berry/blob/f59bbf9f3828865c14b06a3e5cc3ae284a0db78d/packages/yarnpkg-pnp/sources/loader/makeApi.ts#L199
- ts-node seems to parse them on their own: https://github.com/TypeStrong/ts-node/blob/ddb05ef23be92a90c3ecac5a0220435c65ebbd2a/dist-raw/node-internal-modules-cjs-helpers.js#L15
2. Let frameworks show a warning when the user didn't set required / expected conditions.
- For example, if frameworks expect users to set a `custom` condition, they currently have to parse the conditions themselves to show that warning.
### What is the feature you are proposing to solve the problem?
Add `conditions` variable in `node:module`.
Example code:
```js
// node -C development ./foo.mjs
import { conditions } from 'node:module';
console.log(conditions) // ['development']
```
I didn't add `node` to `conditions` in the example above, but maybe it makes sense to add that too.
But for `import` and `require`, I'm not sure if those should be added in that case. Probably the option would be to add both or neither.
### What alternatives have you considered?
- Exposing `getOptionValue` (#36935): this one got stale because it is difficult to make the API stable. | module,feature request | low | Critical |
2,650,932,167 | pytorch | DISABLED test_comprehensive_cross_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_cross_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32836195920).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_cross_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1395, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
^^^^^^^^
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=Tensor[size=(5, 3, 5), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(5, 3, 5), device="cuda:0", dtype=torch.float32]], kwargs={'dim': '1'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_cross_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,650,936,064 | create-react-app | problem with jest matchers | <!--
Please note that your issue will be fixed much faster if you spend about
half an hour preparing it, including the exact reproduction steps and a demo.
If you're in a hurry or don't feel confident, it's fine to report bugs with
less details, but this makes it less likely they'll get fixed soon.
In either case, please use this template and fill in as many fields below as you can.
Note that we don't provide help for webpack questions after ejecting.
You can find webpack docs at https://webpack.js.org/.
-->
### Describe the bug
I tried to generate a new project using `yarn create react-app my-project --template typescript`, but I get jest matcher errors right away.
### Environment
local dev
### Steps to reproduce
yarn create react-app my-project --template typescript
cd my-project
yarn start

Thanks!
| needs triage,issue: bug report | low | Critical |
2,651,042,535 | vscode | Do not show publisher information for workspace extensions install recommendation notification | IIRC this used to say Microsoft instead of ms-vscode

| feature-request,extensions | low | Major |
2,651,042,984 | godot | Loss of float precision when using `save as` on a text resource or scene | ### Tested versions
current master @ ec6a1c0e792ac8be44990749800a4654a293b9ee
### System information
macOS 14.5.0, M2 Pro, Vulkan
### Issue description
Re-saving a text scene/resource as another text scene/resource truncates float properties that have more than 6 significant digits.
This is an issue with [String::num_scientific](https://github.com/godotengine/godot/blob/ec6a1c0e792ac8be44990749800a4654a293b9ee/core/string/ustring.cpp#L1960);
[the default precision for `printf("%lg")` is 6 significant digits](https://en.cppreference.com/w/c/io/fprintf).
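Python's `%`-formatting follows the same C `printf` conversion rules, so the truncation is easy to demonstrate outside Godot (a sketch of the symptom, not Godot code):

```python
# %g (like C's %lg) defaults to 6 significant digits, which matches the
# truncation seen when re-saving the scene.
value = 0.570710678118
print("%g" % value)     # -> 0.570711 (precision lost)
print("%.17g" % value)  # 17 significant digits round-trip any double
```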
### Steps to reproduce
Open the MRP in the editor
Re-save main.tscn as `main-2.tscn`
Observe that MobTimer.wait_time has reduced in precision from 0.570710678118 to 0.570711
### Minimal reproduction project (MRP)
[dodge_the_creeps.zip](https://github.com/user-attachments/files/17711296/dodge_the_creeps.zip)
| bug,topic:core | low | Minor |
2,651,050,043 | vscode | Support IME contextual conversion (pre- and post-conversion references) | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
## Summary
This request is for Visual Studio Code to support the IME (Input Method Editor) feature known as "contextual conversion." This function, crucial for East Asian languages like Japanese and Chinese, enables IMEs to refer to surrounding text to suggest accurate conversions based on context, enhancing both accuracy and typing efficiency.
## Explanation and Benefits
IME contextual conversion allows the IME to use surrounding text (both before and after the input) to suggest the most accurate characters and words. For instance, when typing in Japanese, the IME can consider the entire sentence to provide correct kanji or hiragana suggestions. Currently, VS Code doesnโt provide this surrounding context, which limits the IME's predictive accuracy, often requiring manual corrections.
By supporting this feature:
- **Typing Accuracy Improves**: IMEs offer better suggestions based on context, reducing errors.
- **Enhanced Productivity**: Users can type faster with fewer interruptions.
- **Broader Accessibility**: VS Code becomes more user-friendly for non-Latin script languages.
Implementing this would likely involve integrating OS-level features, such as Windowsโ `IMR_DOCUMENTFEED`, to share document context with the IME. For more information on implementing this feature in Windows, you may find the following articles helpful:
- **[How to support the IME's contextual conversion feature](https://topiyama.hatenadiary.org/entry/20070703/p1)** (in Japanese): This article provides an overview of how applications can support IME's contextual conversion by utilizing the `IMR_DOCUMENTFEED` notification code.
- **[Making a TextBox support IME contextual conversion (System.Windows.Forms)](https://wwwcfe.hatenablog.com/entry/20100512/textbox_imr_documentfeed)** (in Japanese): This post discusses implementing IME contextual conversion in a Windows Forms TextBox control, including sample code and explanations. | feature-request,editor-input-IME | low | Critical |
2,651,058,867 | deno | refactor(ext/node): do not prefix private function with `_` in node compat JS implementation | In the Node compat JS implementation, we generally follow the Node.js source code as closely as possible, but with a `_` prefix for private functions in a file (e.g. `afterConnect` in `lib/net.js` in Node.js becomes `_afterConnect` in our code base). This `_` prefix makes comparing the two code bases confusing, so we should remove it.
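A tiny hypothetical before/after of the rename being proposed:

```javascript
// Node.js lib/net.js defines a module-private helper (simplified here):
function afterConnect(status) {
  return status === 0;
}
// Deno's port currently names the same helper `_afterConnect`; dropping
// the `_` keeps the two files diffable line-for-line.
console.log(afterConnect(0));
```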
related: https://github.com/denoland/deno/pull/26661#discussion_r1837124451 | refactor | low | Minor |
2,651,068,850 | godot | Switching Editing and Preview keep reloading the Editor | ### Tested versions
The Godot editor keeps reloading whenever you preview the game, regardless of whether the project is simple or complex. This constant reloading is frustrating, as it forces you to wait for the editor to reload each time you switch between preview and editing, which slows down development. This issue did not occur in versions 4.1 and below; it was introduced in version 4.2 and persists in the latest versions. Additionally, the splash screen for Godot is not displaying properly.
### System information
Godot v4.4.dev3 - Android - Single-window, 1 monitor - OpenGL ES 3 (Compatibility) - PowerVR Rogue GE8300 - (4 threads)
### Issue description
https://github.com/user-attachments/assets/629d8589-82de-44cf-8cd7-22ff78fa1a47
The editor keep reloading during the preview and editing.
### Steps to reproduce
Create a project (it doesn't matter whether it's a simple or complex game); switching between preview and editing keeps reloading the editor itself.
### Minimal reproduction project (MRP)
N/A | bug,platform:android,topic:editor | low | Major |
2,651,132,937 | godot | Large Font Size Pushes Bottom Tab Bar Off Screen | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
MacBook Pro 13 inch M1 2020
### Issue description
Opening the audio tab pushes the bottom tab bar off the bottom of the screen entirely.
<img width="1552" alt="Screenshot 2024-11-11 at 9 58 21โฏPM" src="https://github.com/user-attachments/assets/5e493ce2-b4c7-438f-b8be-7274c8e04ec3">
<img width="1552" alt="Screenshot 2024-11-11 at 9 58 38โฏPM" src="https://github.com/user-attachments/assets/e588f294-79ca-4aec-a601-4cd47f5bc716">
### Steps to reproduce
1. Go to Editor Settings and set Main Font Size to 24
2. Switch to the Script view
3. Create a new script
4. Click on the bottom Audio tab
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,651,136,227 | pytorch | FlexAttention gives me an INTERNAL_ASSERT_FAILED during mask_mod | ### ๐ Describe the bug
I am running some small, batch size 1 evaluations at inference, and get the following error with Flex Attention:
```Python
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 873, in create_block_mask
mask_tensor = create_mask(mask_mod, B, H, Q_LEN, KV_LEN, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 804, in create_mask
mask = mask_mod(b, h, m, n)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/inference/echo/models/models.py", line 82, in mask_mod
kv_masked = kv_idx < source_lengths[b]
~~~~~~~~~~~~~~^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 141, in __torch_function__
return mod_index(args[0], index_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/autograd/function.py", line 585, in apply
return custom_function_call(cls, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 49, in __call__
return super().__call__(autograd_function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 440, in __call__
return wrapper()
^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 721, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 436, in wrapper
return self.dispatch(
^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 305, in dispatch
return dispatch_functorch(self, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 130, in process
return kernel(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 300, in custom_function_call_vmap
return custom_function_call_vmap_generate_rule(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 376, in custom_function_call_vmap_generate_rule
output = custom_function_call(vmapped_function, *unwrapped_operands)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 49, in __call__
return super().__call__(autograd_function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 440, in __call__
return wrapper()
^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 721, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 436, in wrapper
return self.dispatch(
^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 305, in dispatch
return dispatch_functorch(self, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 130, in process
return kernel(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 300, in custom_function_call_vmap
return custom_function_call_vmap_generate_rule(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 376, in custom_function_call_vmap_generate_rule
output = custom_function_call(vmapped_function, *unwrapped_operands)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 49, in __call__
return super().__call__(autograd_function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 440, in __call__
return wrapper()
^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 721, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 436, in wrapper
return self.dispatch(
^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 305, in dispatch
return dispatch_functorch(self, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 130, in process
return kernel(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 300, in custom_function_call_vmap
return custom_function_call_vmap_generate_rule(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 376, in custom_function_call_vmap_generate_rule
output = custom_function_call(vmapped_function, *unwrapped_operands)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 49, in __call__
return super().__call__(autograd_function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 440, in __call__
return wrapper()
^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 721, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 436, in wrapper
return self.dispatch(
^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_ops.py", line 305, in dispatch
return dispatch_functorch(self, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 130, in process
return kernel(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 300, in custom_function_call_vmap
return custom_function_call_vmap_generate_rule(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/autograd_function.py", line 375, in custom_function_call_vmap_generate_rule
with interpreter.lower():
File "/opt/conda/envs/main-env/lib/python3.11/contextlib.py", line 158, in __exit__
self.gen.throw(typ, value, traceback)
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/pyfunctorch.py", line 89, in temporarily_pop_interpreter_stack
push_dynamic_layer_stack(saved)
RuntimeError: layerId == dynamic_layer.layerId() INTERNAL ASSERT FAILED at "../aten/src/ATen/functorch/DynamicLayer.cpp":236, please report a bug to PyTorch.
Exception ignored in: <generator object vmap_increment_nesting at 0x7239bcba9fc0>
Traceback (most recent call last):
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 469, in vmap_increment_nesting
_vmap_decrement_nesting()
RuntimeError: !dynamicLayerStack.empty() INTERNAL ASSERT FAILED at "../aten/src/ATen/functorch/DynamicLayer.cpp":217, please report a bug to PyTorch.
```
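From the traceback, the failure happens while `create_block_mask` vmaps a `mask_mod` that indexes a captured per-batch tensor with the batch index. A pure-Python stand-in of that shape (all names, e.g. `source_lengths`, are reconstructed from the traceback, not runnable repro code):

```python
# Stand-in for the captured per-batch KV-lengths tensor (assumed content).
source_lengths = [7]

def mask_mod(b, h, q_idx, kv_idx):
    # In the real run this indexes a CUDA tensor with the vmapped batch
    # index `b`, which is where the functorch INTERNAL ASSERT fires.
    return kv_idx < source_lengths[b]

print(mask_mod(0, 0, 0, 3))  # True
```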
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241106+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.10 | packaged by conda-forge | (main, Sep 30 2024, 18:08:57) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 26
On-line CPU(s) list: 0-25
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 26
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
CPU MHz: 2000.000
BogoMIPS: 4000.00
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 832 KiB
L1i cache: 832 KiB
L2 cache: 104 MiB
L3 cache: 416 MiB
NUMA node0 CPU(s): 0-25
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb sti
bp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdir
i movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytest-flake8==1.2.2
[pip3] pytorch-metric-learning==2.6.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241106+cu124
[pip3] torch-audiomentations==0.11.1
[pip3] torchdyn==1.0.6
[pip3] torcheval==0.0.7
[pip3] torchmetrics==1.4.2
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0.dev20241106+cu124
[pip3] triton==3.1.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ignite 0.5.0.post2 py_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libopenvino-pytorch-frontend 2024.3.0 he02047a_0 conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.4.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch-metric-learning 2.6.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241106+cu124 pypi_0 pypi
[conda] torch-audiomentations 0.11.1 pypi_0 pypi
[conda] torch-pitch-shift 1.2.5 pypi_0 pypi
[conda] torch-stoi 0.2.3 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241106+cu124 pypi_0 pypi
[conda] torchcde 0.2.5 pypi_0 pypi
[conda] torchcfm 1.0.6 pypi_0 pypi
[conda] torchdiffeq 0.2.2 pyhd8ed1ab_0 conda-forge
[conda] torchdyn 1.0.6 pypi_0 pypi
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchmetrics 1.4.2 pyhd8ed1ab_0 conda-forge
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241106+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @yanboliang @BoyuanFeng | needs reproduction,triaged,bug,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention,module: sdpa | low | Critical |
2,651,137,282 | go | x/tools/cmd/goimports: confusing error message when file contains CRLF | ### Go version
go version go1.23.3 windows/amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\runneradmin\AppData\Local\go-build
set GOENV=C:\Users\runneradmin\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\runneradmin\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\runneradmin\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=C:\hostedtoolcache\windows\go\1.23.3\x64
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\hostedtoolcache\windows\go\1.23.3\x64\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.3
set GODEBUG=
set GOTELEMETRY=local
set GOTELEMETRYDIR=C:\Users\runneradmin\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=D:\a\sflags\sflags\go.mod
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\RUNNER~1\AppData\Local\Temp\go-build1760277560=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
Run `golangci-lint` [which invokes](https://github.com/golangci/golangci-lint/issues/635#issuecomment-542293826) `goimports` on a Windows machine.
https://github.com/urfave/sflags/actions/runs/11786526137/job/32829983012
Other reports are visible from https://github.com/golangci/golangci-lint/issues/580
### What did you see happen?
```
cmd\genvalues\main.go:1: File is not `goimports`-ed (goimports)
```
### What did you expect to see?
```
cmd\genvalues\main.go:1: File contains CRLF linefeeds (goimports)
``` | NeedsInvestigation,Tools | low | Critical |
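Until the message is improved, the CRLF endings themselves are easy to detect and strip from the shell. The sketch below is illustrative (it creates its own demo file; `sed -i` as written assumes GNU sed):

```shell
# Create a demo Go file with CRLF endings, then strip the carriage returns in place.
printf 'package main\r\nfunc main() {}\r\n' > /tmp/crlf_demo.go
sed -i 's/\r$//' /tmp/crlf_demo.go
# Verify: grep for a carriage return; prints only when none remain.
! grep -q $'\r' /tmp/crlf_demo.go && echo "no CRLF remaining"
```

In a real checkout, configuring Git with `core.autocrlf=input` (or a `.gitattributes` rule) avoids the problem at checkout time rather than fixing files after the fact.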
2,651,155,333 | ollama | Embedded struct in `ToolFunction` | ### What is the issue?
https://github.com/ollama/ollama/blob/65973ceb6417c2e2796fa59bd3225bc7bd79b403/api/types.go#L165-L177
This makes creating tools really annoying: because `Parameters` is declared as an anonymous struct, every tool definition has to spell out the full inline struct type.
```go
package main
import (
ollama "github.com/ollama/ollama/api"
)
var modFunctions = []ollama.Tool{{
Type: "function",
Function: ollama.ToolFunction{
Name: "remove",
Description: "Remove a post when it violates a rule",
Parameters: struct {
Type string `json:"type"`
Required []string `json:"required"`
Properties map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
} `json:"properties"`
}{
Type: "object",
Required: []string{"reason"},
Properties: map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
}{
"reason": {
Type: "string",
Description: "These are the rules of the subreddit. If the post violates one of these rules, remove it.",
Enum: []string{
"actual_animal_attack",
"bad_explanatory_comment",
"direct_link_to_other_subreddit",
"does_not_fit_the_subreddit",
"leopard_in_title_or_explanatory_comment",
"no_explanatory_comment",
"uncivil_behaviour",
},
},
},
},
},
}, {
Type: "function",
Function: ollama.ToolFunction{
Name: "approve",
Description: "Approve a post when the explanatory comment explains how someone is suffering consequences from something they voted for, supported or wanted to impose on other people",
Parameters: struct {
Type string `json:"type"`
Required []string `json:"required"`
Properties map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
} `json:"properties"`
}{
Type: "object",
Required: []string{"someone", "something", "consequences"},
Properties: map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
}{
"someone": {
Type: "string",
Description: "The name of the person who voted for, supported or wanted to impose something on other people.",
},
"something": {
Type: "string",
Description: "The thing that the person voted for, supported or wanted to impose on other people.",
},
"consequences": {
Type: "string",
Description: "The consequences of the thing that the person voted for, supported or wanted to impose on other people.",
},
},
},
},
}}
```
Please make a separate struct for parameters and properties :(
### OS
Linux
### GPU
Nvidia, AMD
### CPU
Intel, AMD
### Ollama version
Docker | bug | low | Minor |
2,651,183,325 | deno | Mutual TLS authentication for servers/listen | #6170 was closed because `connectTls` can now be given a client certificate, but there is still no way for `listenTls` (or a listener upgraded with `startTls`) to obtain the client certificate of a connection for authentication purposes. | suggestion,tls | low | Minor |
2,651,203,734 | pytorch | DISABLED test_comprehensive_complex_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_complex_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32839476883).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_complex_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
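The grepping step above can be sketched as follows. The filename `ci_log.txt` is hypothetical (in practice, save the workflow's raw log to a file first); this demo creates one so the command is runnable:

```shell
# Hypothetical downloaded log; in practice save the workflow's raw job log instead.
printf 'PASS other_test\nFAILED test_comprehensive_complex_cuda_float64\n' > ci_log.txt
# List each occurrence of the flaky test with line numbers for context.
grep -n 'test_comprehensive_complex_cuda_float64' ci_log.txt
```

Because flaky tests are rerun in CI, expect multiple matches; examine the lines around each one for the actual failure output.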
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1395, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
^^^^^^^^
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float64], args=TensorList[Tensor[size=(), device="cuda:0", dtype=torch.float64]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_complex_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,651,203,835 | pytorch | DISABLED test_sparse_gradients_grad_is_view (__main__.DistributedDataParallelTest) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_gradients_grad_is_view&suite=DistributedDataParallelTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32835409615).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_gradients_grad_is_view`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 564, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 796, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 853, in _check_return_codes
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3965, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Scalars are not equal!
Expected -6 but got 0.
Absolute difference: 6
Relative difference: 1.0
Expect process 1 exit code to match Process 0 exit code of -6, but got 0
```
</details>
Test file path: `distributed/test_c10d_gloo.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 | oncall: distributed,module: flaky-tests,skipped | low | Critical |
2,651,245,526 | flutter | [flutter_svg] Incorrect behavior of BoxFit parameter | ### What package does this bug report belong to?
flutter_svg
### What target platforms are you seeing this bug on?
iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
api_repository:
dependency: "direct main"
description:
path: "packages/api_repository"
relative: true
source: path
version: "0.1.0"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: d2872f9c19731c2e5f10444b14686eb7cc85c76274bd6c16e1816bff9a3bab63
url: "https://pub.dev"
source: hosted
version: "2.12.0"
beacon_repository:
dependency: "direct main"
description:
path: "packages/beacon_repository"
relative: true
source: path
version: "0.1.0"
beacon_scanner:
dependency: "direct main"
description:
path: "packages/beacon_scanner"
relative: true
source: path
version: "0.1.0"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
dbus:
dependency: transitive
description:
name: dbus
sha256: "365c771ac3b0e58845f39ec6deebc76e3276aa9922b0cc60840712094d9047ac"
url: "https://pub.dev"
source: hosted
version: "0.7.10"
extensions:
dependency: "direct main"
description:
path: "packages/extensions"
relative: true
source: path
version: "0.1.0"
ffi:
dependency: transitive
description:
name: ffi
sha256: "16ed7b077ef01ad6170a3d0c57caa4a112a38d7a2ed5602e0aca9ca6f3d98da6"
url: "https://pub.dev"
source: hosted
version: "2.1.3"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "5398f14efa795ffb7a33e9b6a08798b26a180edac4ad7db3f231e40f82ce11e1"
url: "https://pub.dev"
source: hosted
version: "5.0.0"
flutter_local_notifications:
dependency: "direct main"
description:
name: flutter_local_notifications
sha256: ef41ae901e7529e52934feba19ed82827b11baa67336829564aeab3129460610
url: "https://pub.dev"
source: hosted
version: "18.0.1"
flutter_local_notifications_linux:
dependency: transitive
description:
name: flutter_local_notifications_linux
sha256: "8f685642876742c941b29c32030f6f4f6dacd0e4eaecb3efbb187d6a3812ca01"
url: "https://pub.dev"
source: hosted
version: "5.0.0"
flutter_local_notifications_platform_interface:
dependency: transitive
description:
name: flutter_local_notifications_platform_interface
sha256: "6c5b83c86bf819cdb177a9247a3722067dd8cc6313827ce7c77a4b238a26fd52"
url: "https://pub.dev"
source: hosted
version: "8.0.0"
flutter_localizations:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_secure_storage:
dependency: transitive
description:
name: flutter_secure_storage
sha256: "165164745e6afb5c0e3e3fcc72a012fb9e58496fb26ffb92cf22e16a821e85d0"
url: "https://pub.dev"
source: hosted
version: "9.2.2"
flutter_secure_storage_linux:
dependency: transitive
description:
name: flutter_secure_storage_linux
sha256: "4d91bfc23047422cbcd73ac684bc169859ee766482517c22172c86596bf1464b"
url: "https://pub.dev"
source: hosted
version: "1.2.1"
flutter_secure_storage_macos:
dependency: transitive
description:
name: flutter_secure_storage_macos
sha256: "1693ab11121a5f925bbea0be725abfcfbbcf36c1e29e571f84a0c0f436147a81"
url: "https://pub.dev"
source: hosted
version: "3.1.2"
flutter_secure_storage_platform_interface:
dependency: transitive
description:
name: flutter_secure_storage_platform_interface
sha256: cf91ad32ce5adef6fba4d736a542baca9daf3beac4db2d04be350b87f69ac4a8
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_secure_storage_web:
dependency: transitive
description:
name: flutter_secure_storage_web
sha256: f4ebff989b4f07b2656fb16b47852c0aab9fed9b4ec1c70103368337bc1886a9
url: "https://pub.dev"
source: hosted
version: "1.2.1"
flutter_secure_storage_windows:
dependency: transitive
description:
name: flutter_secure_storage_windows
sha256: b20b07cb5ed4ed74fc567b78a72936203f587eba460af1df11281c9326cd3709
url: "https://pub.dev"
source: hosted
version: "3.1.2"
flutter_svg:
dependency: "direct main"
description:
name: flutter_svg
sha256: "578bd8c508144fdaffd4f77b8ef2d8c523602275cd697cc3db284dbd762ef4ce"
url: "https://pub.dev"
source: hosted
version: "2.0.14"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
http:
dependency: transitive
description:
name: http
sha256: b9c29a161230ee03d3ccf545097fccd9b87a5264228c5d348202e0f0c28f9010
url: "https://pub.dev"
source: hosted
version: "1.2.2"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
intl:
dependency: "direct main"
description:
name: intl
sha256: d6f56758b7d3014a48af9701c085700aac781a92a87a62b1333b46d8879661cf
url: "https://pub.dev"
source: hosted
version: "0.19.0"
js:
dependency: transitive
description:
name: js
sha256: f2c445dce49627136094980615a031419f7f3eb393237e4ecd97ac15dea343f3
url: "https://pub.dev"
source: hosted
version: "0.6.7"
lints:
dependency: transitive
description:
name: lints
sha256: "3315600f3fb3b135be672bf4a178c55f274bebe368325ae18462c89ac1e3b413"
url: "https://pub.dev"
source: hosted
version: "5.0.0"
logging:
dependency: "direct main"
description:
name: logging
sha256: c8245ada5f1717ed44271ed1c26b8ce85ca3228fd2ffdb75468ab01979309d61
url: "https://pub.dev"
source: hosted
version: "1.3.0"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
package_info_plus:
dependency: "direct main"
description:
name: package_info_plus
sha256: da8d9ac8c4b1df253d1a328b7bf01ae77ef132833479ab40763334db13b91cce
url: "https://pub.dev"
source: hosted
version: "8.1.1"
package_info_plus_platform_interface:
dependency: transitive
description:
name: package_info_plus_platform_interface
sha256: ac1f4a4847f1ade8e6a87d1f39f5d7c67490738642e2542f559ec38c37489a66
url: "https://pub.dev"
source: hosted
version: "3.0.1"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
path_parsing:
dependency: transitive
description:
name: path_parsing
sha256: "883402936929eac138ee0a45da5b0f2c80f89913e6dc3bf77eb65b84b409c6ca"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
path_provider:
dependency: "direct main"
description:
name: path_provider
sha256: "50c5dd5b6e1aaf6fb3a78b33f6aa3afca52bf903a8a5298f53101fdaee55bbcd"
url: "https://pub.dev"
source: hosted
version: "2.1.5"
path_provider_android:
dependency: transitive
description:
name: path_provider_android
sha256: c464428172cb986b758c6d1724c603097febb8fb855aa265aeecc9280c294d4a
url: "https://pub.dev"
source: hosted
version: "2.2.12"
path_provider_foundation:
dependency: transitive
description:
name: path_provider_foundation
sha256: f234384a3fdd67f989b4d54a5d73ca2a6c422fa55ae694381ae0f4375cd1ea16
url: "https://pub.dev"
source: hosted
version: "2.4.0"
path_provider_linux:
dependency: transitive
description:
name: path_provider_linux
sha256: f7a1fe3a634fe7734c8d3f2766ad746ae2a2884abe22e241a8b301bf5cac3279
url: "https://pub.dev"
source: hosted
version: "2.2.1"
path_provider_platform_interface:
dependency: transitive
description:
name: path_provider_platform_interface
sha256: "88f5779f72ba699763fa3a3b06aa4bf6de76c8e5de842cf6f29e2e06476c2334"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
path_provider_windows:
dependency: transitive
description:
name: path_provider_windows
sha256: bd6f00dbd873bfb70d0761682da2b3a2c2fccc2b9e84c495821639601d81afe7
url: "https://pub.dev"
source: hosted
version: "2.3.0"
petitparser:
dependency: transitive
description:
name: petitparser
sha256: c15605cd28af66339f8eb6fbe0e541bfe2d1b72d5825efc6598f3e0a31b9ad27
url: "https://pub.dev"
source: hosted
version: "6.0.2"
platform:
dependency: transitive
description:
name: platform
sha256: "5d6b1b0036a5f331ebc77c850ebc8506cbc1e9416c27e59b439f917a902a4984"
url: "https://pub.dev"
source: hosted
version: "3.1.6"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
qr_scanner:
dependency: "direct main"
description:
path: "packages/qr_scanner"
relative: true
source: path
version: "0.1.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "0bd04f5bb74fcd6ff0606a888a30e917af9bd52820b178eaa464beb11dca84b6"
url: "https://pub.dev"
source: hosted
version: "1.4.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
timezone:
dependency: transitive
description:
name: timezone
sha256: ffc9d5f4d1193534ef051f9254063fa53d588609418c84299956c3db9383587d
url: "https://pub.dev"
source: hosted
version: "0.10.0"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_graphics:
dependency: transitive
description:
name: vector_graphics
sha256: "773c9522d66d523e1c7b25dfb95cc91c26a1e17b107039cfe147285e92de7878"
url: "https://pub.dev"
source: hosted
version: "1.1.14"
vector_graphics_codec:
dependency: transitive
description:
name: vector_graphics_codec
sha256: "2430b973a4ca3c4dbc9999b62b8c719a160100dcbae5c819bae0cacce32c9cdb"
url: "https://pub.dev"
source: hosted
version: "1.1.12"
vector_graphics_compiler:
dependency: transitive
description:
name: vector_graphics_compiler
sha256: ab9ff38fc771e9ee1139320adbe3d18a60327370c218c60752068ebee4b49ab1
url: "https://pub.dev"
source: hosted
version: "1.1.15"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
win32:
dependency: transitive
description:
name: win32
sha256: "84ba388638ed7a8cb3445a320c8273136ab2631cd5f2c57888335504ddab1bc2"
url: "https://pub.dev"
source: hosted
version: "5.8.0"
xdg_directories:
dependency: transitive
description:
name: xdg_directories
sha256: "7a3f37b05d989967cdddcbb571f1ea834867ae2faa29725fd085180e0883aa15"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
xml:
dependency: transitive
description:
name: xml
sha256: b015a8ad1c488f66851d762d3090a21c600e479dc75e68328c52774040cf9226
url: "https://pub.dev"
source: hosted
version: "6.5.0"
sdks:
dart: ">=3.5.1 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
1. Create a widget with a `Column` or `Row` that contains an `Expanded` element with an `svg` icon asset in it
2. Set the asset's `fit` property
### Expected results
I expect it to behave the way `Image.asset(path, fit: fit)` does.
### Actual results
Behavior differs from expected
### Code sample
<details open><summary>Code sample</summary>
```dart
Column(
children: [
Expanded(
child: AssetManager.loadImage(AssetManager.COMPUTER),
),
Expanded(
child: Row(
children: [
Expanded(child: Image.asset(AssetManager.CAMERAPNG, fit: BoxFit.contain)),
Expanded(child: SvgPicture.asset(AssetManager.CAMERA, fit: BoxFit.contain)),
],
)),
],
),
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
On the left side is `png` image, on the right is `svg`:
*BoxFit.contain*

*BoxFit.fill*

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
๏
flutter doctor -v
[โ] Flutter (Channel stable, 3.24.4, on macOS 15.1 24B83 darwin-arm64, locale
ja-JP)
โข Flutter version 3.24.4 on channel stable at
/opt/homebrew/Caskroom/flutter/3.24.4/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 603104015d (3 weeks ago), 2024-10-24 08:01:25 -0700
โข Engine revision db49896cf2
โข Dart version 3.5.4
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version
35.0.0)
โข Android SDK at /Users/mock/Library/Android/sdk
โข Platform android-35, build-tools 35.0.0
โข Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build
21.0.3+-79915917-b509.11)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 16B40
โข CocoaPods version 1.16.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2024.2)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build
21.0.3+-79915917-b509.11)
[โ] VS Code (version 1.95.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.100.0
[โ] Connected device (4 available)
โข iPhone (mobile) โข 00008110-000260A81111401E โข ios
โข iOS 18.1 22B83
โข macOS (desktop) โข macos โข
darwin-arm64 โข macOS 15.1 24B83 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin
โข macOS 15.1 24B83 darwin-arm64
โข Chrome (web) โข chrome โข
web-javascript โข Google Chrome 130.0.6723.117
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| package,team-ecosystem,has reproducible steps,P2,triaged-ecosystem,found in release: 3.24,found in release: 3.27,p: flutter_svg | low | Critical |
2,651,254,264 | vscode | deleted files don't go to trash |
Type: <b>Bug</b>
I delete files in the Explorer using the `Del` key on the keyboard; the files are deleted, but they cannot be found in the trash bin.
VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Linux x64 6.8.0-48-generic snap
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-1365U (12 x 3700)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: disabled_off<br>webnn: unavailable_software|
|Load (avg)|2, 2, 1|
|Memory (System)|31.00GB (22.61GB free)|
|Process Argv|--no-sandbox --force-user-env --crash-reporter-id 8447ec3e-1827-4960-bb59-12316efae9e7|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|wayland|
</details><details><summary>Extensions (14)</summary>
Extension|Author (truncated)|Version
---|---|---
gitlens|eam|15.6.3
ftp-simple|hum|0.7.6
git-graph|mhu|1.30.0
autopep8|ms-|2024.0.0
debugpy|ms-|2024.12.0
python|ms-|2024.18.1
vscode-pylance|ms-|2024.11.1
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.388.0
LiveServer|rit|5.7.9
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
j44ff735:31177056
```
</details>
<!-- generated by issue reporter --> | bug,linux,snap | low | Critical |
2,651,266,206 | godot | Cannot Play any Games Made with Godot | ### Tested versions
Latest released versions of several games made in Godot: some published on Steam and one published on Itch.io that has not been updated since 2022 (Friday Night Funkin Benjine), all with the same result.
### System information
Windows 11 23h2 Core i9-13900HX 32gb RAM Nvidia Laptop 4080
### Issue description
I'm running a Lenovo Legion 7i:
Windows 11 Core i9-13900HX 32gb RAM Nvidia Laptop 4080
Windows 11 is version 23H2, and the Nvidia driver is the most up-to-date one, installed via NVCleanstall (I did this as I seemed to be having some memory problems with the latest driver, though I'm not sure whether it fully helped)
I have tried running Cassette Beasts, Friday Night Funkin Benjine, Webfishing, and all of them do the exact same behavior. They start, they then immediately go into a "not responding" error state, and hang there forever.
I have tried running these games as admin and in compatibility mode, and also ran them via the command prompt to see if they throw any errors; they do not. They don't even get far enough into loading to leave anything in the Webfishing log files.
I have also checked my nvidia drivers, considering I just updated them, and everything from antialiasing to 3d settings I have tried either "off" or "Let the application decide" and neither changed this behavior.
At first I just thought it was webfishing, but it appears that literally everything made in Godot does not play well with my PC right now for some reason, so any help would be greatly appreciated.
### Steps to reproduce
Boot the game
### Minimal reproduction project (MRP)
N/A, not developer | bug,platform:windows,topic:rendering,needs testing | medium | Critical |
2,651,327,935 | godot | [4.4.dev4] Inherited scenes don't update. | ### Tested versions
v4.4.dev4.official [36e6207bb]
### System information
Godot v4.4.dev4 - Android - Single-window, 1 monitor - OpenGL ES 3 (Compatibility) - Adreno (TM) 610 - (8 threads)
### Issue description
An inherited scene doesn't update after making any changes inside it.
https://github.com/user-attachments/assets/6efff113-10a5-4bc5-8f91-06788affd5fc
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,651,368,562 | deno | Failed resolving types. [ERR_TYPES_NOT_FOUND] | Version: Deno 2.0.6
I used the lodash-es package when building a JSR plugin, but this package is full of JS files, and an error was reported when publishing:
``` ts
// lodash.ts
export * from "lodash-es";
```
``` ts
// main.ts
import { isArray } from "./lodash.ts";
export function test() {
console.log(isArray([]));
}
```
<img width="1207" alt="image" src="https://github.com/user-attachments/assets/26c82409-ceeb-43be-9a22-3852e09af2c2">
By the way, I set `"nodeModulesDir": "manual"` in deno.json, so the editor did not complain, but publishing still reported the error shown above.
But when I set `"nodeModulesDir": "auto"`, the editor reported the error shown below:
<img width="1075" alt="image" src="https://github.com/user-attachments/assets/8767b3db-d649-4530-bebf-27566b94377c">
But this works:
<img width="1077" alt="image" src="https://github.com/user-attachments/assets/d384b9e7-4632-4bf9-aa88-22726d28de98">
I looked at lodash-es and its package.json does not define types. I wonder if this is the problem?

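One possible workaround, if the missing types are indeed the cause, is to point Deno at the DefinitelyTyped package with a `@deno-types` directive; the `npm:` specifiers below are illustrative and assume `@types/lodash-es` matches your lodash-es version:

``` ts
// lodash.ts
// @deno-types="npm:@types/lodash-es"
export * from "npm:lodash-es";
```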
**Can deno support such problems in the future?** | needs investigation,types | low | Critical |
2,651,368,709 | ant-design | wrong position of resize handler in textarea | ### Reproduction link
[](https://codesandbox.io/s/antd-reproduction-template-forked-nnq4mn?file=/index.tsx)
### Steps to reproduce
1. Set `style={{resize: 'both'}}` on a TextArea with `showCount` (a TextArea with the `showCount` prop is wrapped in a span)
2. Change the size of the textarea with the resize handle icon
### What is expected?
The wrapper span should resize along with the textarea tag
### What is actually happening?
The wrapper span does not change its own size
| Environment | Info |
| --- | --- |
| antd | undefined |
| React | 5.21.4 |
| System | Windows 10 x64 |
| Browser | Microsoft Edge 129.0.2792.79 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,help wanted,Inactive | low | Major |
2,651,388,286 | react | [Compiler Bug]: `eslint-plugin-react-compiler`: npm warn deprecated `@babel/plugin-proposal-private-methods` | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
`npm install eslint-plugin-react-compiler@beta --save-dev`
### Repro steps
1.
```sh
npm install eslint-plugin-react-compiler@beta --save-dev
```
Message: `npm warn deprecated @babel/plugin-proposal-private-methods@7.18.6: This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. Please use @babel/plugin-transform-private-methods instead.`
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
N/A | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,651,395,059 | excalidraw | Keep original ratio when updating width/height of multiple elements through properties panel | In my work I need to use many images and arrange them in a grid to make them easier to read and look beautiful. Right now I change their sizes via Shape Properties one by one, because if I change their sizes via Properties at the same time, the images only change height or width and break their aspect ratio.
I want to type only the width, have the height change automatically, and keep each image's aspect ratio the same.
Thanks a lot. | enhancement,UX/UI | low | Minor |
2,651,427,994 | excalidraw | FR: Easy to grid selected images | ### Need for Image Organization
In my work, I often handle large volumes of images that need to be arranged in grids for clarity and visual appeal. An efficient way to organize images in a grid would save time and enhance readability.
### Solution Found in Miro
In Miro, an online whiteboard, I discovered a helpful feature: after selecting multiple images, a control button appears on the right side of the selection box. Dragging this button allows quick grid adjustments, creating a clean, organized layout.
### Request for Excalidraw
However, I rely on Excalidraw for most of my work. Adding a similar grid feature to Excalidraw would greatly improve its usability, allowing for professional, organized layouts with ease. | enhancement | low | Minor |
2,651,476,994 | ollama | Role field should not be repeated in streamed response chunks | ### What is the issue?
The streamed chat-completion response from ollama's openai-compatible API repeats `"role": "assistant"` in all returned chunks. This is different to OpenAI's API which just has this in the first chunk. This breaks compatibility with the `client.beta.chat.completions.stream` helper from the openai package. See also this issue https://github.com/pydantic/logfire/pull/545#discussion_r1837660027. Ollama should omit the "role" field or return `"role": None` for all chunks after the first one.
---
OpenAI chunks: "role" only in first chunk
```python
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Just say: The answer is secret."}],
stream=True,
)
for chunk in response:
print(chunk.model_dump_json(exclude_none=True))
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{"content":"","role":"assistant"},"index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{"content":"The"},"index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{"content":" answer"},"index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{"content":" is"},"index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{"content":" secret"},"index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{"content":"."},"index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
{"id":"chatcmpl-ASgaImINIA8gwsca92CCgES2VldF8","choices":[{"delta":{},"finish_reason":"stop","index":0}],"created":1731400242,"model":"gpt-4-0613","object":"chat.completion.chunk"}
```
Ollama chunks: "role" provided in every chunk
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama",
)
response = client.chat.completions.create(
model="llama3.1",
messages=[{"role": "user", "content": "Just say: The answer is secret."}],
stream=True,
# stream_options={"include_usage": True},
# max_tokens=1,
)
for chunk in response:
print(chunk.model_dump_json(exclude_none=True))
{"id":"chatcmpl-230","choices":[{"delta":{"content":"The","role":"assistant"},"index":0}],"created":1731400290,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
{"id":"chatcmpl-230","choices":[{"delta":{"content":" answer","role":"assistant"},"index":0}],"created":1731400290,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
{"id":"chatcmpl-230","choices":[{"delta":{"content":" is","role":"assistant"},"index":0}],"created":1731400290,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
{"id":"chatcmpl-230","choices":[{"delta":{"content":" secret","role":"assistant"},"index":0}],"created":1731400290,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
{"id":"chatcmpl-230","choices":[{"delta":{"content":".","role":"assistant"},"index":0}],"created":1731400290,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
{"id":"chatcmpl-230","choices":[{"delta":{"content":"","role":"assistant"},"finish_reason":"stop","index":0}],"created":1731400290,"model":"llama3.1","object":"chat.completion.chunk","system_fingerprint":"fp_ollama"}
```
Using `client.beta.chat.completions.stream` with ollama results in `"role": "assistantassistant...`.
openai docs: https://github.com/openai/openai-python/blob/646a579cdb305a9d3fba6c5f9a96011c5e2c2882/helpers.md#chat-completions-api
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama",
)
with client.beta.chat.completions.stream(
model="llama3.1",
messages=[{"role": "user", "content": "Just say: The answer is secret."}],
) as stream:
for event in stream:
pass
print(stream.get_final_completion().model_dump_json(indent=2))
{
"id": "chatcmpl-653",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "The answer is secret.",
"refusal": null,
"role": "assistantassistantassistantassistantassistantassistant",
"audio": null,
"function_call": null,
"tool_calls": [],
"parsed": null
}
}
],
"created": 1731400312,
"model": "llama3.1",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": "fp_ollama",
"usage": null
}
```
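Until that is fixed, a client-side accumulator can take the role only from the first chunk that sets it, mirroring what the OpenAI helper expects servers to do; this is a hedged sketch in which the chunk dicts are simplified stand-ins for the real response objects:

```python
def accumulate(chunks):
    # Build the final message from streamed deltas, taking "role" only
    # from the first chunk that sets it (later repeats are ignored).
    message = {"role": None, "content": ""}
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if message["role"] is None and delta.get("role"):
            message["role"] = delta["role"]
        message["content"] += delta.get("content", "")
    return message

# Simulated Ollama-style chunks where every delta repeats the role:
chunks = [
    {"choices": [{"delta": {"content": "The", "role": "assistant"}}]},
    {"choices": [{"delta": {"content": " answer", "role": "assistant"}}]},
    {"choices": [{"delta": {"content": " is secret.", "role": "assistant"}}]},
]
print(accumulate(chunks))  # {'role': 'assistant', 'content': 'The answer is secret.'}
```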
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.4.1 | bug,api | low | Minor |
2,651,524,537 | react | [Compiler Bug]: Cannot alter scroll values on a element - Mutating a value returned from 'useState' | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAMygOzgFwJYSYAEAggBTBFhwwQA2dAChGEQL4CURwAOsUXEJgcRANoI6AGioIcAUToBdIgF4iAJQQBDXADooYBAGUcWnAjIcA3Hz5EiBhABktAT2jy0aBLjKXVAHzcRHb2RBK61LQMTghoImpR9EwsNvxs0uJSVDTJzGCKHLb8MLKwxAA8ACZ4AG5EpWgqwIbydGwBFQD0NbUBaWwgbEA
### Repro steps
Sometimes we need to "mutate" an HTML element, e.g. when dealing with scrolling.
But with this code we get an error:
> Mutating a value returned from 'useState()', which should not be mutated. Use the setter function to update instead
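For reference, the usual pattern that avoids this diagnostic is to hold the DOM element in a ref rather than in state; a sketch (assuming the element only needs to be mutated imperatively, never rendered from state):

```jsx
function ScrollBox() {
  const ref = React.useRef(null); // refs may be mutated, unlike state values
  const resetScroll = () => {
    if (ref.current) ref.current.scrollTop = 0; // imperative DOM mutation
  };
  return <div ref={ref} onClick={resetScroll} />;
}
```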
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
19.0.0-beta-a7bf2bd-20241110 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,651,537,323 | material-ui | Support passing theme into `css()` (or its wrapper) from `'@emotion/react'` | ### Summary
According to the documentation ([one](https://mui.com/material-ui/migration/migrating-from-jss/#migrating-from-jss-to-emotion), [two](https://mui.com/material-ui/integrations/interoperability/#the-css-prop), [three](https://emotion.sh/docs/typescript#emotionreact)), passing the `theme` is currently only supported when using `styled()` from `'@mui/material/styles'` (which, to my understanding, is a wrapper over `styled()` from `'@emotion/styled'`). However, there's no similar wrapper for `css()` that would allow passing the theme as well. Adding this wrapper would simplify code in some cases, removing the need to create additional wrappers around React components.
### Examples
Note: in the following example `sx={{m: 1}}` would suffice, but imagine you need the classname or `SerializedStyles` here
**Current**
```jsx
const StyledBox = styled(Box)(({ theme }) => ({
margin: theme.spacing(1)
}))
function MyComponent() {
return <StyledBox/>
}
```
**Proposed**
```jsx
function MyComponent() {
return (
<Box
css={css(({ theme }) => ({
margin: theme.spacing(1)
}))}
/>
)
}
```
### Motivation
Using `css()` is sometimes preferred because it eliminates the need for an additional identifier (e.g., `StyledBox`) in the search scope. The inability to pass the `theme` limits its usage to cases where the `theme` is not needed, which can be surprising since libraries like tss-react, JSS, and Pigment allow passing the `theme` to similar utilities. Supporting this functionality for Emotion's `css()` would close this gap.
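As a rough illustration of what the wrapper would do, here is a hedged sketch with Emotion itself stubbed out (the `themedCss` name and the one-key theme are illustrative, not MUI or Emotion API):

```javascript
// Sketch only: a theme-aware css() would resolve a ({ theme }) => styles
// callback against the active theme, then delegate the plain style object
// to Emotion's css(). The delegation step is omitted here.
function themedCss(stylesOrFactory, theme) {
  return typeof stylesOrFactory === "function"
    ? stylesOrFactory({ theme })
    : stylesOrFactory;
}

// Minimal stand-in for a MUI theme; only spacing() is modeled.
const theme = { spacing: (n) => `${8 * n}px` };

const styles = themedCss(({ theme }) => ({ margin: theme.spacing(1) }), theme);
console.log(styles); // { margin: '8px' }
```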
**Search keywords**: css, emotion, theme | new feature,package: system,customization: css | low | Minor |
2,651,571,524 | pytorch | Differentiable quadratic programming solver | ### 🚀 The feature, motivation and pitch
Many libraries, such as `qpsolvers`, provide solvers for quadratic programming using various methods from the literature.
A [quadratic programming problem](https://en.wikipedia.org/wiki/Quadratic_programming) consists of minimizing a quadratic form $\frac{1}{2}(x-a)^\top Q (x-a)$ under some linear inequality constraints $Ax\leq b$ ($Q$ is positive definite and symmetric).
Once the problem is solved, it is possible to use the [KKT conditions](https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions) (and the lagrangian multipliers) to find the gradient of the solution with respect to all the parameters of the problem.
If $\mu$ denotes the Lagrange multipliers, the Lagrangian of the problem is $\frac{1}{2}(x-a)^\top Q (x-a) + \mu^\top (Ax-b)$, and the KKT conditions give $Q(x-a)+A^\top \mu=0$ and $\mu^\top(Ax-b)=0$.
These are known to be satisfied at the solution, and we can differentiate those equations appropriately to provide a differential of $x$ with respect to $Q$, $a$, $A$, and $b$.
In my opinion, it would be very useful if Pytorch could provide such a solver that provides a differentiation graph.
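To make the KKT-based differentiation concrete, here is a hedged scalar sketch (plain Python, not a proposed PyTorch API): for the problem of minimizing $\frac{1}{2}(x-a)^2$ subject to $x \le b$, the solution is $x^* = \min(a, b)$, and differentiating the KKT conditions gives $\partial x^*/\partial a = 1$ when the constraint is inactive and $0$ when it is active.

```python
def solve(a, b):
    # Closed-form solution of the 1-D QP: min 0.5*(x - a)**2 s.t. x <= b.
    return min(a, b)

def grad_a(a, b):
    # dx*/da from the KKT conditions: 1 if the constraint is inactive, else 0.
    return 1.0 if a < b else 0.0

# Sanity-check the KKT-derived gradient against central finite differences
# at a point where the constraint is inactive (a < b).
a, b, eps = 0.3, 1.0, 1e-6
fd = (solve(a + eps, b) - solve(a - eps, b)) / (2 * eps)
assert abs(fd - grad_a(a, b)) < 1e-6
print(grad_a(a, b))  # 1.0
```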
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | module: autograd,triaged,needs research | low | Minor |
2,651,571,598 | electron | Electron 32/33 `Intl` API returns `Etc/Unknown` time zone | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.2.3
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 Home Single Language (Build 22631.4391)
### What arch are you using?
x64
### Last Known Working Electron version
30.5.1
### Expected Behavior
`Intl.DateTimeFormat().resolvedOptions().timeZone` should return "**Africa/Johannesburg**"
### Actual Behavior
`Intl.DateTimeFormat().resolvedOptions().timeZone` returns "**Etc/Unknown**"
### Testcase Gist URL
_No response_
### Additional Information
It just happens on Windows 11 Home Single Language (Build 22631.4391). On Window 11 Home/Pro works fine.
It is working fine on Electron 30.x, as soon as I update to 32 or 33 its breaks. There must be a regression.
You can test by opening dev tools and typing `Intl.DateTimeFormat().resolvedOptions().timeZone`.
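Until the regression is fixed, a hedged application-side sketch looks like the following (the `UTC` fallback is an arbitrary choice for this example, not an Electron recommendation):

```javascript
// Treat "Etc/Unknown" as a missing value and fall back to a default.
function resolveTimeZone() {
  const tz = Intl.DateTimeFormat().resolvedOptions().timeZone;
  return tz && tz !== "Etc/Unknown" ? tz : "UTC"; // app-specific fallback
}

console.log(resolveTimeZone());
```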
| platform/windows,bug :beetle:,status/reviewed,32-x-y | low | Critical |
2,651,573,616 | next.js | Truly dynamic imports with Turbopack, i.e. support for Monaco editor | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Enterprise
Available memory (MB): 65225
Available CPU cores: 32
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 15.0.4-canary.6 // Latest available version is detected (15.0.4-canary.6).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.4.5
Next.js Config:
output: export
```
### Which example does this report relate to?
_No response_
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
Trying to get the Monaco editor to work with Turbopack.
```
./node_modules/monaco-editor/esm/vs/editor/common/services/editorSimpleWorker.js:323:17
Module not found
321 | else {
322 | const url = FileAccess.asBrowserUri(`${moduleId}.js`).toString(true);
> 323 | import(`${url}`).then(onModuleCallback).catch(reject);
| ^^^^^^^^^^^^^^^^
324 | }
325 | });
326 | }
```
### Expected Behavior
I don't know if it's trying to import an empty string or what it is doing, but it fails when I do a dynamic import of the Monaco editor API module, like so:
```
const useMonaco = () => {
const [monaco, setMonaco] = useState<typeof import("monaco-editor/esm/vs/editor/editor.api")>()
useEffect(() => {
import("monaco-editor/esm/vs/editor/editor.api").then(setMonaco)
}, [])
return monaco
}
```
It's probably failing because it cannot resolve this statically, but it's not supposed to either; I don't know how to debug, troubleshoot, or investigate this myself further. All I want Turbopack to do is leave this dynamic import alone. Furthermore, this is a library, so I cannot modify the call site. I just want Turbopack to not complain about this; I know what I'm doing.
### To Reproduce
Adapt this sample from Monaco for Turbopack
https://github.com/microsoft/monaco-editor/tree/main/samples/browser-esm-webpack-small | Turbopack | low | Critical |
2,651,622,799 | pytorch | DISABLED test_comprehensive_nn_functional_glu_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_glu_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32845155069).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nn_functional_glu_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1152, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2199, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1229, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1528, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1193, in test_comprehensive
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1153, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 613, in check_model_gpu
check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 564, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 351, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1707, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2068, in _backward_impl
out = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 135, in call_func_at_runtime_with_args
out = normalize_as_list(f(*args))
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3057, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1592, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1164, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 27: SampleInput(input=Tensor[size=(3, 6, 8), device="cuda:0", dtype=torch.float16], args=(2), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=27 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nn_functional_glu_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,651,684,301 | electron | Icon missing in OS Desktop Notification when packaged in Electron 32.2.1 | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.2.1
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10/11
### What arch are you using?
x64
### Last Known Working Electron version
21.4.4
### Expected Behavior
The notification should display the icon as specified in the path when the app is packaged and running.
### Actual Behavior
I am encountering an issue in Electron version 32.2.1 related to desktop notifications on Windows. The icon displays correctly when running the application in development mode, but it fails to appear in the packaged .exe version.
**Steps to Reproduce:**
- Create a desktop notification in the Electron app that uses an .ico file for the icon.
- Run the app in development mode โ the icon displays correctly in the notification.
- Package the app (using Electron Packager or equivalent) and run the generated .exe.
- Observe the notification โ it appears, but the icon is missing.
Notification in development:

Notification once packaged:

### Testcase Gist URL
_No response_
### Additional Information
OS: Windows
Electron version: 32.2.1
Icon format: .ico
The issue is specific to the packaged version; development mode works as expected.
Would appreciate any help or insights into this issue. | platform/windows,bug :beetle:,has-repro-comment,32-x-y | low | Critical |
2,651,700,209 | opencv | Potential API changes in OpenCV 5 | Let's note potential API changes in OpenCV 5 here | feature | low | Minor |
2,651,728,722 | go | x/tools/gopls: -remote causes the first run with Helix to not work for a few seconds | #### What did you do?
Running gopls with https://helix-editor.com/ and with `-remote=auto`, I open a Go file inside a Go module for the first time without gopls already running in remote mode, and ask gopls a question very quickly, such as go-to-definition or the workspace symbol picker.
#### What did you expect to see?
It should work, even if it takes a few seconds to give an answer due to the LSP loading the module.
#### What did you see instead?
An error from Helix:
> No configured language server supports workspace symbols
This error seems to come from the editor, so it may not be a gopls bug. However, if I remove `-remote=auto` as suggested by @alandonovan, then the bug disappears - I can kill gopls, open a file and very quickly list all workspace symbols, and the UI pops up immediately without a problem, even if it takes a couple of seconds for the symbols to show up.
Presumably an editor like Helix starting the LSP causes a handshake to happen for the editor to know which features the LSP supports, so I wonder if this doesn't work too well when `-remote=auto` is used, causing the editor to think a Go LSP is not supported or present while it is still loading.
#### Build info
```
golang.org/x/tools/gopls v0.0.0-20241111214603-06a498a7a312
golang.org/x/tools/gopls@v0.0.0-20241111214603-06a498a7a312 h1:7KWJGDbH0ZlvryN3TlO+JzxJWieJHrk0mPS20KRc8uE=
github.com/BurntSushi/toml@v1.4.1-0.20240526193622-a339e1f7089c h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs=
github.com/google/go-cmp@v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/typeparams@v0.0.0-20231108232855-2478ac86f678 h1:1P7xPZEwZMoBoz0Yze5Nx2/4pxj6nw9ZqHWXqP0iRgQ=
golang.org/x/mod@v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/sync@v0.9.0 h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ=
golang.org/x/telemetry@v0.0.0-20241106142447-58a1122356f5 h1:TCDqnvbBsFapViksHcHySl/sW4+rTGNIAoJJesHRuMM=
golang.org/x/text@v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=
golang.org/x/tools@v0.27.1-0.20241111214603-06a498a7a312 h1:8k/Q1o+SUyt5050kQIaxFUZdu3rDC6XHaiASBndZKn0=
golang.org/x/vuln@v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/tools@v0.5.1 h1:4bH5o3b5ZULQ4UrBmP+63W9r7qIkqJClEA9ko5YKx+I=
mvdan.cc/gofumpt@v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU=
mvdan.cc/xurls/v2@v2.5.0 h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: devel go1.24-c96939fbed 2024-11-12 01:08:33 +0000
```
| gopls,Tools | low | Critical |
2,651,733,886 | rust | Add support for host tools on the `arm64e-apple-darwin` target | **Known Issues**:
- [ ] https://github.com/rust-lang/rust/issues/131884: Integrate `jemallocator` for the `arm64e-apple-darwin` target.
Blocked by https://github.com/tikv/jemallocator/issues/102
**Definition of Done**:
- [ ] The `arm64e-apple-darwin` target meets the requirements of the [Tier 1 Target Policy](https://doc.rust-lang.org/nightly/rustc/target-tier-policy.html#tier-1-target-policy).
See https://github.com/rust-lang/rust/issues/73628
| O-Arm,C-tracking-issue,O-apple | low | Minor |
2,651,775,616 | go | net/http: TestTransportDiscardsUnneededConns/h2 failures | ```
#!watchflakes
default <- pkg == "net/http" && test == "TestTransportDiscardsUnneededConns/h2"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731850052493175777)):
=== RUN TestTransportDiscardsUnneededConns/h2
=== PAUSE TestTransportDiscardsUnneededConns/h2
=== CONT TestTransportDiscardsUnneededConns/h2
2024/11/08 14:38:34 Error enabling Transport HTTP/2 support: protocol https already registered
clientserver_test.go:1153: 10 connections opened, 8 closed; want 9 to close
2024/11/08 14:38:45 Error enabling Transport HTTP/2 support: protocol https already registered
clientserver_test.go:211: server log: http: TLS handshake error from 127.0.0.1:59261: read tcp 127.0.0.1:59268->127.0.0.1:59261: use of closed network connection
--- FAIL: TestTransportDiscardsUnneededConns/h2 (10.64s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,651,798,023 | vscode | CLI development set up is not helpful | The CLI development setup documentation is not helpful - it is not clear which tools to install, how to build the project, or how to debug it.
https://github.com/microsoft/vscode/blob/main/cli/CONTRIBUTING.md | debt | low | Critical |
2,651,843,925 | yt-dlp | [Niconico] Comment count and real amount of comments downloaded do not match | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
In this example (https://www.nicovideo.jp/watch/sm11188599), the comment count shown both on the website and by yt-dlp with `--print comment_count` was 6421, but after downloading with `--write-sub --add-header accept-language:ja` there were only 607 entries listed in the comments.json file
[ใๅๆใซPVใใใฟใซใใใใญใ็ฅGUMI่ช๏ผ1ๅจๅนดใ [sm11188599].comments.json](https://github.com/user-attachments/files/17715488/PV.GUMI.1.sm11188599.comments.json)
Why is this happening and is yt-dlp capable of fixing it?
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-P', 'E:\\DIVINE REALM\\KIEN\\Video\\NND', '-vU', '--write-subs', '--no-download', '--add-header', 'accept-language:ja', 'https://www.nicovideo.jp/watch/sm11188599']
[debug] Encodings: locale cp932, fs utf-8, pref cp932, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.11.11.232805 from yt-dlp/yt-dlp-nightly-builds [a9f85670d] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19041-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.0.1-full_build-www.gyan.dev (setts), ffprobe 7.0.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.11.11.232805 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.11.11.232805 from yt-dlp/yt-dlp-nightly-builds)
[niconico] Extracting URL: https://www.nicovideo.jp/watch/sm11188599
[niconico] sm11188599: Downloading webpage
[niconico] sm11188599: Downloading JSON metadata
[niconico] sm11188599: Downloading m3u8 information
[niconico] sm11188599: Downloading comments
[info] sm11188599: Downloading subtitles: comments
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] sm11188599: Downloading 1 format(s): video-302+audio-aac-128kbps
Deleting existing file E:\DIVINE REALM\KIEN\Video\NND\ใๅๆใซPVใใใฟใซใใใใญใ็ฅGUMI่ช๏ผ1ๅจๅนดใ [sm11188599].comments.json
[info] Writing video subtitles to: E:\DIVINE REALM\KIEN\Video\NND\ใๅๆใซPVใใใฟใซใใใใญใ็ฅGUMI่ช๏ผ1ๅจๅนดใ [sm11188599].comments.json
```
| site-bug,triage | low | Critical |
2,651,864,429 | tailwindcss | [v4] Usage of `@tailwindcss/postcss^4.0.0-alpha.25` with `svelte-preprocess` for svelte file with style tag: problematic dependency scanning | ## Version
`@tailwindcss/postcss^4.0.0-alpha.25` & `tailwindcss^4.0.0-alpha.25`, i.e. all versions of v4 alpha from **25** to **33** (as of this writing).
## Environment
Output of `npx envinfo --system --binaries --browsers --npmPackages "{svelte,@sveltejs/*,vite,tailwindcss,@tailwindcss/postcss}"`:
```
System:
OS: Linux 6.6 Arch Linux
CPU: (4) x64 Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
Memory: 6.01 GB / 15.57 GB
Container: Yes
Shell: 3.7.1 - /usr/bin/fish
Binaries:
Node: 22.11.0 - ~/.volta/tools/image/node/22.11.0/bin/node
npm: 10.9.0 - ~/.volta/tools/image/node/22.11.0/bin/npm
pnpm: 9.12.3 - ~/.volta/bin/pnpm
Browsers:
Chromium: 130.0.6723.116
npmPackages:
@sveltejs/adapter-auto: ^3.0.0 => 3.3.1
@sveltejs/kit: ^2.0.0 => 2.8.0
@sveltejs/vite-plugin-svelte: ^4.0.0 => 4.0.0
@tailwindcss/postcss: 4.0.0-alpha.25 => 4.0.0-alpha.25
svelte: ^5.0.0 => 5.1.15
tailwindcss: 4.0.0-alpha.25 => 4.0.0-alpha.25
vite: ^5.0.3 => 5.4.11
```
## Reproduction
https://github.com/vnphanquang/sveltekit-tailwind-4-reproduction. Please see README for steps.
## Description
> [!NOTE]
> Please note that tailwind is being used as a `postcss` plugin here, reported issue is not observed when using `vite` plugin.
In the reproduction, any `svelte` file with a `style` tag will cause the following two warnings from `svelte-preprocess` during both `dev` and `build`:
```
[vite-plugin-svelte] [...truncated_path...]/*.svelte svelte.preprocess returned this file as a dependency of itself. This can be caused by an invalid configuration or importing generated code that depends on .svelte files (eg. tailwind base css)
```
There have been instances where a similar warning was observed: https://github.com/sveltejs/svelte-preprocess/issues/619, https://github.com/sveltejs/svelte-preprocess/issues/346. But I don't know exactly what is causing it now. My guess is that preprocessing `style` tags triggers parsing of the Tailwind entry CSS, which in turn requires scanning all matching source files.
---
```
[...truncated_path...]/*.svelte svelte.preprocess depends on more than 10 external files which can cause slow builds and poor DX, try to reduce them. Found: [list_of_all_source_files_except_raster_images]
```
Similarly, this is causing a lot of parsing during dev. In our real project with lots of such svelte files, the server quickly crashed with `Maximum Call Stack Size Exceeded`.
## Use case
Hi all, thanks for the good work. Tailwind 4 has been really good and I'm trying it out on some existing v3 projects with relatively complex setups. In our case we use Tailwind as a postcss plugin because we also rely on some other postcss plugins (mixins, custom properties, custom media queries). We tried using Tailwind as a vite plugin, which works well in dev, but the other postcss plugins do not apply during the production build. | v4 | low | Critical |
2,651,887,009 | godot | AnimationPlayer Inspector never displays correct keyframe data for sub-property keyframe tracks (e.g. position:x, scale:y, modulate:a, etc.) | ### Tested versions
Reproducible in:
- **v4.4.dev5.official [9e6098432]**
- v4.4.dev5.mono.official [9e6098432]
- v4.4.dev4.mono.official [36e6207bb]
- v4.4.dev4.official [36e6207bb]
- **v4.4.dev3.official [f4af8201b]**
Not Reproducible in:
- v4.4.dev2.official [97ef3c837]
- v4.4.dev1.official [28a72fa43]
- v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.4.dev5 - Windows 10.0.19045 - Multi-window, 3 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 5800X 8-Core Processor (16 threads)
### Issue description
## Expected
When setting a keyframe for a sub-property, e.g. `position:x`, its type should be `float`.
**(v4.4.dev2.official [97ef3c837])**

## Problem
The Inspector acts as though the keyframe is still `Vector2`.
This occurs regardless of whether I selected `position:x` through the 'Add Track' menu, or simply edited an existing `position` track name to include `:x`.
This problem seems to affect **all vector-type properties** (e.g. `scale`, `modulate`, etc.)
**(v4.4.dev3.official [f4af8201b])**


## Workaround
If you set a `position:x` track, drag the object to a different position in the editor, then set a new keyframe, the animation *will* play without issue. The Inspector is merely **showing** the wrong data; the functionality seems to be unaffected.
### Steps to reproduce
1. Open MRP
2. Examine the two keyframes of the AnimationPlayer in the Inspector.
3. Confirm that the Inspector displays blank `Vector2` data, instead of the correct type: `float`
(The values of the keyframes in the MRP **should** instead be: **419.0**, and **500.0**, respectively.)
### Minimal reproduction project (MRP)
[animation-player-test.zip](https://github.com/user-attachments/files/17715578/animation-player-test.zip)
| bug,topic:editor,topic:animation | low | Minor |
2,651,980,312 | vscode | Enter inserts // | 
```ts
const obsWorkspace = new FakedObservableWorkspace();
obsWorkspace.addDocument({
uri: createDocumentUri(URI.parse(`file://test.ts`).toString()), initialValue: '' });
const originalTextDoc = new StringValue(
);
const singleEdit = SingleEdit.insert(11, '3D');
``` | bug,polish,editor-autoindent | low | Major |
2,651,981,069 | vscode | Highlight all found text in DEBUG console | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
In the DEBUG console, when searching for text, the messages containing matches are shown, but the matched text itself is not highlighted, which makes it hard to find in larger log messages. I'd like to request that the matched text be highlighted.

| feature-request,debug,debug-console | medium | Critical |
2,651,981,746 | flutter | [go_router] GoRouterState.error is only populated for errorBuilder when the route is not found | ### Steps to reproduce
The `GoRouterState` object has an `error` property which I would expect to hold a `GoException` when I open a non-existing route. Currently this exception is only available in `errorBuilder` and `errorPageBuilder`, but always `null` in `onException` and `redirect`.
My use case: I protect most screens with authentication via the `redirect` callback, but I can't leave error screens unprotected, because `state.error` is always `null` and nothing else indicates that I am on an error screen. Only `buildError` receives the error, but that is too late, since `redirect` will redirect the user back before that. Sadly, this can easily be triggered by deep links.
To reproduce please run my attached code sample. As `onException` and `errorBuilder` are mutually exclusive, one of them is commented out.
I'm using `go_router` `14.4.1` with Flutter `3.24.4` on Android (but I don't think it is platform dependent; however, I don't know and don't care about web or any non-mobile platform).
### Expected results
The `GoRouterState`'s `error` property should be non-null when the matched location is not registered among the possible routes. In the example code all errors should be like the following after tapping the `Open a non-existing route` button:
> /404: GoException: no routes for location: /404
### Actual results
The `GoRouterState`'s `error` property is `null` when opening a non-existing route. In the example app all errors are like this:
> /404: No error
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() => runApp(const MyApp());
final _errorNotifier = ValueNotifier<String>('');
final _redirectErrorNotififer = ValueNotifier<String>('');
extension on GoRouterState {
String get errorMsg => '$matchedLocation: ${error ?? 'No error'}';
}
final _router = GoRouter(
routes: <RouteBase>[
GoRoute(
path: '/',
builder: (context, state) => const HomeScreen(),
),
],
redirect: (context, state) {
_redirectErrorNotififer.value = state.errorMsg;
return null;
},
onException: (context, state, router) =>
_errorNotifier.value = state.errorMsg,
// errorBuilder: (context, state) => HomeScreen(error: state.errorMsg),
);
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: _router,
);
}
}
class HomeScreen extends StatefulWidget {
final String? error;
const HomeScreen({super.key, this.error});
@override
State<HomeScreen> createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
String? _goRouterOnExceptionError;
String? _goRouterRedirectError;
late final _commonErrorLabel;
@override
void initState() {
super.initState();
_goRouterOnExceptionError = widget.error;
_commonErrorLabel = widget.error != null ? 'buildError' : 'onException';
_errorNotifier.addListener(_onExceptionErrorListener);
_redirectErrorNotififer.addListener(_redirectErrorListener);
}
@override
void dispose() {
_errorNotifier.removeListener(_onExceptionErrorListener);
_redirectErrorNotififer.removeListener(_redirectErrorListener);
super.dispose();
}
void _onExceptionErrorListener() {
setState(() {
_goRouterOnExceptionError = _errorNotifier.value;
});
}
void _redirectErrorListener() {
setState(() {
_goRouterRedirectError = _redirectErrorNotififer.value;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Home Screen')),
body: SizedBox.expand(
child: Column(
crossAxisAlignment: CrossAxisAlignment.center,
mainAxisAlignment: MainAxisAlignment.center,
children: [
..._errorMessage(
context, _commonErrorLabel, _goRouterOnExceptionError),
..._errorMessage(context, 'redirect', _goRouterRedirectError),
ElevatedButton(
onPressed: () => context.push('/404'),
child: const Text('Open a non-existing route'),
),
],
),
),
);
}
List<Widget> _errorMessage(
BuildContext context,
String label,
String? message,
) {
if (message == null) {
return const <Widget>[];
}
final theme = Theme.of(context);
final colorScheme = theme.colorScheme;
return [
Text(label, style: theme.textTheme.labelLarge),
ColoredBox(
color: colorScheme.error,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Text(
message,
style: TextStyle(color: colorScheme.onError),
),
),
),
const SizedBox(height: 15.0),
];
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.4, on macOS 15.1 24B83 darwin-arm64, locale en-HU)
โข Flutter version 3.24.4 on channel stable at /Users/macbookpro/devtools/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 603104015d (3 weeks ago), 2024-10-24 08:01:25 -0700
โข Engine revision db49896cf2
โข Dart version 3.5.4
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/macbookpro/devtools/android_sdk
โข Platform android-34, build-tools 34.0.0
โข ANDROID_SDK_ROOT = /Users/macbookpro/devtools/android_sdk
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 16B40
โข CocoaPods version 1.16.1
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2024.2)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[โ] VS Code (version 1.95.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.100.0
[โ] Connected device (4 available)
โข SM S906U1 (mobile) โข RFCT20EQM0M โข android-arm64 โข Android 14 (API 34)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 15.1 24B83 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 15.1 24B83 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 130.0.6723.117
! Error: Browsing on the local area network for Lรกszlรณโs iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Critical |
2,652,183,416 | pytorch | DistributedDataParallel with compile(..., mode="max-autotune") hangs in 2.5+ | ### ๐ Describe the bug
I have a DCNN-like network that I'm training with DDP and 2 GPUs. It always got a speedup from ```compile(net, mode="max-autotune")```. However, since PyTorch 2.5, and still in the current nightly, the compile step hangs forever. In ```nvtop``` I can see memory usage go up and a slight burst of GPU activity, then memory usage resets back to baseline; this repeats forever. Compilation works correctly if I disable ```"max-autotune"```.
Patching ```is_big_gpu(index)``` to always ```return True``` (a fix needed to get both GPUs in this machine to tune) does not change the outcome: both the original version and the patched one hang.
I need to patch PyTorch with #136332 to compile the network.
I tried ```env TORCH_COMPILE_DEBUG=1``` but most of the output that this produced seems to be empty files.
Please let me know what kind of debug output would be needed to investigate this. This is a regression compared to PyTorch 2.3 and 2.4.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4070 SUPER
GPU 1: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3900X 12-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4672.0698
CPU min MHz: 2200.0000
BogoMIPS: 7585.24
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch_optimizer==3.2.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mcarilli @eellison @penguinwu @chauhang | high priority,oncall: distributed,triaged,module: cuda graphs,oncall: pt2,pt2d-triage-nov2024 | low | Critical |
2,652,184,590 | pytorch | Optional ```clamp_k``` Argument for Handling Out-of-Range Values in ```torch.topk``` | ### ๐ The feature, motivation and pitch
Currently, the ```torch.topk``` function raises an error when the specified ```k``` value exceeds the number of elements along the selected dimension of the input tensor.
### Proposed Solution
Introduce an optional argument, ```clamp_k```, which adjusts ```k``` to the minimum of the specified ```k``` and the actual number of elements. This would prevent runtime errors and offer greater flexibility in usage. The default value would be set to ```clamp_k=False```, preserving the current behaviour of ```torch.topk```.
### Example
```
>>> torch.tensor([1., 2., 3.]).topk(5, clamp_k=True)
```
#### Current output:
```
RuntimeError: selected index k out of range
```
#### Requested output (as if ```k=3```):
```
torch.return_types.topk(
values=tensor([3., 2., 1.]),
indices=tensor([2, 1, 0]))
```
| triaged,enhancement,has workaround,module: sorting and selection | low | Critical |
2,652,210,577 | PowerToys | Multi Monitor Enhancement | ### Description of the new feature / enhancement
A product similar to https://github.com/mgth/LittleBigMouse. It is suggested to add a tool that not only supports mouse movement across monitors according to their actual physical size, but can also unify the size of windows when different DPI scaling is set for multiple displays.
### Scenario when this would be used?
This feature is very practical when using multiple monitors.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,652,239,118 | svelte | <svelte:element> hydration is broken with SvelteKit | ### Describe the bug
I'm not sure if this should go in the Svelte or the Kit repo, but since it appeared with Svelte 5, I'm posting it here.
`<svelte:element>` is not properly hydrated when using SSR with SvelteKit.
With the following code, the page should render a `button` once hydrated:
- it does with Svelte 4
- it does not with Svelte 5
```svelte
<script>
import { browser } from '$app/environment';
$: t = browser ? 'button' : 'div';
$: console.log('T', t);
</script>
<svelte:element this={t}>{t}</svelte:element>
```
### Reproduction
https://stackblitz.com/edit/sveltejs-kit-template-default-qcekrk?file=package.json,src%2Froutes%2F%2Bpage.svelte
### Logs
_No response_
### System Info
```shell
System:
OS: Linux 5.0 undefined
CPU: (2) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 0 Bytes / 0 Bytes
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
svelte: ^5.1.15 => 5.1.15
```
### Severity
blocking an upgrade | documentation | low | Critical |
2,652,326,722 | langchain | AzureMLChatOnlineEndpoint does NOT support HumanMessage with list of dict content | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json
import os
from langchain_core.messages import HumanMessage
from langchain_community.chat_models.azureml_endpoint import (
AzureMLChatOnlineEndpoint,
CustomOpenAIChatContentFormatter,
AzureMLEndpointApiType
)
from langchain_groq import ChatGroq
# open the .config.json file
config = json.load(open(os.path.expanduser("../config.json")))
timeout = 60 * 5
# llm = AzureMLChatOnlineEndpoint(
# endpoint_url=config["models"][0]["url"],
# endpoint_api_type=AzureMLEndpointApiType.serverless,
# endpoint_api_key=config["models"][0]["key"],
# content_formatter=CustomOpenAIChatContentFormatter(),
# timeout=timeout
# )
llm = ChatGroq(
model="llama-3.1-8b-instant", # "llama-3.2-11b-vision-preview",
api_key=config["models"][1]["key"],
timeout=timeout
)
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
],
)
response = llm.invoke([message])
print(response.content)
```
### Error Message and Stack Trace (if applicable)
If the LLM is the Groq one, no error is raised.
If the LLM is the Azure one, the error is the following:
```
"C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Scripts\python.exe" C:\Users\...\git\MultiModelPlayground\mmp\MultiModelPlayground\test_folder\multimodal_azure_ai.py
Traceback (most recent call last):
File "C:\Users\...\git\MultiModelPlayground\mmp\MultiModelPlayground\test_folder\multimodal_azure_ai.py", line 214, in <module>
response = llm.invoke([message])
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 286, in invoke
self.generate_prompt(
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 643, in generate
raise e
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 633, in generate
self._generate_with_cache(
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\chat_models\azureml_endpoint.py", line 274, in _generate
request_payload = self.content_formatter.format_messages_request_payload(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\chat_models\azureml_endpoint.py", line 105, in format_messages_request_payload
chat_messages = [
^
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\chat_models\azureml_endpoint.py", line 106, in <listcomp>
CustomOpenAIChatContentFormatter._convert_message_to_dict(message)
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\chat_models\azureml_endpoint.py", line 65, in _convert_message_to_dict
"content": ContentFormatterBase.escape_special_characters(content),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\llms\azureml_endpoint.py", line 136, in escape_special_characters
prompt = prompt.replace(escape_sequence, escaped_sequence)
^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'replace'
Process finished with exit code 1
```
And the method in which the error is actually raised is the following:
**C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\llms\azureml_endpoint.py**
```python
@staticmethod
def escape_special_characters(prompt: str) -> str:
"""Escapes any special characters in `prompt`"""
escape_map = {
"\\": "\\\\",
'"': '\\"',
"\b": "\\b",
"\f": "\\f",
"\n": "\\n",
"\r": "\\r",
"\t": "\\t",
}
# Replace each occurrence of the specified characters with escaped versions
for escape_sequence, escaped_sequence in escape_map.items():
prompt = prompt.replace(escape_sequence, escaped_sequence)
return prompt
```
called inside the class:
**C:\Users\...\git\MultiModelPlayground\mmp - env - lc3\Lib\site-packages\langchain_community\chat_models\azureml_endpoint.py**
```python
class CustomOpenAIChatContentFormatter(ContentFormatterBase):
"""Chat Content formatter for models with OpenAI like API scheme."""
SUPPORTED_ROLES: List[str] = ["user", "assistant", "system"]
@staticmethod
def _convert_message_to_dict(message: BaseMessage) -> Dict:
"""Converts a message to a dict according to a role"""
content = cast(str, message.content)
if isinstance(message, HumanMessage):
return {
"role": "user",
"content": ContentFormatterBase.escape_special_characters(content),
}
elif isinstance(message, AIMessage):
return {
"role": "assistant",
"content": ContentFormatterBase.escape_special_characters(content),
}
elif isinstance(message, SystemMessage):
return {
"role": "system",
"content": ContentFormatterBase.escape_special_characters(content),
}
elif (
isinstance(message, ChatMessage)
and message.role in CustomOpenAIChatContentFormatter.SUPPORTED_ROLES
):
return {
"role": message.role,
"content": ContentFormatterBase.escape_special_characters(content),
}
else:
supported = ",".join(
[role for role in CustomOpenAIChatContentFormatter.SUPPORTED_ROLES]
)
raise ValueError(
f"""Received unsupported role.
Supported roles for the LLaMa Foundation Model: {supported}"""
)
```
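The failure is easy to reproduce in isolation — the escape loop assumes `prompt` is a `str`, so any list-valued `content` hits the same `AttributeError`. A standalone sketch (no langchain imports; the function body mirrors the library code quoted above):

```python
def escape_special_characters(prompt: str) -> str:
    """Same escape logic as the library method quoted above."""
    escape_map = {
        "\\": "\\\\",
        '"': '\\"',
        "\b": "\\b",
        "\f": "\\f",
        "\n": "\\n",
        "\r": "\\r",
        "\t": "\\t",
    }
    for escape_sequence, escaped_sequence in escape_map.items():
        prompt = prompt.replace(escape_sequence, escaped_sequence)
    return prompt

# Plain string content works:
print(escape_special_characters('a\n"b"'))  # a\n\"b\"

# Multimodal (list-of-dict) content raises the reported error:
try:
    escape_special_characters([{"type": "text", "text": "hi"}])
except AttributeError as exc:
    print(exc)  # 'list' object has no attribute 'replace'
```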
### Description
I am trying to use the `HumanMessage` class, which supports `content` as a list of dicts
```python
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
],
)
```
as input for an AzureMLChatOnlineEndpoint LLM
```python
llm = AzureMLChatOnlineEndpoint(
endpoint_url=config["models"][0]["url"],
endpoint_api_type=AzureMLEndpointApiType.serverless,
endpoint_api_key=config["models"][0]["key"],
content_formatter=CustomOpenAIChatContentFormatter(),
timeout=timeout
)
response = llm.invoke([message])
```
but the `CustomOpenAIChatContentFormatter` class expects the content to be a string (as the error above shows): it calls `replace` on a value that can be a list of dicts, raising the `AttributeError` mentioned above.
I worked around this by writing my own `ContentFormatter` that drops the calls to
`ContentFormatterBase.escape_special_characters(content)`,
but it would be nice for `AzureMLChatOnlineEndpoint`/`CustomOpenAIChatContentFormatter` to natively support the case where the `HumanMessage` content is a list of dicts.
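One possible fix sketch — not the library's API; `escape_content` is a hypothetical helper I'm proposing — would be to escape only string content, and for the list-of-dict form escape just the nested `"text"` fields while passing other parts (e.g. image blocks) through untouched:

```python
from typing import Any

def escape_content(content: Any) -> Any:
    """Escape special characters in string content; for the multimodal
    list-of-dict form, escape only the nested "text" fields and leave
    other parts (e.g. image blocks) as they are."""
    escape_map = {"\\": "\\\\", '"': '\\"', "\b": "\\b",
                  "\f": "\\f", "\n": "\\n", "\r": "\\r", "\t": "\\t"}

    def escape_str(s: str) -> str:
        for seq, esc in escape_map.items():
            s = s.replace(seq, esc)
        return s

    if isinstance(content, str):
        return escape_str(content)
    if isinstance(content, list):
        return [
            {**part, "text": escape_str(part["text"])}
            if isinstance(part, dict) and isinstance(part.get("text"), str)
            else part
            for part in content
        ]
    return content

print(escape_content([{"type": "text", "text": 'describe "this"'}]))
```

`_convert_message_to_dict` could then call something like `escape_content(message.content)` regardless of the content shape.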
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.3.5
> langsmith: 0.1.142
> langchain_groq: 0.2.1
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> groq: 0.11.0
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> typing-extensions: 4.12.2
| 🤖:bug | low | Critical |
2,652,335,784 | ui | [bug]: fetch() URL is invalid | ### Describe the bug
I am getting the error below when initializing shadcn or adding a new component.
I tried with bun (bunx) and pnpm too.
```
✔ Preflight checks.
✔ Verifying framework. Found Next.js.
✔ Validating Tailwind CSS.
✔ Validating import alias.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
fetch() URL is invalid
```
### Affected component/components
CLI
### How to reproduce
In a new project:
1. Initialize shadcn with the following command : `bunx --bun shadcn@latest init`
In this case I get the following error:
```
⠧ Creating a new Next.js project. This may take a few minutes.
Something went wrong creating a new Next.js project. Please try again.
```

When adding a new component:
1. Run the add component command : `bunx --bun shadcn@latest add context-menu`
Here I am getting the following error:
```
✔ Checking registry.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
fetch() URL is invalid
```

### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Linux Mint 21.3 Virginia
Cinnamon 6.0.4
Kernel: 5.15.0-125-generic x86_64
PM / Runtime - Bun 1.1.34
PNPM - 9.12.3
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,652,363,306 | opencv | 4.10.0 - visionOS compile errors with VideoIO | ### System Information
OpenCV 4.10.0
GitHub Actions macOS latest
https://github.com/actions/runner-images/blob/main/images/macos/macos-14-Readme.md
### Detailed description
```
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:40:31: error: 'AVCaptureVideoPreviewLayer' is unavailable: not available on visionOS
40 | @property (nonatomic, strong) AVCaptureVideoPreviewLayer* captureVideoPreviewLayer;
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureVideoPreviewLayer.h:33:12: note: 'AVCaptureVideoPreviewLayer' has been explicitly marked unavailable here
33 | @interface AVCaptureVideoPreviewLayer : CALayer
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:90:59: error: 'UIDeviceOrientationDidChangeNotification' is unavailable: not available on visionOS
90 | name:UIDeviceOrientationDidChangeNotification
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:83:39: note: 'UIDeviceOrientationDidChangeNotification' has been explicitly marked unavailable here
83 | UIKIT_EXTERN NSNotificationName const UIDeviceOrientationDidChangeNotification API_UNAVAILABLE(tvos, visionos, watchos) NS_SWIFT_NONISOLATED;
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:92:35: error: 'beginGeneratingDeviceOrientationNotifications' is unavailable: not available on visionOS
92 | [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:49:1: note: 'beginGeneratingDeviceOrientationNotifications' has been explicitly marked unavailable here
49 | - (void)beginGeneratingDeviceOrientationNotifications API_UNAVAILABLE(tvos, visionos); // nestable
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:93:62: error: 'orientation' is unavailable: not available on visionOS
93 | currentDeviceOrientation = [[UIDevice currentDevice] orientation];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:44:51: note: property 'orientation' is declared unavailable here
44 | @property(nonatomic,readonly) UIDeviceOrientation orientation API_UNAVAILABLE(tvos, visionos); // return current device orientation. this will return UIDeviceOrientationUnknown unless device orientation notifications are being generated.
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:44:51: note: 'orientation' has been explicitly marked unavailable here
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:97:74: error: 'UIImagePickerControllerSourceTypeCamera' is unavailable: not available on visionOS
97 | cameraAvailable = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIImagePickerController.h:20:5: note: 'UIImagePickerControllerSourceTypeCamera' has been explicitly marked unavailable here
20 | UIImagePickerControllerSourceTypeCamera API_UNAVAILABLE(visionos),
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:104:49: error: 'AVCaptureVideoOrientationLandscapeLeft' is unavailable: not available on visionOS
104 | self.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureSession.h:140:28: note: 'AVCaptureVideoOrientation' has been explicitly marked unavailable here
140 | typedef NS_ENUM(NSInteger, AVCaptureVideoOrientation) {
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:106:46: error: 'AVCaptureSessionPreset352x288' is unavailable: not available on visionOS
106 | self.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureSessionPreset.h:81:41: note: 'AVCaptureSessionPreset352x288' has been explicitly marked unavailable here
81 | AVF_EXPORT AVCaptureSessionPreset const AVCaptureSessionPreset352x288 API_AVAILABLE(macos(10.7), ios(5.0), macCatalyst(14.0), tvos(17.0)) API_UNAVAILABLE(visionos) API_UNAVAILABLE(watchos);
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:123:59: error: 'UIDeviceOrientationDidChangeNotification' is unavailable: not available on visionOS
123 | name:UIDeviceOrientationDidChangeNotification
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:83:39: note: 'UIDeviceOrientationDidChangeNotification' has been explicitly marked unavailable here
83 | UIKIT_EXTERN NSNotificationName const UIDeviceOrientationDidChangeNotification API_UNAVAILABLE(tvos, visionos, watchos) NS_SWIFT_NONISOLATED;
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:125:35: error: 'beginGeneratingDeviceOrientationNotifications' is unavailable: not available on visionOS
125 | [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:49:1: note: 'beginGeneratingDeviceOrientationNotifications' has been explicitly marked unavailable here
49 | - (void)beginGeneratingDeviceOrientationNotifications API_UNAVAILABLE(tvos, visionos); // nestable
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:126:62: error: 'orientation' is unavailable: not available on visionOS
126 | currentDeviceOrientation = [[UIDevice currentDevice] orientation];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:44:51: note: property 'orientation' is declared unavailable here
44 | @property(nonatomic,readonly) UIDeviceOrientation orientation API_UNAVAILABLE(tvos, visionos); // return current device orientation. this will return UIDeviceOrientationUnknown unless device orientation notifications are being generated.
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:44:51: note: 'orientation' has been explicitly marked unavailable here
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:130:74: error: 'UIImagePickerControllerSourceTypeCamera' is unavailable: not available on visionOS
130 | cameraAvailable = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIImagePickerController.h:20:5: note: 'UIImagePickerControllerSourceTypeCamera' has been explicitly marked unavailable here
20 | UIImagePickerControllerSourceTypeCamera API_UNAVAILABLE(visionos),
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:137:49: error: 'AVCaptureVideoOrientationLandscapeLeft' is unavailable: not available on visionOS
137 | self.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureSession.h:140:28: note: 'AVCaptureVideoOrientation' has been explicitly marked unavailable here
140 | typedef NS_ENUM(NSInteger, AVCaptureVideoOrientation) {
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:139:46: error: 'AVCaptureSessionPreset640x480' is unavailable: not available on visionOS
139 | self.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureSessionPreset.h:91:41: note: 'AVCaptureSessionPreset640x480' has been explicitly marked unavailable here
91 | AVF_EXPORT AVCaptureSessionPreset const AVCaptureSessionPreset640x480 API_AVAILABLE(macos(10.7), ios(4.0), macCatalyst(14.0), tvos(17.0)) API_UNAVAILABLE(visionos) API_UNAVAILABLE(watchos);
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:152:31: error: 'endGeneratingDeviceOrientationNotifications' is unavailable: not available on visionOS
152 | [[UIDevice currentDevice] endGeneratingDeviceOrientationNotifications];
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:50:1: note: 'endGeneratingDeviceOrientationNotifications' has been explicitly marked unavailable here
50 | - (void)endGeneratingDeviceOrientationNotifications API_UNAVAILABLE(tvos, visionos);
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:241:64: error: 'orientation' is unavailable: not available on visionOS
241 | UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h:44:51: note: 'orientation' has been explicitly marked unavailable here
44 | @property(nonatomic,readonly) UIDeviceOrientation orientation API_UNAVAILABLE(tvos, visionos); // return current device orientation. this will return UIDeviceOrientationUnknown unless device orientation notifications are being generated.
| ^
/Users/runner/work/apothecary/apothecary/apothecary/build/opencv/modules/videoio/src/cap_ios_abstract_camera.mm:270:30: error: 'canSetSessionPreset:' is unavailable: not available on visionOS
270 | if ([self.captureSession canSetSessionPreset:self.defaultAVCaptureSessionPreset]) {
| ^
/Applications/Xcode_16.app/Contents/Developer/Platforms/XROS.platform/Developer/SDKs/XROS2.0.sdk/System/Library/Frameworks/AVFoundation.framework/Headers/AVCaptureSession.h:184:1: note: 'canSetSessionPreset:' has been explicitly marked unavailable here
184 | - (BOOL)canSetSessionPreset:(AVCaptureSessionPreset)preset API_UNAVAILABLE(visionos);
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
12 warnings and 20 errors generated.
make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_ios_abstract_camera.mm.o] Error 1
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
make: *** [all] Error 2
```
### Steps to reproduce
Building with VideoIO
```
mkdir -p "build_${TYPE}_${PLATFORM}"
cd "build_${TYPE}_${PLATFORM}"
rm -f CMakeCache.txt || true
CORE_DEFS="
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_STANDARD=${C_STANDARD} \
-DCMAKE_CXX_STANDARD=${CPP_STANDARD} \
-DCMAKE_CXX_STANDARD_REQUIRED=ON \
-DCMAKE_CXX_EXTENSIONS=OFF \
-DBUILD_SHARED_LIBS=OFF \
-DCMAKE_INSTALL_PREFIX=Release \
-DCMAKE_INCLUDE_OUTPUT_DIRECTORY=include \
-DCMAKE_INSTALL_INCLUDEDIR=include \
-DZLIB_ROOT=${ZLIB_ROOT} \
-DZLIB_LIBRARY=${ZLIB_LIBRARY} \
-DZLIB_INCLUDE_DIRS=${ZLIB_INCLUDE_DIR} \
-DPNG_ROOT=${LIBPNG_ROOT} \
-DPNG_PNG_INCLUDE_DIR=${LIBPNG_INCLUDE_DIR} \
-DPNG_LIBRARY=${LIBPNG_LIBRARY}"
DEFS="
-DBUILD_DOCS=OFF \
-DENABLE_BUILD_HARDENING=ON \
-DBUILD_EXAMPLES=OFF \
-DBUILD_FAT_JAVA_LIB=OFF \
-DBUILD_JASPER=OFF \
-DBUILD_PACKAGE=OFF \
-DBUILD_opencv_java=OFF \
-DBUILD_opencv_python=OFF \
-DBUILD_opencv_python2=OFF \
-DBUILD_opencv_python3=OFF \
-DBUILD_opencv_apps=OFF \
-DBUILD_opencv_highgui=ON \
-DBUILD_opencv_imgcodecs=ON \
-DBUILD_opencv_stitching=ON \
-DBUILD_opencv_calib3d=ON \
-DBUILD_opencv_objdetect=ON \
-DOPENCV_ENABLE_NONFREE=OFF \
-DWITH_PNG=ON \
-DBUILD_PNG=OFF \
-DWITH_1394=OFF \
-DWITH_IMGCODEC_HDR=ON \
-DWITH_CARBON=OFF \
-DWITH_JPEG=OFF \
-DWITH_TIFF=ON \
-DWITH_FFMPEG=ON \
-DWITH_QUIRC=ON \
-DWITH_GIGEAPI=OFF \
-DBUILD_OBJC=ON \
-DWITH_CUDA=OFF \
-DWITH_METAL=ON
-DWITH_CUFFT=OFF \
-DWITH_JASPER=OFF \
-DWITH_LIBV4L=OFF \
-DWITH_IMAGEIO=OFF \
-DWITH_IPP=OFF \
-DWITH_OPENNI=OFF \
-DWITH_OPENNI2=OFF \
-DWITH_QT=OFF \
-DWITH_QUICKTIME=OFF \
-DWITH_V4L=OFF \
-DWITH_PVAPI=OFF \
-DWITH_OPENEXR=OFF \
-DWITH_EIGEN=ON \
-DBUILD_TESTS=OFF \
-DWITH_LAPACK=OFF \
-DWITH_WEBP=OFF \
-DWITH_GPHOTO2=OFF \
-DWITH_VTK=OFF \
-DWITH_CAP_IOS=ON \
-DWITH_WEBP=ON \
-DWITH_GTK=OFF \
-DWITH_GTK_2_X=OFF \
-DWITH_MATLAB=OFF \
-DWITH_OPENVX=ON \
-DWITH_ADE=OFF \
-DWITH_TBB=OFF \
-DWITH_OPENGL=OFF \
-DWITH_GSTREAMER=OFF \
-DVIDEOIO_PLUGIN_LIST=gstreamer \
-DWITH_IPP=OFF \
-DWITH_IPP_A=OFF \
-DBUILD_ZLIB=OFF \
-DWITH_ITT=OFF "
if [[ "$ARCH" =~ ^(arm64|SIM_arm64|arm64_32)$ ]]; then
EXTRA_DEFS="-DCV_ENABLE_INTRINSICS=OFF -DWITH_CAROTENE=OFF"
else
EXTRA_DEFS="-DCV_ENABLE_INTRINSICS=ON "
fi
if [[ "$TYPE" =~ ^(tvos|catos|xros)$ ]]; then
EXTRA_DEFS="$EXTRA_DEFS -DBUILD_opencv_videoio=OFF -DBUILD_opencv_videostab=OFF"
else
EXTRA_DEFS="-DBUILD_opencv_videoio=ON -DBUILD_opencv_videostab=ON"
fi
cmake .. ${CORE_DEFS} ${DEFS} ${EXTRA_DEFS} \
-DCMAKE_PREFIX_PATH="${LIBS_ROOT}" \
-DCMAKE_TOOLCHAIN_FILE=$APOTHECARY_DIR/toolchains/ios.toolchain.cmake \
-DPLATFORM=$PLATFORM \
-DENABLE_BITCODE=OFF \
-DENABLE_ARC=ON \
-DDEPLOYMENT_TARGET=${MIN_SDK_VER} \
-DENABLE_VISIBILITY=OFF \
-DCMAKE_POSITION_INDEPENDENT_CODE=ON \
-DENABLE_FAST_MATH=OFF \
-DCMAKE_EXE_LINKER_FLAGS="-framework Foundation -framework AVFoundation -framework CoreFoundation -framework CoreVideo" \
-DCMAKE_CXX_FLAGS="-fvisibility-inlines-hidden -stdlib=libc++ -fPIC -Wno-implicit-function-declaration -DUSE_PTHREADS=1 ${FLAG_RELEASE}" \
-DCMAKE_C_FLAGS="-fvisibility-inlines-hidden -stdlib=libc++ -fPIC -Wno-implicit-function-declaration -DUSE_PTHREADS=1 ${FLAG_RELEASE}" \
-DENABLE_STRICT_TRY_COMPILE=ON \
-DCMAKE_VERBOSE_MAKEFILE=${VERBOSE_MAKEFILE}
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | category: build/install,platform: ios/osx | low | Critical |
2,652,446,982 | deno | deno task doesn't handle signals correctly (repro comparison with npm run attached) | Version: Deno 2.0.6
`deno task` doesn't handle signals properly (but npm scripts do). I created a simple repro for this issue here:
https://github.com/mo/deno-tasks-signal-issue-repro
`./docker-compose.sh` and `npm run docker-compose` both exit immediately when CTRL-C is pressed twice.
However, "deno task docker-compose" will instead exit deno but leave docker compose running for the full 10sec grace period while printing the docker compose progress spinner onto my shell prompt.
Deno task should just work like npm run instead.
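For reference, the npm-run-like behavior this report asks for — relay the terminal's SIGINT/SIGTERM to the whole child process group and then wait for it — can be sketched in Python (an illustration of the expected semantics on a POSIX system, not how deno or npm actually implement it):

```python
import os
import signal
import subprocess

def run_task(cmd):
    """Run cmd in its own process group, forwarding SIGINT/SIGTERM to the
    whole group so descendants (e.g. docker compose) receive them too."""
    proc = subprocess.Popen(cmd, start_new_session=True)

    def forward(signum, _frame):
        # Relay the signal to the child's process group instead of exiting
        # ourselves and orphaning the child.
        os.killpg(os.getpgid(proc.pid), signum)

    signal.signal(signal.SIGINT, forward)
    signal.signal(signal.SIGTERM, forward)
    return proc.wait()

# e.g. exit_code = run_task(["docker", "compose", "up"])
```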
| needs info,task runner | low | Minor |
2,652,448,715 | pytorch | `torch.export` ViT+flex attention: `Attempting to use FunctionalTensor on its own` | ### ๐ Describe the bug
Exporting a ViT with flex attention, I got this error: https://github.com/pytorch/pytorch/issues/137759#issuecomment-2470595683
As noted in that thread, this seems to be a distinct issue, so I've opened this one.
### Versions
nightly
cc @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | oncall: pt2,module: higher order operators,oncall: export,module: pt2-dispatcher,module: flex attention | medium | Critical |