id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,507,459,797 | yt-dlp | `--cookies-from-browser`: WARNING: failed to decrypt with DPAPI / ERROR: 'NoneType' object has no attribute 'decode' | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
This happens on all sites.
After removing `--cookies-from-browser chrome`, the error does not occur.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU -f 248+251 --proxy http://127.0.0.1:10809 --cookies-from-browser chrome --mark-watched https://www.youtube.com/watch?v=v9lMa4BGAuA
[debug] Command-line config: ['-vU', '-f', '248+251', '--proxy', 'http://127.0.0.1:10809', '--cookies-from-browser', 'chrome', '--mark-watched', 'https://www.youtube.com/watch?v=v9lMa4BGAuA']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.26120-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2023-07-19-git-efa6cec759-full_build-www.gyan.dev (setts), ffprobe 2023-07-19-git-efa6cec759-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {'all': 'http://127.0.0.1:10809'}
Extracting cookies from chrome
[Cookies] Searching for "Cookies": 1 files searched
[debug] Extracting cookies from: "C:\Users\biges\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies"
[debug] Found local state file at "C:\Users\biges\AppData\Local\Google\Chrome\User Data\Local State"
[Cookies] Loading cookie 1842/ 2088WARNING: failed to decrypt with DPAPI
Traceback (most recent call last):
File "yt_dlp\__main__.py", line 17, in <module>
File "yt_dlp\__init__.py", line 1081, in main
File "yt_dlp\__init__.py", line 979, in _real_main
File "yt_dlp\YoutubeDL.py", line 720, in __init__
File "yt_dlp\YoutubeDL.py", line 4070, in print_debug_header
File "functools.py", line 967, in __get__
File "yt_dlp\YoutubeDL.py", line 4245, in _request_director
File "yt_dlp\YoutubeDL.py", line 4220, in build_request_director
File "functools.py", line 967, in __get__
File "yt_dlp\YoutubeDL.py", line 4116, in cookiejar
File "yt_dlp\cookies.py", line 94, in load_cookies
File "yt_dlp\cookies.py", line 115, in extract_cookies_from_browser
File "yt_dlp\cookies.py", line 315, in _extract_chrome_cookies
File "yt_dlp\cookies.py", line 350, in _process_chrome_cookie
File "yt_dlp\cookies.py", line 525, in decrypt
AttributeError: 'NoneType' object has no attribute 'decode'
[17148] Failed to execute script '__main__' due to unhandled exception!
```
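The crash is consistent with a decryption helper returning `None` on DPAPI failure while the caller calls `.decode()` unconditionally. A minimal sketch of that failure mode (hypothetical code, not yt-dlp's actual implementation):

```python
# Hypothetical stand-in for a CryptUnprotectData call that fails and yields None
def dpapi_decrypt(encrypted_value):
    return None

def decrypt_cookie_unsafe(encrypted_value):
    # Crashes when decryption failed, matching the reported traceback
    return dpapi_decrypt(encrypted_value).decode("utf-8")

def decrypt_cookie_guarded(encrypted_value):
    plaintext = dpapi_decrypt(encrypted_value)
    if plaintext is None:
        return None  # caller can warn and skip the cookie instead of crashing
    return plaintext.decode("utf-8")

try:
    decrypt_cookie_unsafe(b"\x01\x02")
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'decode'

print(decrypt_cookie_guarded(b"\x01\x02"))  # None
```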
| bug,external issue,core:cookies | high | Critical |
2,507,460,026 | vscode | Chat - agent hover alignment issues | * Disable wrapping for the agent name + text ellipsis in case it is too long

| bug,chat | low | Major |
2,507,473,314 | godot | Shift + ui_accept does not work with pressed signal on button | ### Tested versions
Godot v4.4.dev1
Godot v4.0.stable
### System information
Fedora Linux 40 (KDE Plasma) - Wayland - Vulkan (Forward+)
### Issue description
Shift + ui_accept (same with Ctrl, Alt) does not work with the pressed signal on a button when using this code:
_This code works if you switch KEY_SHIFT to KEY_A_
```gdscript
func _on_pressed() -> void:
	if Input.is_key_pressed(KEY_SHIFT):
		print("Shift was pressed")
```
This works in Godot 3
### Steps to reproduce
1. Connect the pressed signal to a script
2. Use the code above
3. Grab button focus
4. Press Shift + ui_accept
5. Nothing is printed
### Minimal reproduction project (MRP)
[shift-and-accept-in-pressed.zip](https://github.com/user-attachments/files/16888279/shift-and-accept-in-pressed.zip)
| discussion,topic:input,topic:gui | low | Minor |
2,507,530,766 | ant-design | Tooltips in the inline menu doesn't have a consistent design in collapsible state | ### What problem does this feature solve?
When the inline menu is collapsed, the tooltip for a single item looks very different from the popup used to show a menu item's children. Ideally the design should be the same; at the moment we get a black-background tooltip for a single item and a white-background popup for an item with one or more children.
The vertical menu behaves much better and its tooltips are more consistent, but it is not collapsible and cannot expand children in the list.
This would fix a misleading inconsistency for users, where the same menu can show two totally different tooltips.
### What does the proposed API look like?
I think components are already created and present on the library. It's just a matter of exposing the right components at the right place to ensure consistency on what the end user is going to see on screen.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Minor |
2,507,532,323 | PowerToys | [Workspaces] Pair window positioning to FancyZones | ### Description of the new feature / enhancement
- Add the option to assign a FancyZone layout and zone for each window after capture
- If no zone is assigned, create an internal FancyZone used only by Workspaces to place the windows with no assignment in the same positions and sizes they were at the moment of capture
- The positioning capture should be a toggle, so if one doesn't care about positioning, one doesn't need to bother with it
### Scenario when this would be used?
Reopening all programs in the positions we are used to working with
### Supporting information
_No response_ | Needs-Triage,Product-Workspaces | low | Minor |
2,507,569,661 | PowerToys | [Workspaces] MS Teams was captured, but doesn't launch | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
- Capture MS Teams
- Launch MS Teams
### ✔️ Expected Behavior
It should launch MS Teams
### ❌ Actual Behavior
It does not launch MS Teams; a yellow warning icon is shown instead

### Other Software
- Windows 10 | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Workspaces | low | Major |
2,507,574,396 | PowerToys | [Workspaces] Workspaces Editor do not snap to FancyZones | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Try to snap the Workspaces Editor window into a FancyZones zone
### ✔️ Expected Behavior
It should snap
### ❌ Actual Behavior
It doesn't snap, even though it's a resizable window
### Other Software
- Windows 10 | Issue-Bug,Resolution-Fix Committed,Product-Workspaces | low | Minor |
2,507,643,489 | PowerToys | The Workspaces feature captures the opened tabs but the contents like tabs in browsers and path in command prompts are missing when it is launched | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
1. Open 5 tabs in the Chrome browser and two command prompts at a specific path.
2. Capture them with Workspaces, then launch the workspace.
### ✔️ Expected Behavior
It should reopen the command prompts and browser tabs as they were at capture time
### ❌ Actual Behavior
It just opens the applications; the tabs and paths are gone
### Other Software
none | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,507,644,720 | PowerToys | Gui applications in wsl2 have misaligned context menus with fancyzones enabled | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
1. Set up FancyZones
2. Install WSL2 Ubuntu
3. Install GitKraken through the instructions [here](https://help.gitkraken.com/gitkraken-desktop/windows-subsystem-for-linux/) to ensure all the required dependencies get deployed. Running this in /tmp prevents some apt-related issues.
4. Launch GitKraken and observe the window appear
5. Open a repository with it and right-click a commit, or open any of the drop-down menus at the top of the application
6. Observe that the context menus are visually misaligned from the cursor and difficult to use
7. Add msrdc.exe to the exclude list for FancyZones (or exit PowerToys)
8. Observe the correct behavior for context menus
There's a related issue open for WSLg with video examples etc. (https://github.com/microsoft/wslg/issues/1116), but I opened a bug ticket here because it seems this may have a fix or workaround in FancyZones itself.
### ✔️ Expected Behavior
Context menus work as expected
### ❌ Actual Behavior
Context menus are misaligned with the cursor and often unusable

### Other Software
Windows Subsystem For Linux 2 w. Ubuntu
Remote desktop client | Issue-Bug,Needs-Triage | low | Critical |
2,507,647,788 | PowerToys | "Show this window on all desktops" | ### Description of the new feature / enhancement
For Workspaces, it'd be great to have an option to have a particular window show on all desktops.
### Scenario when this would be used?
This would be used when launching a workspace
### Supporting information
_No response_ | Needs-Triage,Product-Workspaces | low | Minor |
2,507,711,365 | pytorch | Incorrect calculation of standard deviation when using complex numbers on MPS backend | ### 🐛 Describe the bug
When using the MPS backend for generating samples of a complex normal distribution (`torch.randn`), the standard deviation is calculated incorrectly.
This leads to two issues:
1. The generated samples of the normal distribution have twice the standard deviation they should have according to [the documentation of torch.randn](https://pytorch.org/docs/stable/generated/torch.randn.html), and
2. When calculating the standard deviation of a number of samples, the results differ completely depending on whether the calculation is carried out on the CPU or MPS
This example demonstrates the issues:
```python
import torch
# Generate samples from a normal distribution
N = 10**5
standard_deviation = 1
samples_cpu = torch.randn(N, dtype=torch.complex64, device='cpu') * standard_deviation
samples_mps = torch.randn(N, dtype=torch.complex64, device='mps') * standard_deviation
# Generate samples separately using 1. CPU, 2. MPS, then calculate the standard deviation
print( 'Real part' )
print( 'CPU: std = %.4f' % samples_cpu.real.std() )
print( 'MPS: std = %.4f' % samples_mps.real.std() )
print( '\nImaginary part' )
print( 'CPU: std = %.4f' % samples_cpu.imag.std() )
print( 'MPS: std = %.4f' % samples_mps.imag.std() )
print( '\nComplex standard deviation' )
print( 'CPU: std = %.4f' % samples_cpu.std() )
print( 'MPS: std = %.4f + %.4fi' % (samples_mps.std().real, samples_mps.std().imag) )
# Use the CPU generated samples but calculate std 1. using cpu, 2. using mps
print( '\nUsing only the CPU-generated samples:' )
print( 'CPU: std = %.4f' % samples_cpu.std() )
print( 'MPS: std = %.4f + %.4fi' % (samples_cpu.to('mps').std().real, samples_cpu.to('mps').std().imag) )
```
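Per the `torch.randn` documentation, a complex dtype draws from CN(0, 1), so the real and imaginary parts should each have standard deviation 1/√2 ≈ 0.7071 and the complex standard deviation should be 1. A stdlib-only sketch of the expected values (this emulates the sampling; it does not use torch, and the numbers are approximate for finite N):

```python
import math
import random
import statistics

random.seed(0)
N = 100_000

# CN(0, 1): real and imaginary parts are independent N(0, 1/2)
real = [random.gauss(0, 1) / math.sqrt(2) for _ in range(N)]
imag = [random.gauss(0, 1) / math.sqrt(2) for _ in range(N)]

std_real = statistics.pstdev(real)
std_imag = statistics.pstdev(imag)
# complex std combines both parts: sqrt(Var(real) + Var(imag))
std_complex = math.sqrt(statistics.pvariance(real) + statistics.pvariance(imag))

# expect roughly 0.707, 0.707, 1.000
print(f"real: {std_real:.3f}  imag: {std_imag:.3f}  complex: {std_complex:.3f}")
```

This is the baseline the CPU results above match, and which the MPS results deviate from.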
### Versions
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.2.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Nov 10 2023, 13:38:27) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-14.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[conda] Could not collect
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @amjames @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,module: complex,module: correctness (silent),module: mps | low | Critical |
2,507,721,541 | pytorch | DISABLED test_tmp_not_defined_issue2_dynamic_shapes_cpu (__main__.DynamicShapesCodegenCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tmp_not_defined_issue2_dynamic_shapes_cpu&suite=DynamicShapesCodegenCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29717243451).
Over the past 3 hours, it has been determined flaky in 36 workflow(s) with 36 failures and 36 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tmp_not_defined_issue2_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 8904, in test_tmp_not_defined_issue2
self.common(forward, args, atol=1e-5, rtol=1e-5)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 399, in common
return check_codegen(
^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 78, in check_codegen
_, code = run_and_get_cpp_code(run, *example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/utils.py", line 1936, in run_and_get_cpp_code
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 72, in run
def run(*ex, **kwargs):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenCpuTests.test_tmp_not_defined_issue2_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,507,721,546 | pytorch | DISABLED test_dtype_sympy_expr_dynamic_shapes_cpu (__main__.DynamicShapesCodegenCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dtype_sympy_expr_dynamic_shapes_cpu&suite=DynamicShapesCodegenCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29710808939).
Over the past 3 hours, it has been determined flaky in 36 workflow(s) with 36 failures and 36 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dtype_sympy_expr_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 8550, in test_dtype_sympy_expr
result = fn(torch.randn([1, 2, 16, 4]).requires_grad_())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 8545, in fn
@torch._dynamo.optimize_assert("inductor")
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenCpuTests.test_dtype_sympy_expr_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,507,721,775 | pytorch | DISABLED test_embedding_bag_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_embedding_bag_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29710808939).
Over the past 3 hours, it has been determined flaky in 27 workflow(s) with 27 failures and 27 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_embedding_bag_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 4936, in test_embedding_bag
self.common(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 381, in check_model
eager_result = model(*ref_inputs, **ref_kwargs)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 4934, in fn
return aten._embedding_bag(w, i, o, False, 0, False, None)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_ops.py", line 1116, in __call__
return self._op(*args, **(kwargs or {}))
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_embedding_bag_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @clee2000 @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | triaged,module: flaky-tests,skipped,oncall: pt2,oncall: export,module: aotinductor | medium | Critical |
2,507,723,250 | pytorch | DISABLED test_torch_manual_seed_seeds_cuda_devices (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_torch_manual_seed_seeds_cuda_devices&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29712538122).
Over the past 3 hours, it has been determined flaky in 16 workflow(s) with 48 failures and 16 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_torch_manual_seed_seeds_cuda_devices`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1232, in not_close_error_metas
pair.compare()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 711, in compare
self._compare_values(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 841, in _compare_values
compare_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1023, in _compare_regular_values_close
if torch.all(matches):
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 551, in test_torch_manual_seed_seeds_cuda_devices
self.assertEqual(x, y)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3846, in assertEqual
error_metas = not_close_error_metas(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1239, in not_close_error_metas
f"Comparing\n\n"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 378, in __repr__
body = [
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 379, in <listcomp>
f" {name}={value!s},"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 464, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 708, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 625, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 357, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 145, in __init__
nonzero_finite_vals = torch.masked_select(
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_torch_manual_seed_seeds_cuda_devices
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda_expandable_segments.py`
cc @clee2000 | triaged,module: flaky-tests,skipped,module: unknown | low | Critical |
2,507,764,646 | PowerToys | Microsoft PowerToys for Windows Server 2019 or Microsoft PowerToys for RDP | ### Description of the new feature / enhancement
Good afternoon. Is it possible to implement Microsoft PowerToys for Windows Server 2019, or Microsoft PowerToys for RDP?
My work involves remote RDP connections; I connect to Windows Server 2019 remotely.
### Scenario when this would be used?
I would like to see at least partial functionality implemented for RDP:
- Always On Top
- FancyZones
- Image Resizer
- PowerRename
- Text Extractor
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,507,783,675 | TypeScript | Allow --build with Intermediate Errors - shouldn't happen on tsconfig errors | ### 🔎 Search Terms
intermediate error
### 🕗 Version & Regression Information
This is about a new feature in 5.6
### ⏯ Playground Link
-
### 💻 Code
tsconfig.json
```jsonc
{
  "extends": "../wrong/path/to/some/tsconfig.json",
  "compilerOptions": {
    // nothing, expects to inherit things
  }
}
```
### 🙁 Actual behavior
When compiling such a project, `tsc` does complain that it cannot find the base config, but it continues building anyway. This is a big problem when the base config contains important settings like `"outDir": "${configDir}/lib"`: TS will happily build using the defaults, writing a ton of .js and .d.ts files next to the source files.
### 🙂 Expected behavior
For tsconfig.json file errors, do not continue building the project.
### Additional information about the issue
_No response_ | Suggestion,Awaiting More Feedback,Domain: tsc -b | low | Critical |
2,507,821,570 | opencv | Wrong pixel value after resize with Linear interpolation | ### System Information
OpenCV version: 4.7.0
Operating System / Platform: Debian 12
Compiler & compiler version: GCC 12.2.0
### Detailed description
Pixel values are sometimes wrong (off by one) when using `cv2.resize` with `cv2.INTER_LINEAR_EXACT`.
### Steps to reproduce
To reproduce, run the following script. It resizes a 7x7 image to a 3x3 image. The bottom-right pixel value in the resized image is wrong: it is 128 but should be 129, because the bilinear interpolation for that pixel yields 128.5555..., which should round to 129. The value 128.5555... can be obtained by numeric derivation or by the `exact_computation` function in the following script.
```python
import cv2
import numpy as np
import math
def exact_computation(q11, q12, q21, q22, pix_x, pix_y, orig_x, orig_y, scale_x, scale_y):
    # 0.5 must be added, because pixel values are in the middle of a pixel
    x1 = math.floor((pix_x + 0.5) * scale_x) + 0.5
    y1 = math.floor((pix_y + 0.5) * scale_y) + 0.5
    x2 = math.ceil((pix_x + 0.5) * scale_x) + 0.5
    y2 = math.ceil((pix_y + 0.5) * scale_y) + 0.5
    x = (pix_x + 0.5) * scale_x
    y = (pix_y + 0.5) * scale_y
    # equations from https://en.wikipedia.org/wiki/Bilinear_interpolation section Repeated linear interpolation
    f_x_y1 = ((x2 - x) / (x2 - x1)) * q11 + ((x - x1) / (x2 - x1)) * q21
    f_x_y2 = ((x2 - x) / (x2 - x1)) * q12 + ((x - x1) / (x2 - x1)) * q22
    f_x_y = ((y2 - y) / (y2 - y1)) * f_x_y1 + ((y - y1) / (y2 - y1)) * f_x_y2
    return f_x_y
image = np.zeros((7, 7), dtype='uint8')
image[5, 5] = 78
image[5, 6] = 151
image[6, 5] = 245
image[6, 6] = 53
new_height = 3
new_width = 3
result = cv2.resize(image, (3, 3), cv2.INTER_LINEAR_EXACT)
print(result[2, 2])
correct_result = exact_computation(image[5, 5], image[6, 5], image[5, 6], image[6, 6], 2, 2, 7, 7, 7 / 3, 7 / 3)
print(correct_result)
assert result[2, 2] == 129
```
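The 128.5555... value can be cross-checked with exact rational arithmetic. The following sketch reuses the corner values and the 7/3 scale from the script above (output pixel (2, 2) maps to source coordinate (2 + 0.5) · 7/3 = 35/6 in each axis; the neighbouring sample centres sit at 5.5 and 6.5, giving weights 2/3 and 1/3):

```python
from fractions import Fraction

w_near = Fraction(2, 3)  # weight toward the sample at 5.5
w_far = Fraction(1, 3)   # weight toward the sample at 6.5

row_top = w_near * 78 + w_far * 151      # blend image[5, 5] and image[5, 6]
row_bottom = w_near * 245 + w_far * 53   # blend image[6, 5] and image[6, 6]
value = w_near * row_top + w_far * row_bottom

print(value)                 # 1157/9
print(round(float(value)))   # 129
```

1157/9 = 128.5555..., so the correctly rounded 8-bit result is 129, not 128.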
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | question (invalid tracker) | low | Minor |
2,507,844,830 | yt-dlp | Fails to utilize proxy | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
The development release of yt-dlp, 2024.9.2.232855.dev0, works locally, but when deployed to the cloud it constantly faces IP bans (it is required to sign in to verify that it is not a bot). I tried to use Zyte proxies with yt-dlp, but it seems to fail to utilize them.
I suspect the problem is that it lacks the ability to verify custom CA certificates.
Even though it has the "--client-certificate" option, that did not help. Perhaps this option is not designed to work with CA certificates?
youtube-transcript-api had a similar issue: it also failed to utilize Zyte proxies, and the issue was resolved by this fork, where verification of a CA certificate was added:
https://github.com/jdepoix/youtube-transcript-api/issues/303#issuecomment-2274163459
https://github.com/danielsanmartin/youtube-transcript-api/commit/f6f51d007f14fd588a3c95c4c7abb6d36f536e4a
Could you add an option to verify CA certificates, so that it would be possible to use Zyte proxies for ban handling with the library? I tried a number of ways to use Zyte proxies without verifying zyte-ca.crt, but without that verification it does not work and gives an error.
Here is Zyte’s information on their zyte-ca.crt certificate: https://docs.zyte.com/misc/ca.html
I am unsure whether this issue is a bug or a feature request, as I do not fully understand what causes it. I used the same Zyte proxy and zyte-ca.crt certificate with the 'requests' library and the mentioned youtube-transcript-api branch, where they worked. However, it fails here for some reason.
How I tried to utilize the Zyte proxy with yt-dlp:
First, I ran Python code that contained this:
```python
zyte_proxy = f"https://{zyte_api_key}:@proxy.zyte.com:8011"
zyte_certificate_path = "/path/to/zyte-ca.crt"
# yt-dlp options
ydl_opts = {
'format': 'm4a/bestaudio/best',
'proxy': zyte_proxy, # Use Zyte proxy
'--client-certificate': zyte_certificate_path, # Path to the Zyte certificate
'nocheckcertificate': False, # Ensure certificates are verified
'postprocessors': [{ # Extract audio using ffmpeg
'key': 'FFmpegExtractAudio',
'preferredcodec': 'm4a',
}],
}
```
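For what it's worth, while a dedicated option is missing, one workaround I would expect to help (a sketch, not verified against yt-dlp internals) is pointing Python's TLS stack at a bundle that includes the proxy CA: `requests` honors the `REQUESTS_CA_BUNDLE` environment variable, and OpenSSL (via Python's `ssl` module) honors `SSL_CERT_FILE`. The `/path/to/zyte-ca.crt` path is a placeholder:

```python
import os
import ssl

def trust_extra_ca(ca_path: str) -> ssl.SSLContext:
    """Build an SSL context that trusts the system store plus an extra CA file."""
    ctx = ssl.create_default_context()
    if os.path.exists(ca_path):
        # Adds the proxy CA on top of the default trust store.
        ctx.load_verify_locations(cafile=ca_path)
    return ctx

# Environment-variable route: libraries that honor these variables will
# verify the proxy's certificate against the given bundle.
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/zyte-ca.crt"  # hypothetical path
```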
I receive this in the terminal:
> Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP.
Then I update the zyte_proxy URL to be HTTP, run again, and receive this in the terminal:
> [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
Then I remove the "nocheckcertificate" and "--client-certificate" options, run again, and receive this in the terminal:
> Unable to download webpage: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
Then I update the zyte_proxy URL back to HTTPS, run again, and receive this:
> Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP.
Also, when I run the command "yt-dlp -U" in the terminal, it prints:
> Latest version: nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds
> yt-dlp is up to date (nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds)
I also captured the verbose output of this code:
```python
zyte_proxy = f"https://{zyte_api_key}:@proxy.zyte.com:8011"
zyte_certificate_path = "/path/to/zyte-ca.crt"
# yt-dlp options
ydl_opts = {
'format': 'm4a/bestaudio/best',
'verbose': True,
'proxy': zyte_proxy, # Use Zyte proxy
'--client-certificate': zyte_certificate_path, # Path to the Zyte certificate
'nocheckcertificate': False, # Ensure certificates are verified
'postprocessors': [{ # Extract audio using ffmpeg
'key': 'FFmpegExtractAudio',
'preferredcodec': 'm4a',
}],
}
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds [e8e6a982a] (pip) API
[debug] params: {'format': 'm4a/bestaudio/best', 'verbose': True, 'proxy': 'https://I_hided_zyte_api_key_here:@proxy.zyte.com:8011', '--client-certificate': '/path/to/zyte-ca.crt', 'nocheckcertificate': False, 'postprocessors': [{'key': 'FFmpegExtractAudio', 'preferredcodec': 'm4a'}], 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}
[debug] Python 3.10.14 (CPython arm64 64bit) - macOS-14.5-arm64-arm-64bit (OpenSSL 3.3.1 4 Jun 2024)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.0, urllib3-2.2.2, websockets-13.0.1
[debug] Proxy map: {'all': 'https://I_hided_zyte_api_key_here:@proxy.zyte.com:8011'}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1832 extractors
[youtube] Extracting URL: https://www.youtube.com/watch?v=2WcbPcGrQZU
[youtube] 2WcbPcGrQZU: Downloading webpage
WARNING: [youtube] Unable to download webpage: ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)')))
[youtube] 2WcbPcGrQZU: Downloading ios player API JSON
[youtube] 2WcbPcGrQZU: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))). Retrying (1/3)...
WARNING: [youtube] ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))). Retrying (2/3)...
[youtube] 2WcbPcGrQZU: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))). Retrying (3/3)...
[youtube] 2WcbPcGrQZU: Downloading ios player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))) (caused by ProxyError("('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)')))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] 2WcbPcGrQZU: Downloading iframe API JS
WARNING: [youtube] Unable to download webpage: ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)')))
[youtube] 2WcbPcGrQZU: Downloading web creator player API JSON
WARNING: [youtube] ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))). Retrying (1/3)...
[youtube] 2WcbPcGrQZU: Downloading web creator player API JSON
[youtube] 2WcbPcGrQZU: Downloading web creator player API JSON
WARNING: [youtube] ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))). Retrying (2/3)...
WARNING: [youtube] ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))). Retrying (3/3)...
[youtube] 2WcbPcGrQZU: Downloading web creator player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)'))) (caused by ProxyError("('Unable to connect to proxy. Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1007)')))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [youtube] 2WcbPcGrQZU: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 740, in extract
ie_result = self._real_extract(url)
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py", line 4263, in _real_extract
webpage, master_ytcfg, player_responses, player_url = self._download_player_responses(url, smuggled_data, video_id, webpage_url)
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py", line 4227, in _download_player_responses
player_responses, player_url = self._extract_player_responses(
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py", line 3904, in _extract_player_responses
raise ExtractorError('Failed to extract any player response')
Traceback (most recent call last):
File "/path/to/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
File "/path/to/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 740, in extract
ie_result = self._real_extract(url)
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py", line 4263, in _real_extract
webpage, master_ytcfg, player_responses, player_url = self._download_player_responses(url, smuggled_data, video_id, webpage_url)
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py", line 4227, in _download_player_responses
player_responses, player_url = self._extract_player_responses(
File "/path/to/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py", line 3904, in _extract_player_responses
raise ExtractorError('Failed to extract any player response')
yt_dlp.utils.ExtractorError: [youtube] 2WcbPcGrQZU: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/te_yt_dlp_with_proxy_and_certificate_2.py", line 29, in <module>
error_code = ydl.download(URLS)
File "/path/to/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 3607, in download
self.__download_wrapper(self.extract_info)(
File "/path/to/Documents/miniconda3/envs/dev_conda_based_no_captions_files_video_summary_1/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 3582, in wrapper
res = func(*args, **kwargs)
File "/path/to/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1615, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "/path/to/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1644, in wrapper
self.report_error(str(e), e.format_traceback())
File "/path/to/Documents/miniconda3/envs/dev_conda_based_no_captions_files_video_summary_1/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1092, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "/path/to/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1031, in trouble
raise DownloadError(message, exc_info)
yt_dlp.utils.DownloadError: ERROR: [youtube] 2WcbPcGrQZU: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| bug,triage,core:networking | low | Critical |
2,507,874,103 | TypeScript | Constraints of nested conditional types applied to constrained type variables are incorrect | ### 🔎 Search Terms
constraints conditional types type variables nested
### 🕗 Version & Regression Information
- This is the behavior in every version I tried
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.0-dev.20240904#code/C4TwDgpgBAkgzgWQIbAMYAsCWA7A5gZWACcdcB5bCAGQgDcIAbAHgBUoIAPYCbAEzihxipAHxQAvFDadufAQAMkAEgDeQkngC+8gFBQoAfijEArhD1QAXFABmSBnAgBuHTpsnsqYJgD22WwBMTPjsXDz8gsJ4IgAUHNbwyGhYeIQa5JQ09Mz4IgCUUCoWDBDAxtam0JIcTlAA9HVQAKJERD5ExaW21nYOVVA19Y0tbR2arqCQsIgoGKRppCwA7j5ZjHCsobIR6qISUlvhCspqUbjaFkbSYXJQ8qq7WgBeuvr6RpUW+j32jhY-fRcbg8Xl8-hsAGZgodbo9cLF4tMknNUmdlqs6OtgvlCp0ysAKkQzPtBg1mq12njurZfv1SY0yABrKAAIxMZTg6B8JgYvFZ0CQ-ggFLGOiAA
### 💻 Code
```ts
type IsMatchingStringOneLevel<T extends string> = T extends `a${string}`
? true
: false;
function f2<S extends string>(x: IsMatchingStringOneLevel<S>) {
let t: true = x; // Error
let f: false = x; // Error
}
type IsMatchingStringTwoLevels<T extends string> = T extends `a${string}`
? T extends `${string}z`
? true
: false
: false;
function f3<S extends string>(x: IsMatchingStringTwoLevels<S>) {
let t: true = x; // Error
let f: false = x; // Ok but should be an error
}
```
### 🙁 Actual behavior
It doesn't error on the fourth assignment
### 🙂 Expected behavior
It should error
### Additional information about the issue
This is just a variant of what was fixed for one-level conditionals in https://github.com/microsoft/TypeScript/pull/56004 | Bug,Help Wanted | low | Critical |
2,507,939,714 | vscode | Incorrect behavior of granting access for the extension to existing authentication session using Manage Trusted Extensions functionality |
Does this issue occur when all extensions are disabled?: This issue concerns VS Code functionality provided FOR extensions, not a specific extension.
- VS Code Version: 1.92.2
- OS Version: macOS 13.6.8
### Steps to Reproduce:
1. Create an extension with AuthenticationProvider, based on the sample: https://github.com/microsoft/vscode-extension-samples/blob/main/authenticationprovider-sample/src/authProvider.ts
2. Sign in, using credentials.
3. Go to Accounts -> extension's account -> Manage Trusted Extensions:
<img width="547" alt="image" src="https://github.com/user-attachments/assets/c155e961-6912-49f0-8c0f-5b414bbf1522">
4. Observe the list of trusted extensions for the selected account:
<img width="626" alt="image" src="https://github.com/user-attachments/assets/44b5776f-6e9f-434d-974c-1fbb7a074135">
5. Uncheck the extension from the list of trusted and press OK:
<img width="626" alt="image" src="https://github.com/user-attachments/assets/23f98eda-f22f-4fc5-acb1-7ed9cd4e5de2">
6. Observe an event is fired inside **vscode.authentication.onDidChangeSessions** subscription and method **await vscode.authentication.getSession(MyAuthenticationProvider.id, [], { createIfNone: false })** returns undefined:
<img width="526" alt="image" src="https://github.com/user-attachments/assets/baa226c3-7796-4611-8596-ac2cd22ee7f8">
7. Observe the option _Grant access to {Auth Provider} for {Extension}... (1)_ appeared under the Accounts menu:
<img width="421" alt="image" src="https://github.com/user-attachments/assets/60a71c21-b9de-47a1-9d8c-51476938cf28">
8. Select Accounts -> _Grant access to {Auth Provider} for {Extension}... (1)_
9. Observe a confirmation popup and press Allow:
<img width="262" alt="image" src="https://github.com/user-attachments/assets/ddb29194-98e2-4219-ab75-2fe97454ef7e">
10. Observe an event is fired inside **vscode.authentication.onDidChangeSessions** subscription and method **await vscode.authentication.getSession(MyAuthenticationProvider.id, [], { createIfNone: false })** returns session:
<img width="524" alt="image" src="https://github.com/user-attachments/assets/d24acc06-d44a-4a13-b699-e5ee6cb30faa">
11. Observe _Grant access to My Provider for My Extension... (1)_ is no longer exists in the Accounts menu:
<img width="377" alt="image" src="https://github.com/user-attachments/assets/05ae3d40-2ad1-4ef6-baff-ea9c56b9489c">
12. Repeat steps 3 - 7 for removing the extension from the list of trusted extensions again.
13. Navigate to Accounts -> extension's account -> Manage Trusted Extensions
14. Check the extension as trusted and press OK:
<img width="609" alt="image" src="https://github.com/user-attachments/assets/06cf8611-3325-44b0-b3c7-bd1673b181e9">
15. An event is fired inside **vscode.authentication.onDidChangeSessions** subscription and method **await vscode.authentication.getSession(MyAuthenticationProvider.id, [], { createIfNone: false })** returns session:
<img width="526" alt="image" src="https://github.com/user-attachments/assets/965a0457-4d10-4142-a281-36405b78a805">
### Actual Result:
_Grant access to My Provider for My Extension... (1)_ is still available in the Accounts menu, even though access has been granted:
<img width="425" alt="image" src="https://github.com/user-attachments/assets/8d33f630-91d7-45dd-b562-88cb14d1791d">
### Expected Result:
- _Grant access to My Provider for My Extension... (1)_ should NOT be available in the Accounts menu.
### Consequences:
Since VS Code doesn’t provide a direct API to detect when the extension is checked/unchecked in the "Manage Trusted Extensions" dialog, it is impossible to work with authentication correctly.
While both the user account and _Grant access to My Provider for My Extension... (1)_ are available in the Accounts menu, it is possible to sign in under two accounts, even if `{ supportsMultipleAccounts: false }` was passed when registering the Authentication Provider, using
```ts
context.subscriptions.push(
vscode.authentication.registerAuthenticationProvider(
'myCustomProvider',
'My Provider',
new MyAuthenticationProvider (context.secrets),
{ supportsMultipleAccounts: false },
),
);
```
To reproduce this behavior, follow these steps:
1. Navigate to Accounts -> account -> Sign Out
2. Observe that _Sign In with My Provider to use My Extension (1)_ appears in the Accounts menu
3. Sign in using another account
4. Observe that the new user account appears in the Accounts menu.
5. Press _Grant access to My Provider for My Extension... (1)_
6. Observe that both the previous and the new account for the same extension are presented in the Accounts menu.
| bug,authentication | low | Critical |
2,507,949,302 | kubernetes | Unable to set new pre-alpha versioned featuregate to enabled during apiserver initialisation in integration tests | While developing a new in-tree feature, I need to add integration tests which conditionally enable/disable my feature. I've added the gate to the 'versioned feature gates' file, which seems to work ok. However, my feature gate's configuration is read upon initialisation of the apiserver (it's read by an admission plugin). This means I *must* set this feature gate to be enabled *prior* to calling `StartTestServerOrDie` (or at least, prior to this function actually running/starting the apiserver startup functions).
This feature is considered 'pre-alpha' in 1.31, as it would only become alpha in 1.32 (assuming it's merged etc.). This means that if I attempt to set this flag to 'true' without also changing the 'emulation version', I receive an error that the feature cannot be enabled at this version (1.31) as it is PreAlpha.
So, I have updated my call to StartTestServer to set the BinaryVersion in the instance options to '1.32':
```go
// Force to run in 1.32 mode as we are testing a feature that only exists (in alpha) from 1.32 onwards.
opts := kubeapiservertesting.NewDefaultTestServerOptions()
opts.BinaryVersion = "1.32"
server := kubeapiservertesting.StartTestServerOrDie(t, opts, framework.DefaultTestServerFlags(), framework.SharedEtcd())
```
However, I still need to set the feature gate itself to enabled, as it's disabled by default (it's an alpha feature). This presents a chicken-egg problem for me:
If I add `featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.SetPodTopologyLabels, true)` before `StartTestServerOrDie`, the call to set the emulation version to 1.32 has not happened yet (it is within StartTestServerOrDie).
If I set the flag *after*, my feature flag value will have already been read by the admission plugin and I won't get the behaviour I need.
The only alternative I can think of here is to set feature gates using the `--feature-gates` argument passed in InstanceOptions, but this is a departure from how we'd have done this prior to the introduction of versioned feature gates, so I wanted to gather some feedback on how we're _supposed_ to enable pre-alpha feature gates that are required during apiserver initialisation.
/area testing
/sig architecture
| kind/bug,area/test,sig/api-machinery,sig/architecture,triage/accepted | low | Critical |
2,507,958,772 | pytorch | [Torch.Export] Failure using export_for_training: KeyError: 'self___conv_layers_0_0.weight' | ### 🐛 Describe the bug
When trying to use torch.export.export_for_training using a sample model like:
```python
class SampleModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.input_layers = torch.nn.Sequential(
torch.nn.Conv2d(2, 2, kernel_size=1),
)
self.conv_layers = [
torch.nn.Sequential(
torch.nn.Conv2d(2, 2, kernel_size=1),
)
]
def forward(self, x):
x = self.input_layers(x)
for layer in self.conv_layers:
x = layer(x)
return x
model = SampleModel()
input_data = (torch.randn([2, 2, 1, 1]),)
model.eval()
graph_module = torch.export.export_for_training(model, input_data).module()
```
I get the following error message:
```
spec.target = param_buffer_table[spec.target]
KeyError: self___conv_layers_0_0.weight
```
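In case it's useful while waiting for a fix: the `KeyError` names a parameter under `conv_layers`, which is held in a plain Python list, so its submodules are never registered with the parent module. A sketch of a workaround (assuming `nn.ModuleList` is acceptable for your use case):

```python
import torch

class SampleModelFixed(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.input_layers = torch.nn.Sequential(
            torch.nn.Conv2d(2, 2, kernel_size=1),
        )
        # nn.ModuleList registers its children, so their parameters appear
        # in named_parameters()/state_dict(), unlike a plain Python list.
        self.conv_layers = torch.nn.ModuleList([
            torch.nn.Sequential(
                torch.nn.Conv2d(2, 2, kernel_size=1),
            ),
        ])

    def forward(self, x):
        x = self.input_layers(x)
        for layer in self.conv_layers:
            x = layer(x)
        return x

model = SampleModelFixed()
param_names = {name for name, _ in model.named_parameters()}
# param_names now includes "conv_layers.0.0.weight"
```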
Any input into this issue would be greatly appreciated!
Thanks!
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0.dev20240901+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA RTX A2000 8GB Laptop GPU
Nvidia driver version: 545.29.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12800H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5606.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==0.4.0a0+ee752f0
[pip3] numpy==1.21.3
[pip3] torch==2.5.0.dev20240901+cpu
[pip3] torchaudio==2.5.0.dev20240901+cpu
[pip3] torchgen==0.0.1
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20240901+cpu
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,oncall: export | low | Critical |
2,507,980,122 | kubernetes | Unclear with Container garbage collection configuration | ### What happened?
I am trying to disable the maxPerPodContainer feature in my cluster. To achieve this I followed the docs on [container garbage collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection) and went through the code related to garbage collection.
I configured the container garbage collection parameters in the kubelet config file and restarted the kubelet successfully.
After that, the values persist in the kubelet configuration file, but they are not actually reflected in the cluster.
### What did you expect to happen?
The container garbage collection parameters which I added to the config file should be reflected in the cluster:
**minAge**, **maxPerPodContainer**, **maxContainers**
### How can we reproduce it (as minimally and precisely as possible)?
Ex:
1. Add the values below to the kubelet configuration file, /var/lib/kubelet/config.yaml:
```yaml
minAge: "5m"
maxPerPodContainer: 1
maxContainers: 50
```
2. Restart the kubelet service:
`sudo systemctl restart kubelet`
Make sure the kubelet restarted successfully: `sudo systemctl status kubelet`
3. Run: `kubectl proxy`
4. Open another terminal and check whether the added parameters are present in the configuration:
`curl http://localhost:8001/api/v1/nodes/<hostname>/proxy/configz`
### Anything else we need to know?
I tried many ways to add these parameters, but to no avail; they are not even listed in api-resources.
https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
Is it possible to define these values in the kubelet as mentioned in the [docs](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection)? If so, how can we configure them?
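For reference, the fields named in that docs page describe the kubelet's internal GC policy, which is surfaced through deprecated kubelet CLI flags rather than `KubeletConfiguration` fields — a sketch of the flag-based equivalent (assumption: this mapping still applies on v1.30; verify against your kubelet version before relying on it):

```shell
# Deprecated kubelet flags corresponding to the container GC policy fields
# (add to the kubelet service's ExecStart, then restart the service):
--minimum-container-ttl-duration=5m        # minAge
--maximum-dead-containers-per-container=1  # maxPerPodContainer
--maximum-dead-containers=50               # maxContainers
```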
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.3
```
</details>
| area/kubelet,kind/documentation,sig/node,help wanted,priority/important-longterm,needs-kind,triage/accepted | low | Major |
2,507,995,561 | pytorch | Disable `retain_grad` on non-leaf tensors? | ### 🚀 The feature, motivation and pitch
Non-leaf tensors can be set to retain gradients with `tensor.retain_grad()`, after which `tensor.retains_grad is True` (for non-leaf tensors only it seems).
Is it possible to revert this? If not (which I believe is the case so far), that would be the feature request.
For example, I'd like the API `tensor.retain_grad(False)`. By the way, I don't really understand why there is no trailing underscore (`retain_grad_()`), but that's another point.
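A minimal sketch of the current behavior (assuming a recent PyTorch; the commented-out call is the hypothetical API proposed here, not an existing one):

```python
import torch

x = torch.ones(3, requires_grad=True)  # leaf tensor: grad is kept by default
y = x * 2                              # non-leaf tensor: grad is discarded by default
y.retain_grad()                        # opt in to keeping y.grad
assert y.retains_grad                  # True, and currently there is no way to undo it
# y.retain_grad(False)                 # hypothetical inverse proposed above; does not exist today
y.sum().backward()
print(y.grad)                          # gradient is retained on the non-leaf tensor
```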
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | module: autograd,triaged,enhancement,actionable | low | Minor |
2,508,007,693 | vscode | LSP DocumentColor Refresh Request | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
LSP Servers need a way to trigger documentColor refresh requests in VS Code, just like other refresh requests, including the newly added [foldingRange refresh request](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.18/specification/#workspace_foldingRange_refresh) proposed in the 3.18 version of the LSP spec.
## Scenario
XAML color swatch support. Colors in XAML files should appear in the following places. For example:
1. `Background="#FFFF0000"`
2. `Background="Red"`
3. `Background="{StaticResource RedBrush}"`
The first two are possible with the current capabilities. The third, on the other hand, requires semantic information, which is usually not available yet by the time the `documentColor` request is made. `RedBrush` may also be defined in an external document, and when the value for `RedBrush` is changed there, the change won't be reflected in the current document until a text change is made. To solve this challenge, we should introduce and support a documentColor refresh request: `workspace/documentColor/refresh`
cc @dbaeumer
| feature-request,api | low | Minor |
2,508,009,683 | next.js | The `app-dir-i18n-routing` example doesn't respect defaultLocale and doesn't preserve selected locale while navigating | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.3.0: Thu Dec 21 02:29:41 PST 2023; root:xnu-10002.81.5~11/RELEASE_ARM64_T8122
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 21.7.2
npm: 10.5.0
Yarn: N/A
pnpm: 9.6.0
Relevant Packages:
next: 14.2.8 // Latest available version is detected (14.2.8).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 4.9.5
Next.js Config:
output: N/A
```
### Which example does this report relate to?
app-dir-i18n-routing
### What browser are you using? (if relevant)
Chrome 128.0.6613.119
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
1. The [`defaultLocale`](https://github.com/vercel/next.js/blob/canary/examples/app-dir-i18n-routing/i18n-config.ts#L2) is not being respected (setting this parameter to anything will still bring `/en` in the URL);
2. The locale falls back to `/en` after changing it to something different (e.g. `de`) and navigating to a different page.
### Expected Behavior
1. I set `defaultLocale` and I can see it by default.
2. After selecting another locale and navigating to another page, I can see the selected locale in the URL.
### To Reproduce
## Issue 1 (`defaultLocale` is not being respected)
### Step 1
Go to the https://github.com/vercel/next.js/blob/canary/examples/app-dir-i18n-routing/i18n-config.ts#L2 and change `defaultLocale` to `de` or `cs`.
### Step 2
Start application.
### Step 3
You will see `http://localhost:3000/en` instead of `http://localhost:3000/de`.
## Issue 2 (locale isn't preserved after changing it and navigating to a different page)
### Step 1
Add new page under `[lang]` folder:
```tsx
// app/[lang]/johnny/page.tsx
import { Locale } from "../../../i18n-config";
import Navigation from "../components/navigation";
export default async function JohnnyPage({
params: { lang },
}: {
params: { lang: Locale };
}) {
return (
<div>
<Navigation />
<div>
<p>Current locale: {lang}</p>
</div>
</div>
);
}
```
### Step 2
Add `Navigation` component:
```tsx
// app/[lang]/components/navigation.tsx
import Link from "next/link"
export default function Navigation () {
return (
<nav>
<ul>
<li>
<Link href="/">home</Link>
</li>
<li>
<Link href="/johnny">johnny</Link>
</li>
</ul>
</nav>
)
}
```
### Step 3
Include `Navigation` to the main page:
```tsx
// app/[lang]/page.tsx
import { getDictionary } from "../../get-dictionary";
import { Locale } from "../../i18n-config";
import Counter from "./components/counter";
import LocaleSwitcher from "./components/locale-switcher";
import Navigation from "./components/navigation"; // include Navigation
export default async function IndexPage({
params: { lang },
}: {
params: { lang: Locale };
}) {
const dictionary = await getDictionary(lang);
return (
<div>
<Navigation /> {/* this was added */}
<LocaleSwitcher />
<div>
<p>Current locale: {lang}</p>
<p>
This text is rendered on the server:{" "}
{dictionary["server-component"].welcome}
</p>
<Counter dictionary={dictionary.counter} />
</div>
</div>
);
}
```
### Step 4
Start the application
### Step 5
Change locale from `en` to `de`(or `cs`)
### Step 6
In the navigation click on `johnny`
### Step 7
The redirection will happen to http://localhost:3000/en/johnny (instead of http://localhost:3000/de/johnny). | examples | low | Critical |
2,508,015,142 | next.js | iFrame will not call the onLoad handler in dev mode | ### Link to the code that reproduces this issue
https://github.com/robcaldecott/iframe-nextjs
### To Reproduce
Clone the repo and first build the app that will be hosted in an `iframe` (this part is build using Vite but that does not matter).
```bash
cd remote
npm install
npm run dev
```
This will start a little app on http://localhost:5173. Leave this running.
Now, in a separate shell, run the NextJS app that exhibits the issue:
```bash
cd host
npm install
npm run dev
```
Open http://localhost:3000 in your browser, and open the DevTools so you can see the console.
Refresh the page and note that there are no console messages and the three `iframe` components on the page do not get the `onLoad` handler called.

However, if you make changes to the source and the app is reloaded by HMR then sometimes the `onLoad` event will work. But this is intermittent.
Next, shut down the NextJS `host`, build it for production and start it up:
```bash
cd host
npm run build
npm start
```
Go to http://localhost:3000 and refresh the page - you will see that `onLoad` is now working for all three `iframe` components correctly.


### Current vs. Expected behavior
When running the NextJS `host` app in dev mode I would expect the `onLoad` handler on the `iframe` elements to work. However, it never works in dev mode when refreshing the page.
In production mode everything works correctly and the three `onLoad` events work as expected.
Included in the repo is a host app written using Vite and the latest React 18 build. This app does not suffer from this issue and `onLoad` works in dev and production.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.16.0
npm: 10.8.1
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.8 // Latest available version is detected (14.2.8).
eslint-config-next: 14.2.8
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I have tried the following to get the `onLoad` event to fire in dev mode:
- Make everything a client component with `use client`.
- Disabling React Strict mode in the NextJS config.
- Throttling the network.
- Throttling the CPU.
- Using `addEventListener` on the `iframe` `ref` to capture the `load` event.
- Replacing the iframe content with a public website URL.
- Using Safari instead of Chrome.
None of these make any difference: `onLoad` will not work when refreshing the page in dev mode. It will only fire occasionally when editing the code and causing the module to reload.
While the app works fine in production, I am relying on `onLoad` working in dev mode, as the host then communicates with each `iframe` using `postMessage` (this code has been removed to simplify the example). I can hack around this using a `useEffect` with a `setTimeout` to call `setLoaded`, but this is not ideal.
| bug | low | Minor |
2,508,029,488 | flutter | `onTap` callback for `DataRow` | ### Use case
Currently, there is no way to tap a whole row in a `DataTable`; you can only tap individual cells. By adding an `onTap` callback for the whole row, developers could use it to, e.g., open the details of the row.
### Proposal
Add an `onTap` callback to the `DataRow` widget, which works similarly to the `onLongPress` callback, just for a tap instead of a long press. | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
2,508,101,808 | vscode | Test: "Execute Using Profile..." should be "Execute Using Test Profile..." | I was confused about how to run unit tests on web with the test runner. Turns out it was this option, but I didn't realize it because I thought it was talking about a regular profile.

In terminal land we established a rule that any UI reference to "Terminal profiles" must always include the prefix to prevent this confusion. | polish,testing | low | Minor |
2,508,118,462 | go | x/tools/go/ssa/ssautil: Reachable: conservative approximation to reachable functions, runtime types | **Background:** The existing `ssautil.AllFunctions` helper performs a reachability analysis starting from all the packages in a Go SSA program, and returns the set of functions it encounters:
```go
package ssautil
func AllFunctions(prog *ssa.Program) map[*ssa.Function]bool
```
Unfortunately it is poorly defined and its implementation is a bit of a historical mess; it is complicated and delivers results that are simultaneously "too much" (imprecise) and "too little" (unsound).
An example of imprecision is that it searches the body of every function in the ssa.Program for MakeInterface operations, when only those functions that are reachable from the entry points need be considered. Another example is that it uses all packages of the program as roots for reachability, even though typically the program consists of all dependencies of a few packages of interest.
An example of unsoundness is that it doesn't consider all of the types derivable from the public API of a package as potential dynamic call targets.
Furthermore, its traversal is a lost opportunity to track information (the "address-taken" status of functions) that would be useful to CHA and VTA.
**Proposal:** We propose to add a new function `ssautil.Reachable` that:
- is derived from first principles
- is analytically sound (conservative)
- returns additional information useful to CHA, VTA, and vulncheck, that will improve the precision of their results while reducing their running time.
We plan to redefine the existing `AllFunctions` as a shallow wrapper around the new function.
```go
package ssautil
// Reachable returns a conservative approximation to the set of
// functions and runtime types that are potentially reachable from the
// specified packages (which need not form a whole program).
//
// Any source function in package P that is not among the functions
// (map keys) returned by Reachable(P) is unreachable ("dead
// code"--though for dead-code analysis of whole programs, we
// recommend the more precise RTA).
// Beware: the values of the functions map are not all "true": the
// result is a mapping from functions to their "address-taken" status.
//
// The runtime types returned by Reachable are those that are
// potentially converted to interfaces, making them potential targets
// of interface method calls.
//
// The results are computed using the following algorithm.
//
// First, we tabulate a set of initial functions and runtime types
// from the specified packages.
//
// For packages named "main", the initial functions are the entry
// points: the main function and the synthetic package initializer.
// There are no initial runtime types.
//
// For importable packages, the initial functions include the package
// initializer and all exported non-parameterized functions; these
// functions are assumed to be potentially callable from external
// packages not visible to the analysis. The initial runtime types are
// the types of every exported non-parameterized declaration; these
// types are accessible to reflection from external packages.
//
// Second, we compute a fixed point of the following induction rules:
//
// 1. Each function's instructions are analyzed. Any function
// referenced by an instruction is added to the set. Any
// non-interface type used as the operand of a MakeInterface
// instruction is added to the set of runtime types.
//
// 2. Each type is analyzed. Every element type found by recursively
// removing type constructors is added to the set of runtime types.
//
// 3. For each type in the set, its methods are added to the set of functions.
//
// The initial exported functions are all assumed to be
// "address-taken", making them candidate targets of dynamic function
// calls. All methods found by rule 3 are similarly considered
// address-taken. Functions found by rule 1 are considered
// address-taken unless they are used in the call position of an
// [ssa.CallInstruction].
//
// Finally, it returns the final sets of functions and runtime types.
//
// For best results, provide packages from a Program constructed with the
// [ssa.InstantiateGenerics] mode flag.
func Reachable(pkgs []*ssa.Package) (functions map[*ssa.Function]bool, runtimeTypes *typeutil.Map)
```
@timothy-king @zpavlinovic
| Proposal,Proposal-Accepted,Analysis | low | Major |
2,508,122,660 | flutter | Class "App" is declared twice in code provided in Step #5 on "main.dart" | I ran into this and asked Gemini to review the code as well as the error messages from my terminal.
The class "App" is declared twice: once near the top of main.dart (lines 15, 18, and 19) and again near the bottom (lines 118 and 119). Line numbers may vary given comments or other alterations, but in general, these are the locations in the document.
Fix: I just renamed the top class to "MyApp" on the top 3 lines, which removes the errors and enables display of the project output on the device. I'm suggesting the top class be renamed for the codelab.
Page that I'm looking at and the code is referenced: https://firebase.google.com/codelabs/firebase-get-to-know-flutter#4 | team-codelabs,p: firebase,P2,triaged-codelabs | low | Critical |
2,508,124,894 | flutter | TwoDimensionalScrollView lag for small-many grid | ### Steps to reproduce
You can checkout [dartpad](https://dartpad.dev/?id=01997ebded25bb83b9c297d8c25791f3)
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/gestures.dart';
import 'package:flutter/rendering.dart';
import 'dart:math' as math;
///* I was playing with a 2D GridView and my use case is to render thousands of grid cells.
/// It performs well with fewer, larger cells (>100px),
/// but it keeps lagging once I have many small cells (<20px).
/// I don't know if this is due to heavy computation in layoutChildSequence;
/// I am wondering whether there is any way of optimizing RenderTwoDimensionalViewport for many small cells.
/// !https://github.com/yeasin50/game_of_life/blob/master/lib/src/presentation/widgets/two_dimensional_gridview.dart
///
/// Create a grid view on TwoDimensionalScrollView
///
/// ```dart
/// TwoDimensionalGridView(
/// gridDimension: 20,
/// diagonalDragBehavior: DiagonalDragBehavior.free,
/// cacheExtent: 500,
/// delegate: TwoDimensionalChildBuilderDelegate(
/// maxXIndex: 130,
/// maxYIndex: 130,
/// builder: (context, vicinity) {
/// return Container(
/// height: 20,
/// width: 20,
/// color: Colors.primaries[(vicinity.xIndex + vicinity.yIndex) % Colors.primaries.length],
/// alignment: Alignment.center,
/// child: Text(
/// vicinity.toString(),
/// ),
/// );
/// },
/// ),
/// ),
/// ```
///
@Deprecated("Not usable for large items, use [TwoDimensionalCustomPaintGridView] instead")
class TwoDimensionalGridView extends TwoDimensionalScrollView {
const TwoDimensionalGridView({
super.key,
required super.delegate,
required this.gridDimension,
super.primary,
super.mainAxis = Axis.vertical,
super.verticalDetails = const ScrollableDetails.vertical(),
super.horizontalDetails = const ScrollableDetails.horizontal(),
super.cacheExtent,
super.diagonalDragBehavior = DiagonalDragBehavior.none,
super.dragStartBehavior = DragStartBehavior.start,
super.keyboardDismissBehavior = ScrollViewKeyboardDismissBehavior.manual,
super.clipBehavior = Clip.hardEdge,
});
/// size of the grid
final double gridDimension;
@override
Widget buildViewport(
BuildContext context,
ViewportOffset verticalOffset,
ViewportOffset horizontalOffset,
) {
return TwoDimensionalGridViewPort(
gridDimension: gridDimension,
cacheExtent: cacheExtent,
clipBehavior: clipBehavior,
verticalOffset: verticalOffset,
verticalAxisDirection: AxisDirection.down,
horizontalAxisDirection: AxisDirection.right,
horizontalOffset: horizontalOffset,
delegate: delegate,
mainAxis: mainAxis,
);
}
}
class TwoDimensionalGridViewPort extends TwoDimensionalViewport {
const TwoDimensionalGridViewPort({
super.key,
required super.verticalOffset,
required super.verticalAxisDirection,
required super.horizontalOffset,
required super.horizontalAxisDirection,
required super.delegate,
required super.mainAxis,
super.cacheExtent,
super.clipBehavior = Clip.hardEdge,
required this.gridDimension,
});
final double gridDimension;
@override
RenderTwoDimensionalViewport createRenderObject(BuildContext context) {
return RenderTreeViewPostT(
gridDimension: gridDimension,
horizontalOffset: horizontalOffset,
horizontalAxisDirection: horizontalAxisDirection,
verticalOffset: verticalOffset,
verticalAxisDirection: verticalAxisDirection,
delegate: delegate,
mainAxis: mainAxis,
childManager: context as TwoDimensionalChildManager,
);
}
@override
void updateRenderObject(BuildContext context, covariant RenderTwoDimensionalViewport renderObject) {
super.updateRenderObject(context, renderObject);
}
}
class RenderTreeViewPostT extends RenderTwoDimensionalViewport {
RenderTreeViewPostT({
required super.horizontalOffset,
required super.horizontalAxisDirection,
required super.verticalOffset,
required super.verticalAxisDirection,
required super.delegate,
required super.mainAxis,
required super.childManager,
required this.gridDimension,
});
/// There should arguably be a way to get this from the child instead, but since every grid cell has the same size, this is ignored here.
final double gridDimension;
@override
void layoutChildSequence() {
// FIXME: The laggy ui
final double horizontalPixels = horizontalOffset.pixels;
final double verticalPixels = verticalOffset.pixels;
final viewPortWidth = viewportDimension.width + cacheExtent;
final viewPortHeight = viewportDimension.height + cacheExtent;
final TwoDimensionalChildBuilderDelegate builderDelegate = delegate as TwoDimensionalChildBuilderDelegate;
final int maxRowIndex = builderDelegate.maxYIndex!;
final int maxColIndex = builderDelegate.maxXIndex!;
final int leadingColumn = math.max((horizontalPixels / gridDimension).floor(), 0);
final int leadingRow = math.max((verticalPixels / gridDimension).floor(), 0);
final int trailingColumn = math.min(((horizontalPixels + viewPortWidth) / gridDimension).ceil(), maxColIndex);
final int trailingRow = math.min(((verticalPixels + viewPortHeight) / gridDimension).ceil(), maxRowIndex);
double xLayoutOffset = (leadingColumn * gridDimension) - horizontalPixels;
for (int x = leadingColumn; x < trailingColumn; x++) {
double yLayoutOffset = (leadingRow * gridDimension) - verticalPixels;
for (int y = leadingRow; y < trailingRow; y++) {
final ChildVicinity childVicinity = ChildVicinity(xIndex: x, yIndex: y);
final RenderBox child = buildOrObtainChildFor(childVicinity)!;
child.layout(constraints.loosen());
parentDataOf(child).layoutOffset = Offset(xLayoutOffset, yLayoutOffset);
yLayoutOffset += gridDimension;
}
xLayoutOffset += gridDimension;
}
final double verticalExtent = gridDimension * (maxRowIndex + 1.0);
verticalOffset.applyContentDimensions(0, (verticalExtent - viewportDimension.height));
final double horizontalExtent = gridDimension * (maxColIndex + 1);
horizontalOffset.applyContentDimensions(0, (horizontalExtent - viewportDimension.width));
}
}
void main() {
runApp(MaterialApp(
debugShowCheckedModeBanner: false,
home: MyApp(),
scrollBehavior: MaterialScrollBehavior().copyWith(
dragDevices: {
PointerDeviceKind.mouse,
PointerDeviceKind.touch,
PointerDeviceKind.stylus,
PointerDeviceKind.unknown
},
),
));
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
body: TwoDimensionalGridView(
gridDimension: 20,
diagonalDragBehavior: DiagonalDragBehavior.free,
cacheExtent: 500,
delegate: TwoDimensionalChildBuilderDelegate(
maxXIndex: 200,
maxYIndex: 100,
builder: (context, vicinity) {
return SizedBox.expand(
child: ColoredBox(
key: ValueKey(vicinity),
color: Colors.primaries[(vicinity.xIndex + vicinity.yIndex) % Colors.primaries.length ~/ 2],
),
);
},
),
),
);
}
}
```
</details>
### What target platforms are you seeing this bug on?
Android, Web, Windows
### OS/Browser name and version | Device information
Any, Windows10
### Does the problem occur on emulator/simulator as well as on physical devices?
Yes
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.19045.4780], locale en-US)
• Flutter version 3.24.1 on channel stable at P:\symlinks_portal\fvm_cache\versions\3.24.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (2 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
```
</details>
| framework,c: performance,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.25 | low | Critical |
2,508,136,556 | godot | Cannot reliably reposition multiple windows using new 4.3 'Enable Multiple Instances' | ### Tested versions
v4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 31.0.15.4630) - AMD Ryzen 7 5800H with Radeon Graphics (16 Threads)
### Issue description
When multiple windows have been created in 4.3 using the 'Customize Run Instances' menu they often refuse to move when repositioned. Error occurs with an empty project, newly created with the 4.3 editor.
This machine is able to run 4 instances of a non-empty project without any problem in 4.2. Resource usage is not high while reproducing.
https://github.com/user-attachments/assets/71d7b2f0-36ba-46f2-8b7c-5ef5087ea4a6
### Steps to reproduce
In 4.3 stable, create a new project. In 'Debug' > 'Customize Run Instances' toggle 'Enable Multiple Instances' and set number to 4. Launch the project in the editor and try to move around the windows.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,confirmed | low | Critical |
2,508,254,979 | PowerToys | Mouse Without Borders: problem | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Two PCs: one desktop with two displays, one laptop.
The mouse pointer is not visible on one display.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,508,264,423 | go | runtime/pprof: block and mutex profile stacks sometimes have "gowrap" root frames | ### Go version
go version go1.23.0 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/nick.ripley/Library/Caches/go-build'
GOENV='/Users/nick.ripley/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/nick.ripley/go/pkg/mod'
GONOPROXY='redacted'
GONOSUMDB='redacted'
GOOS='darwin'
GOPATH='/Users/nick.ripley/go'
GOPRIVATE='redacted'
GOPROXY='redacted'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/nick.ripley/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/nick.ripley/sandbox/go/gowrap/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/f3/g91d13pd6kd3vdxts_gsgd1r0000gn/T/go-build334561956=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Ran the following program:
```go
package main
import (
"os"
"runtime"
"runtime/pprof"
"sync"
)
func block1(ch chan struct{}, wg *sync.WaitGroup) {
defer wg.Done()
ch <- struct{}{}
}
func block2(ch chan struct{}, wg *sync.WaitGroup) {
defer wg.Done()
<-ch
}
func main() {
runtime.SetBlockProfileRate(1)
ch := make(chan struct{})
var wg sync.WaitGroup
wg.Add(2)
go block1(ch, &wg)
go block2(ch, &wg)
wg.Wait()
pprof.Lookup("block").WriteTo(os.Stdout, 1)
}
```
Playground link: https://go.dev/play/p/f_80E-mywSz
EDIT: updated playground example that more clearly demonstrates how aggregation
changes with this issue: https://go.dev/play/p/2dMLkn2MOxt
### What did you see happen?
I saw a `gowrap` function in the root frame:
```
--- contention:
cycles/second=3158102000
352356 1 @ 0x476488 0x4cc711 0x4380eb 0x46f8e1
# 0x476487 sync.(*WaitGroup).Wait+0x47 /usr/local/go-faketime/src/sync/waitgroup.go:118
# 0x4cc710 main.main+0xf0 /tmp/sandbox2366213254/prog.go:27
# 0x4380ea runtime.main+0x28a /usr/local/go-faketime/src/runtime/proc.go:272
2512 1 @ 0x406312 0x4cc565 0x4cc785 0x46f8e1
# 0x406311 runtime.chanrecv1+0x11 /usr/local/go-faketime/src/runtime/chan.go:489
# 0x4cc564 main.block2+0x44 /tmp/sandbox2366213254/prog.go:17
# 0x4cc784 main.main.gowrap2+0x24 /tmp/sandbox2366213254/prog.go:26
```
### What did you expect to see?
I thought it would look roughly similar to what it looked like with Go 1.22:
```
--- contention:
cycles/second=2272924666
384662 1 @ 0x46bea8 0x4b4e91 0x437eeb 0x465641
# 0x46bea7 sync.(*WaitGroup).Wait+0x47 /usr/local/go-faketime/src/sync/waitgroup.go:116
# 0x4b4e90 main.main+0xf0 /tmp/sandbox198902667/prog.go:27
# 0x437eea runtime.main+0x28a /usr/local/go-faketime/src/runtime/proc.go:271
4166 1 @ 0x406852 0x4b4ce7 0x465641
# 0x406851 runtime.chanrecv1+0x11 /usr/local/go-faketime/src/runtime/chan.go:442
# 0x4b4ce6 main.block2+0x46 /tmp/sandbox198902667/prog.go:17
```
This change wasn't intentional, and I think it would affect how stacks from `go foo()` calls at different call sites are aggregated. Probably changed due to https://go.dev/cl/598515. I'm not sure exactly where `main.main.gowrap2` would have come from. The stack for the mutex and block profile gets modified a few times ([here](https://cs.opensource.google/go/go/+/master:src/runtime/pprof/pprof.go;l=410;drc=db07c8607a1da5f618a7a8c2fae3e557dc6cb1af) and [here](https://cs.opensource.google/go/go/+/master:src/runtime/pprof/pprof.go;l=449;drc=db07c8607a1da5f618a7a8c2fae3e557dc6cb1af)) and I'm not sure if we're either failing to remove that wrapper or adding it back. We may also need to drop the wrapper frames at sample time. | NeedsInvestigation,compiler/runtime | low | Critical |
2,508,274,858 | flutter | [webview_flutter_wkwebview] Legacy Video Playback Integration tests hang on iOS | The legacy tests for inline video are failing to complete:
```
43:16 +45 ~1: ... Video playback policy Video plays inline when allowsInlineMediaPlayback is true
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Video playback policy Video plays inline when allowsInlineMediaPlayback is true - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Video playback policy Video plays full screen when allowsInlineMediaPlayback is false - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Video playback policy (tearDownAll) - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Audio playback policy (setUpAll) - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Audio playback policy Auto media playback - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Audio playback policy Changes to initialMediaPlaybackPolicy are ignored - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Audio playback policy (tearDownAll) - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: getTitle - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: Programmatic Scroll setAndGetScrollPosition - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: NavigationDelegate can allow requests - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: NavigationDelegate onWebResourceError - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: NavigationDelegate onWebResourceError is not called with valid url - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: NavigationDelegate onWebResourceError only called for main frame - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: NavigationDelegate can block requests - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: NavigationDelegate supports asynchronous decisions - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: launches with gestureNavigationEnabled on iOS - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: target _blank opens in same window - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: can open new window and go back - did not complete [E]
43:16 +45 ~1: /Volumes/Work/s/w/ir/x/w/packages/packages/webview_flutter/webview_flutter_wkwebview/example/integration_test/legacy/webview_flutter_test.dart: (tearDownAll) - did not complete [E]
43:16 +45 ~1: Some tests failed.
[packages/webview_flutter/webview_flutter_wkwebview completed in 43m 22s]
```
It seems only the legacy tests fail while the same test with the public API still passes.
I was able to reproduce it locally as well. | team,platform-ios,p: webview,package,P3,c: disabled test,team-ios,triaged-ios | low | Critical |
2,508,322,022 | vscode | "Insert into Terminal" button on chat window's shell code blocks is hard to discover |
Type: <b>Feature Request</b>
In the chat window, when the chat handler responds with a shell code block, you can mouse over it and click "Insert into Terminal". This is great but hard to discover--it requires the mouse over. This may also be an accessibility issue.
We'd like to see a permanent button for inserting into terminal (and probably a corresponding button for code blocks' insert into editor).

VS Code version: Code - Insiders 1.93.0-insider (4849ca9bdf9666755eb463db297b69e5385090e3, 2024-09-04T13:13:15.344Z)
OS version: Windows_NT x64 10.0.22631
Modes:
| under-discussion,panel-chat | low | Major |
2,508,338,185 | pytorch | torch.library.custom_op: should be able to pass in a tags argument to specify tags | Or maybe a higher-level API for layout
cc @ezyang @chauhang @penguinwu @bdhirsh | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Minor |
2,508,363,946 | go | net: TestCloseRead failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestCloseRead"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737618629868254401)):
=== RUN TestCloseRead
=== PAUSE TestCloseRead
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,508,376,300 | next.js | generateStaticParams validation makes react-router migrations difficult | ### Link to the code that reproduces this issue
https://github.com/GuiBibeau/react-router-nextapp-reproduction
### To Reproduce
While following the [tutorial to migrate](https://nextjs.org/docs/app/building-your-application/upgrading/from-vite) from a Vite SPA (with react-router), I am able to build and render the application but hot reloading does not work.
1. clone the repo, install dependencies, and start the dev server
2. navigate to home page http://localhost:3000/
3. Click on `Data 1` link on the home page, SPA behavior will work as expected
4. refresh page or trigger hot reloading, page will break
### Current vs. Expected behavior
A first render of `next dev` will work. However, regular workflows like hot reloading or refreshing the page cause the validation from `generateStaticParams` to trigger. This makes an in-place migration from a Vite react-router SPA difficult, since it breaks regular dev workflows while the migration is happening.
It would be great to have a way to turn off this error for a few routes, in the Next config or by other means.
### Provide environment information
```bash
- **Operating System:**
- Platform: darwin
- Architecture: arm64
- Version: Darwin Kernel Version 23.5.0
- Available memory: 16384 MB
- Available CPU cores: 8
- **Binaries:**
- Node: 20.12.2
- npm: 10.5.0
- Yarn: 1.22.22
- pnpm: 9.1.2
- **Relevant Packages:**
- Next.js: 14.2.8
- ESLint Config Next: 14.2.8
- React: 18.3.1
- React DOM: 18.3.1
- TypeScript: 5.5.4
- **Next.js Config:**
- Output: export
```
### Which area(s) are affected? (Select all that apply)
Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
We are currently on a vite + react-router SPA setup looking to gradually move to next export and then to full SSR hosting with server components and all the modern stuff. Our goal is to migrate gradually over 12-18 months. | bug | low | Critical |
2,508,390,614 | kubernetes | [flaky test] [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim | ### Which jobs are flaking?
pull-kubernetes-e2e-gce
### Which tests are flaking?
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
### Since when has it been flaking?
unknown
### Testgrid link
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/127096/pull-kubernetes-e2e-gce/1831734659084455936
### Reason for failure (if possible)
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/127096/pull-kubernetes-e2e-gce/1831734659084455936
```
{ failed [FAILED] client rate limiter Wait returned an error: context deadline exceeded
In [It] at: k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:580 @ 09/05/24 17:16:05.146
}
```
```
I0905 17:16:03.310459 64689 resource_quota.go:2228] resource persistentvolumeclaims, expected 1, actual 0
I0905 17:16:05.146348 64689 resource_quota.go:580] Unexpected error:
<*fmt.wrapError | 0xc001170b40>:
client rate limiter Wait returned an error: context deadline exceeded
{
msg: "client rate limiter Wait returned an error: context deadline exceeded",
err: <context.deadlineExceededError>{},
}
[FAILED] in [It] - k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:580 @ 09/05/24 17:16:05.146
```
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig api-machinery
| sig/storage,kind/flake,triage/accepted | low | Critical |
2,508,448,534 | vscode | Allow opening chat code block in full editor | Allow splitting a code block from panel chat into its own editor so the user can explore it further. Can likely be added as a `...` action in the code block menu | feature-request,panel-chat | low | Minor |
2,508,450,363 | vscode | Restart extension will not update settings |
Does this issue occur when all extensions are disabled?: No
- VS Code Version: 1.92.2
- OS Version: Windows 11
Steps to Reproduce:
1. Install [this extension](https://mega.nz/file/bItUVYrK#KERRqXQDCYFEGlkcPddW1ceNvvW9qHJdlhHyKD8bBmc)
2. This extension contains a simple js that sends a notification containing a test setting value.
3. In this 0.0.1 version, the setting is not added, so as expected, it shows `undefined`.
4. Now install [0.0.2](https://mega.nz/file/yZ8VGQgZ#jb_fwVc6Rp89SjBpbh9OaB4gC-j-kaFKI4fh89Qm8ns)
5. In this version, I set a default value of 1 for the test setting. The code prompts to restart the extension, so I restart it. However, after the restart, the message still shows `undefined`.
6. Restart VS Code; now the notification shows the correct default value of 1 | extensions,config,under-discussion | low | Critical |
2,508,458,626 | opencv | Using Charuco Board to get camera intrinsics in opencv-python | ### Describe the doc issue
Hi, I am having issues finding updated documentation on the process for getting the calibration matrix and distortion values in OpenCV using Python. I am a bit new to this process as a whole and was hoping for some guidance and help.
Here I have attached the script I have been trying to build. This is what I have come up with so far with OpenCV 4.10; however, there are still some function calls that don't seem to exist in the aruco module anymore. Any help would be appreciated, or a sample script would be great!
```python
import cv2
import os
import numpy as np
import logging
import matplotlib.pyplot as plt
import matplotlib as mpl
from cv2 import aruco

# set logging level to info or above
logging.basicConfig(level=logging.INFO)

# set output path
output = r"C:\Users\patrnih1\Documents\Python_Scripts\Intrinsics\output.txt"

# ------------------------------
# ENTER YOUR REQUIREMENTS HERE:

class ArucoError(Exception):
    pass

def create_aruco(columns, rows, square_length, marker_length, dictionary):
    if dictionary == 6:
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
        board = cv2.aruco.CharucoBoard((columns, rows), square_length, marker_length, aruco_dict)
    elif dictionary == 4:
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        board = cv2.aruco.CharucoBoard((columns, rows), square_length, marker_length, aruco_dict)
    elif dictionary == 5:
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
        board = cv2.aruco.CharucoBoard((columns, rows), square_length, marker_length, aruco_dict)
    elif dictionary == 7:
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_7X7_250)
        board = cv2.aruco.CharucoBoard((columns, rows), square_length, marker_length, aruco_dict)
    else:
        raise ArucoError("Invalid dictionary")

    imboard = board.generateImage((2000, 200))
    cv2.imwrite(r"C:\Users\patrnih1\Documents\Python_Scripts\Intrinsics\charuco.png", imboard)
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    plt.imshow(imboard, cmap=mpl.colormaps['Greys'], interpolation="nearest")
    ax.axis("off")
    plt.show()
    return aruco_dict, board

# ...
PATH_TO_YOUR_IMAGES = '/Users/Ed/Downloads/Calibration_Images'
# ------------------------------

def calibrate_and_save_parameters(ARUCO_DICT, board):
    # Define the aruco dictionary and charuco board
    SQUARES_VERTICALLY = board.size[0]
    SQUARES_HORIZONTALLY = board.size[1]
    SQUARE_LENGTH = board.squareLength
    MARKER_LENGTH = board.markerLength
    dictionary = cv2.aruco.getPredefinedDictionary(ARUCO_DICT)
    board = cv2.aruco.CharucoBoard((SQUARES_VERTICALLY, SQUARES_HORIZONTALLY), SQUARE_LENGTH, MARKER_LENGTH, dictionary)
    params = cv2.aruco.DetectorParameters()

    # Load PNG images from folder
    image_files = [os.path.join(PATH_TO_YOUR_IMAGES, f) for f in os.listdir(PATH_TO_YOUR_IMAGES) if f.endswith(".png")]
    image_files.sort()  # Ensure files are in order

    all_charuco_corners = []
    all_charuco_ids = []

    for image_file in image_files:
        image = cv2.imread(image_file)
        image_copy = image.copy()
        marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(image, dictionary, parameters=params)

        # If at least one marker is detected
        if len(marker_ids) > 0:
            cv2.aruco.drawDetectedMarkers(image_copy, marker_corners, marker_ids)
            charuco_retval, charuco_corners, charuco_ids = cv2.aruco.interpolateCornersCharuco(marker_corners, marker_ids, image, board)
            if charuco_retval:
                all_charuco_corners.append(charuco_corners)
                all_charuco_ids.append(charuco_ids)

    # Calibrate camera
    retval, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(all_charuco_corners, all_charuco_ids, board, image.shape[:2], None, None)

    # Save calibration data
    np.save('camera_matrix.npy', camera_matrix)
    np.save('dist_coeffs.npy', dist_coeffs)

    fx = camera_matrix[0, 0]
    fy = camera_matrix[1, 1]
    cx = camera_matrix[0, 2]
    cy = camera_matrix[1, 2]
    distortion = dist_coeffs.flatten()

    with open(output, "w") as f:
        f.write("fx: {0}\n".format(fx))
        logging.info("fx: {0}".format(fx))
        f.write("fy: {0}\n".format(fy))
        logging.info("fy: {0}".format(fy))
        f.write("cx: {0}\n".format(cx))
        logging.info("cx: {0}".format(cx))
        f.write("cy: {0}\n".format(cy))
        logging.info("cy: {0}".format(cy))
        for k in range(len(distortion)):
            f.write("k{0}: {1}\n".format(k + 1, distortion[k]))
            logging.info("k{0}: {1}".format(k + 1, distortion[k]))

    # Iterate through displaying all the images
    for image_file in image_files:
        image = cv2.imread(image_file)
        undistorted_image = cv2.undistort(image, camera_matrix, dist_coeffs)
        cv2.imshow('Undistorted Image', undistorted_image)
        cv2.waitKey(0)

    cv2.destroyAllWindows()

calibrate_and_save_parameters()
```
### Fix suggestion
_No response_ | question (invalid tracker) | low | Critical |
2,508,480,725 | TypeScript | Inferred project that currently use "current directory of tsserver host" needs some special handling | ### 🔍 Search Terms
Inferred project, tsserver current directory
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Inferred projects can use tsserver's current directory (e.g. when the project root was not specified and the file is a dynamic file, or when there is a single inferred project for all loose files). Especially for dynamic files (untitled files, etc.), this means we look up package.json and other files in tsserver's current directory (which in most cases is the VS Code installation location). That seems undesirable, but it also involves how to reference these dynamic files and needs some rethinking. It also ends up looking in `node_modules/@types` for automatic type directives.
More prominent with #59844 and some of the tests added at https://github.com/microsoft/TypeScript/pull/59869/files#diff-d3671512f8dff77d5eabeb5694d7eae5e9b94e4d50a066c3e0205981fc102e22R59 show this as well.
### 📃 Motivating Example
Incorrect and unnecessary disk references to non-existent files, and watching of those locations
### 💻 Use Cases
1. What do you want to use this for?
2. What shortcomings exist with current approaches?
3. What workarounds are you using in the meantime?
| Needs Investigation | low | Minor |
2,508,486,774 | pytorch | DISABLED test_allocator_fuzz (__main__.TestCudaMallocAsync) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_allocator_fuzz&suite=TestCudaMallocAsync&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29729583895).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 15 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_allocator_fuzz`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 3859, in test_allocator_fuzz
action()
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 3851, in free
assert torch.all(v == x)
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCudaMallocAsync.test_allocator_fuzz
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,508,486,853 | pytorch | DISABLED test_embedding_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_embedding_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29728676549).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_embedding_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 4857, in test_embedding
self.common(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 430, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 424, in run
def run(*ex, **kwargs):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_embedding_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,508,493,550 | pytorch | Log missed optimization opportunities due to dynamic shapes in Inductor | ### 🐛 Describe the bug
Inductor has some optimizations that it applies only when something is statically known to be true (e.g., it will not introduce a guard to get the optimization). We should log these so that it is possible for a motivated user to see where they could usefully introduce extra guards to unlock more optimizations at the cost of more compile products.
### Versions
main
cc @chauhang @penguinwu | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,508,507,111 | PowerToys | PowerToys Run is slow at looking up VS Code workspaces | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Open run box, press `{` key to filter to VS Code workspaces, type the name of one, and observe how it takes 1-2 seconds to display results
### ✔️ Expected Behavior
Much faster results (like it used to be)
### ❌ Actual Behavior
Slow results
### Other Software
VS Code Insiders version info:
```
Version: 1.93.0-insider (system setup)
Commit: 4849ca9bdf9666755eb463db297b69e5385090e3
Date: 2024-09-04T13:13:15.344Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Windows_NT x64 10.0.22631
``` | Issue-Bug,Product-PowerToys Run,Needs-Triage,Run-Plugin | low | Major |
2,508,523,576 | godot | HTTPClient SSE requests (text/event-stream): read_response_body_chunk() fails | ### Tested versions
4.3.stable.official
### System information
Windows 11 64-bit 4.3.stable.official
### Issue description
If you try to make an SSE request (text/event-stream) with the `HTTPClient` object, the connection is opened, but then `read_response_body_chunk()` fails to return data because the object never enters the `STATUS_BODY` status.
My API is fine because I've tested it in Python with the `requests` library and it does work perfectly.
### Steps to reproduce
```gdscript
if httpclient_has_response or httpclient_status == HTTPClient.STATUS_BODY:
    var headers = httpclient.get_response_headers_as_dictionary()
    httpclient.poll()
    var chunk = httpclient.read_response_body_chunk()
```
Error:
```
Condition "status != STATUS_BODY" is true. Returning: PackedByteArray()
<C++ Source> core/io/http_client_tcp.cpp:572 @ read_response_body_chunk()
```
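For context, this is the `text/event-stream` framing the client has to parse once body chunks do arrive. The sketch below is illustrative only (plain JavaScript, not Godot's API): events are separated by a blank line, and each `data:` line carries payload, per the SSE part of the HTML spec.

```javascript
// Minimal sketch of text/event-stream framing (illustrative, not Godot code).
// Events are separated by a blank line; each "data:" line carries payload.
function parseSSE(chunk) {
  const events = [];
  for (const block of chunk.split(/\r?\n\r?\n/)) {
    const data = block
      .split(/\r?\n/)
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).replace(/^ /, ""))
      .join("\n");
    if (data) events.push(data);
  }
  return events;
}
```

Nothing here can run, though, until `HTTPClient` reaches `STATUS_BODY` and `read_response_body_chunk()` starts handing chunks back.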
### Minimal reproduction project (MRP)
Try e.g. https://github.com/gxian/HTTPSSEClient
Apparently it was working on old versions of Godot 4? | bug,topic:network | low | Critical |
2,508,527,252 | kubernetes | [flaky test] : [It] [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data | ### Which jobs are flaking?
pull-kubernetes-e2e-gce
### Which tests are flaking?
[sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
### Since when has it been flaking?
unknown
### Testgrid link
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/127096/pull-kubernetes-e2e-gce/1831749638269440000
### Reason for failure (if possible)
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/127096/pull-kubernetes-e2e-gce/1831749638269440000
```
{ failed [FAILED] "test -b /opt/0" should fail with exit code 1, but failed with error message "Timeout occurred"
stdout:
stderr: : Timeout occurred
In [It] at: k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:704 @ 09/05/24
```
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig storage | sig/storage,kind/flake,lifecycle/rotten,needs-triage | low | Critical |
2,508,582,499 | ui | [bug]: deprecated incoming | ### Describe the bug
Console:
XAxis: Support for defaultProps will be removed from function components in a future major release. Use JavaScript default parameters instead.
YAxis: Support for defaultProps will be removed from function components in a future major release. Use JavaScript default parameters instead.
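For reference, the warning is emitted for components that still attach `defaultProps` to a function component; React's suggested migration is plain JavaScript default parameters. A minimal sketch (the `XAxis` below is a hypothetical stand-in, not recharts' actual source):

```javascript
// Legacy pattern that triggers the warning on function components:
//   XAxis.defaultProps = { tickLine: true, tickMargin: 10 };
// Suggested replacement: JavaScript default parameters in the signature.
function XAxis({ tickLine = true, tickMargin = 10 } = {}) {
  // A real component would render; here we just echo the resolved props.
  return { tickLine, tickMargin };
}
```

Since the warning originates inside the library's own `XAxis`/`YAxis` components rather than in consuming code, it can only be fully fixed upstream in recharts.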
### Affected component/components
Some charts ex: Bar Chart
### How to reproduce
```tsx
"use client"

import { TrendingUp } from "lucide-react"
import { Bar, BarChart, CartesianGrid, XAxis } from "recharts"

import {
  Card,
  CardContent,
  CardDescription,
  CardFooter,
  CardHeader,
  CardTitle,
} from "@/components/ui/card"
import {
  ChartConfig,
  ChartContainer,
  ChartTooltip,
  ChartTooltipContent,
} from "@/components/ui/chart"

export const description = "A bar chart"

const chartData = [
  { month: "January", desktop: 186 },
  { month: "February", desktop: 305 },
  { month: "March", desktop: 237 },
  { month: "April", desktop: 73 },
  { month: "May", desktop: 209 },
  { month: "June", desktop: 214 },
]

const chartConfig = {
  desktop: {
    label: "Desktop",
    color: "hsl(var(--chart-1))",
  },
} satisfies ChartConfig

export function Component() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>Bar Chart</CardTitle>
        <CardDescription>January - June 2024</CardDescription>
      </CardHeader>
      <CardContent>
        <ChartContainer config={chartConfig}>
          <BarChart accessibilityLayer data={chartData}>
            <CartesianGrid vertical={false} />
            <XAxis
              dataKey="month"
              tickLine={false}
              tickMargin={10}
              axisLine={false}
              tickFormatter={(value) => value.slice(0, 3)}
            />
            <ChartTooltip
              cursor={false}
              content={<ChartTooltipContent hideLabel />}
            />
            <Bar dataKey="desktop" fill="var(--color-desktop)" radius={8} />
          </BarChart>
        </ChartContainer>
      </CardContent>
      <CardFooter className="flex-col items-start gap-2 text-sm">
        <div className="flex gap-2 font-medium leading-none">
          Trending up by 5.2% this month <TrendingUp className="h-4 w-4" />
        </div>
        <div className="leading-none text-muted-foreground">
          Showing total visitors for the last 6 months
        </div>
      </CardFooter>
    </Card>
  )
}
```

### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11
Nextjs version: "next": "14.2.5" No TS
pkg.json:
{
"dependencies": {
"@hookform/resolvers": "^3.9.0",
"@radix-ui/react-accordion": "^1.2.0",
"@radix-ui/react-alert-dialog": "^1.1.1",
"@radix-ui/react-aspect-ratio": "^1.1.0",
"@radix-ui/react-avatar": "^1.1.0",
"@radix-ui/react-checkbox": "^1.1.1",
"@radix-ui/react-collapsible": "^1.1.0",
"@radix-ui/react-context-menu": "^2.2.1",
"@radix-ui/react-dialog": "^1.1.1",
"@radix-ui/react-dropdown-menu": "^2.1.1",
"@radix-ui/react-hover-card": "^1.1.1",
"@radix-ui/react-icons": "^1.3.0",
"@radix-ui/react-label": "^2.1.0",
"@radix-ui/react-menubar": "^1.1.1",
"@radix-ui/react-navigation-menu": "^1.2.0",
"@radix-ui/react-popover": "^1.1.1",
"@radix-ui/react-progress": "^1.1.0",
"@radix-ui/react-radio-group": "^1.2.0",
"@radix-ui/react-scroll-area": "^1.1.0",
"@radix-ui/react-select": "^2.1.1",
"@radix-ui/react-separator": "^1.1.0",
"@radix-ui/react-slider": "^1.2.0",
"@radix-ui/react-slot": "^1.1.0",
"@radix-ui/react-switch": "^1.1.0",
"@radix-ui/react-tabs": "^1.1.0",
"@radix-ui/react-toast": "^1.2.1",
"@radix-ui/react-toggle": "^1.1.0",
"@radix-ui/react-toggle-group": "^1.1.0",
"@radix-ui/react-tooltip": "^1.1.2",
"axios": "^1.7.4",
"canvas-confetti": "^1.9.3",
"class-variance-authority": "^0.7.0",
"clsx": "^2.1.1",
"cmdk": "^0.2.1",
"date-fns": "^3.6.0",
"embla-carousel-react": "^8.1.7",
"fs": "^0.0.1-security",
"geist": "^1.3.1",
"input-otp": "^1.2.4",
"js-cookie": "^3.0.5",
"lucide-react": "^0.417.0",
"next": "14.2.5",
"next-themes": "^0.3.0",
"nextjs-toploader": "^1.6.12",
"nookies": "^2.5.2",
"react": "^18",
"react-day-picker": "^8.10.1",
"react-dom": "^18",
"react-hook-form": "^7.52.2",
"react-icons": "^5.2.1",
"react-resizable-panels": "^2.0.22",
"recharts": "^2.12.7",
"sonner": "^1.5.0",
"tailwind-merge": "^2.4.0",
"tailwindcss-animate": "^1.0.7",
"vaul": "^0.9.1",
"winston": "^3.14.2",
"zod": "^3.23.8"
},
"devDependencies": {
"eslint": "^8",
"eslint-config-next": "14.2.5",
"postcss": "^8",
"tailwindcss": "^3.4.1"
}
}
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,508,614,795 | kubernetes | [FG:InPlacePodVerticalScaling] Emit a events when resize status changes | /kind feature
To help with debugging in place pod resize, the Kubelet should emit events along with various resize status changes:
1. Resize accepted: report the resource deltas
2. Resize infeasible: which resources were over capacity
3. Resize deferred: which resources were over available capacity
5. Resize completed
6. Resize error
/sig node
/priority important-longterm
/milestone v1.32
/triage accepted | sig/node,kind/feature,priority/important-longterm,triage/accepted | low | Critical |
2,508,646,947 | bitcoin | Trying to run bitcoin qt on Windows and getting an AV | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behaviour
********* Start testing of AppTests *********
Config: Using QtTest library 5.15.10, Qt 5.15.10 (x86_64-little_endian-llp64 shared (dynamic) debug build; by MSVC 2022), windows 11
PASS : AppTests::initTestCase()
QINFO : AppTests::appTests() Backing up GUI settings to "C:\\Users\\danilog\\AppData\\Local\\Temp\\test_common_Bitcoin Core\\13cec4b43759d5d6b469fa4cd0b3ea34c2e956e50f2acca5a4f7af1934ed2746\\regtest\\guisettings.ini.bak"
QDEBUG : AppTests::appTests() requestInitialize : Requesting initialize
QDEBUG : AppTests::appTests() Running initialization in thread
QDEBUG : AppTests::appTests() initializeResult : Initialization result: true
QINFO : AppTests::appTests() Platform customization: "windows"
QWARN : AppTests::appTests() This plugin does not support propagateSizeHints()
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 5 (MENU). (The handle is invalid.)
(last message repeated 21 times)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 12 (TOOLBAR). (The handle is invalid.)
(last message repeated 3 times)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 17 (STATUS). (The handle is invalid.)
(last message repeated 2 times)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 14 (TRACKBAR). (The handle is invalid.)
(last message repeated 1 time)
QWARN : AppTests::appTests() This plugin does not support propagateSizeHints()
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 10 (TAB). (The handle is invalid.)
(last message repeated 10 times)
QWARN : AppTests::appTests() This plugin does not support raise()
(last message repeated 1 time)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 4 (LISTVIEW). (The handle is invalid.)
(last message repeated 1 time)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 8 (SCROLLBAR). (The handle is invalid.)
(last message repeated 23 times)
QWARN : AppTests::appTests() This plugin does not support grabbing the keyboard
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 12 (TOOLBAR). (The handle is invalid.)
(last message repeated 3 times)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 17 (STATUS). (The handle is invalid.)
(last message repeated 2 times)
QSYSTEM: AppTests::appTests() OpenThemeData() failed for theme 10 (TAB). (The handle is invalid.)
(last message repeated 4 times)
QWARN : AppTests::appTests() This plugin does not support propagateSizeHints()
QDEBUG : AppTests::appTests() requestShutdown : Requesting shutdown
QDEBUG : AppTests::appTests() Running Shutdown in thread
QDEBUG : AppTests::appTests() Shutdown finished
PASS : AppTests::appTests()
PASS : AppTests::cleanupTestCase()
Totals: 3 passed, 0 failed, 0 skipped, 0 blacklisted, 2867ms
********* Finished testing of AppTests *********
********* Start testing of OptionTests *********
Config: Using QtTest library 5.15.10, Qt 5.15.10 (x86_64-little_endian-llp64 shared (dynamic) debug build; by MSVC 2022), windows 11
PASS : OptionTests::initTestCase()
PASS : OptionTests::migrateSettings()
PASS : OptionTests::integerGetArgBug()
PASS : OptionTests::parametersInteraction()
PASS : OptionTests::extractFilter()
PASS : OptionTests::cleanupTestCase()
Totals: 6 passed, 0 failed, 0 skipped, 0 blacklisted, 54ms
********* Finished testing of OptionTests *********
********* Start testing of URITests *********
Config: Using QtTest library 5.15.10, Qt 5.15.10 (x86_64-little_endian-llp64 shared (dynamic) debug build; by MSVC 2022), windows 11
PASS : URITests::initTestCase()
PASS : URITests::uriTests()
PASS : URITests::cleanupTestCase()
Totals: 3 passed, 0 failed, 0 skipped, 0 blacklisted, 1ms
********* Finished testing of URITests *********
********* Start testing of RPCNestedTests *********
Config: Using QtTest library 5.15.10, Qt 5.15.10 (x86_64-little_endian-llp64 shared (dynamic) debug build; by MSVC 2022), windows 11
PASS : RPCNestedTests::initTestCase()
PASS : RPCNestedTests::rpcNestedTests()
PASS : RPCNestedTests::cleanupTestCase()
Totals: 3 passed, 0 failed, 0 skipped, 0 blacklisted, 227ms
********* Finished testing of RPCNestedTests *********
********* Start testing of WalletTests *********
Config: Using QtTest library 5.15.10, Qt 5.15.10 (x86_64-little_endian-llp64 shared (dynamic) debug build; by MSVC 2022), windows 11
PASS : WalletTests::initTestCase()
QDEBUG : WalletTests::walletTests() NotifyUnload
QWARN : WalletTests::walletTests() This plugin does not support propagateSizeHints()
### Expected behaviour
to not crash
### Steps to reproduce
Follow the MSVC compilation process on Windows 11 23H2
### Relevant log output
00 Qt5Widgetsd!qt_getWindowsSystemMenu
01 Qt5Widgetsd!QMessageBox::showEvent
02 Qt5Widgetsd!QWidget::event
03 Qt5Widgetsd!QMessageBox::event
04 Qt5Widgetsd!QApplicationPrivate::notify_helper
05 Qt5Widgetsd!QApplication::notify
06 Qt5Cored!QCoreApplication::notifyInternal2
07 Qt5Cored!QCoreApplication::sendEvent
08 Qt5Widgetsd!QWidgetPrivate::show_helper
09 Qt5Widgetsd!QWidgetPrivate::setVisible
0a Qt5Widgetsd!QWidget::setVisible
0b Qt5Widgetsd!QDialog::setVisible
0c Qt5Widgetsd!QWidget::show
0d Qt5Widgetsd!QDialog::exec
0e test_bitcoin_qt!SendConfirmationDialog::exec
0f test_bitcoin_qt!SendCoinsDialog::sendButtonClicked
10 test_bitcoin_qt!SendCoinsDialog::qt_static_metacall
11 Qt5Cored!QMetaMethod::invoke
12 Qt5Cored!QMetaObject::invokeMethod
13 Qt5Cored!QMetaObject::invokeMethod
14 test_bitcoin_qt!`anonymous namespace'::SendCoins
15 test_bitcoin_qt!`anonymous namespace'::TestGUI
16 test_bitcoin_qt!`anonymous namespace'::TestGUI
17 test_bitcoin_qt!WalletTests::walletTests
18 test_bitcoin_qt!WalletTests::qt_static_metacall
19 Qt5Cored!QMetaMethod::invoke
1a Qt5Cored!QMetaMethod::invoke
1b Qt5Testd!QTest::TestMethods::invokeTestOnData
1c Qt5Testd!QTest::TestMethods::invokeTest
1d Qt5Testd!QTest::TestMethods::invokeTests
1e Qt5Testd!QTest::qRun
1f Qt5Testd!QTest::qExec
20 test_bitcoin_qt!main
21 test_bitcoin_qt!invoke_main
22 test_bitcoin_qt!__scrt_common_main_seh
23 test_bitcoin_qt!__scrt_common_main
24 test_bitcoin_qt!mainCRTStartup
25 KERNEL32!BaseThreadInitThunk
26 ntdll!RtlUserThreadStart
0:000> r
rax=0000000000000000 rbx=0000000000000000 rcx=0000015726c5f400
rdx=0000000000000000 rsi=00000042021f8eb0 rdi=00000042021f5b50
rip=00007ffc5a7d0e81 rsp=00000042021f5a70 rbp=0000000000000000
r8=000001572cdaf901 r9=0000000000000001 r10=0000015724fe0000
r11=00000042021f54e0 r12=0000000000000000 r13=0000000000000000
r14=0000000000000000 r15=0000000000000000
iopl=0 nv up ei pl nz na po nc
cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206
Qt5Widgetsd!qt_getWindowsSystemMenu+0x31:
00007ffc`5a7d0e81 488b00 mov rax,qword ptr [rax] ds:00000000`00000000=????????????????
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
latest master
### Operating system and version
Windows 11 23H2
### Machine specifications
_No response_ | Windows,Tests | low | Critical |
2,508,688,797 | ollama | On ollama.com's profile settings page, the email address is shown mangled | ### What is the issue?
On the profile settings page (where you edit your name and bio), just below the top header the page first shows:
username
email address
of the logged-in user, followed by the edit fields.
My email is a Gmail address that ends with a 7, but that trailing 7 is not displayed on the page; only the address without the final 7 is shown.
Regards
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug,ollama.com | low | Minor |
2,508,714,572 | pytorch | First all_to_all_single / barrier takes a large amount of time. | ### 🐛 Describe the bug
I'm getting profiles like this:
<img width="1224" alt="Screenshot 2024-09-05 at 1 16 44 PM" src="https://github.com/user-attachments/assets/6785039e-c902-42b0-bc35-9196ce1d85e9">
The first `all_to_all_single` / `barrier` are taking large amounts of time (and invoking thousands of `cudaMemcpyAsync`, `cudaGetDeviceCount`, and more), but subsequent `torch.distributed` operations are near instantaneous.
Here's a reproducible example:
```python
import os

import torch
import torch.distributed as dist

os.environ["CUDA_VISIBLE_DEVICES"] = os.environ["LOCAL_RANK"]
print(f"set cuda visible devices to {os.environ['CUDA_VISIBLE_DEVICES']}")


def main():
    # Initialize the distributed environment
    dist.init_process_group(backend='nccl')

    # Get the world size and rank of the current process
    world_size = dist.get_world_size()
    rank = dist.get_rank()

    # Create a new process group
    pg = dist.new_group(ranks=list(range(world_size)))

    # Create input tensor with unique values for each rank
    input_tensor = torch.tensor([rank * 10 + i for i in range(world_size)], dtype=torch.float32).cuda()

    # Create output tensor to store the result
    output_tensor = torch.zeros(world_size, dtype=torch.float32).cuda()

    # Synchronize before starting the timer
    torch.cuda.synchronize()

    # Create CUDA events for timing
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)

    # Perform timed all-to-all communication for 2 iterations
    print("Timing ALL_TO_ALL_SINGLE for 2 iterations...")
    for iteration in range(2):
        # Start timing
        start_event.record()

        # Perform all-to-all communication
        # dist.all_to_all_single(output_tensor, input_tensor, group=pg)
        dist.barrier(pg)

        # End timing
        end_event.record()

        # Synchronize to ensure the operation is complete
        torch.cuda.synchronize()

        # Calculate elapsed time
        elapsed_time = start_event.elapsed_time(end_event) / 1000  # Convert to seconds

        # Print timing results
        print(f"Rank {rank}: Iteration {iteration + 1} time: {elapsed_time:.6f} seconds")

    # Print the final result for each rank
    print(f"Rank {rank}: Input: {input_tensor}, Output: {output_tensor}")

    # Assert that the result is correct
    expected = torch.tensor([i * 10 + rank for i in range(world_size)], dtype=torch.float32).cuda()
    assert torch.all(output_tensor.eq(expected)), f"Rank {rank}: Result does not match expected values"

    # Clean up the distributed environment
    dist.destroy_process_group(pg)
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```
run it with:
```
torchrun --nproc_per_node=4 --standalone -m script_name
```
It should give output like:
```
set cuda visible devices to 0
set cuda visible devices to 2
set cuda visible devices to 3
set cuda visible devices to 1
Timing ALL_TO_ALL_SINGLE for 2 iterations...
Timing ALL_TO_ALL_SINGLE for 2 iterations...
Timing ALL_TO_ALL_SINGLE for 2 iterations...
Timing ALL_TO_ALL_SINGLE for 2 iterations...
Rank 2: Iteration 1 time: 6.860195 seconds
Rank 3: Iteration 1 time: 6.856339 seconds
Rank 0: Iteration 1 time: 6.771554 seconds
Rank 1: Iteration 1 time: 6.891210 seconds
Rank 2: Iteration 2 time: 0.000255 seconds
Rank 1: Iteration 2 time: 0.000166 seconds
Rank 0: Iteration 2 time: 0.000229 seconds
Rank 3: Iteration 2 time: 0.000307 seconds
Rank 0: Input: tensor([0., 1., 2., 3.], device='cuda:0'), Output: tensor([ 0., 10., 20., 30.], device='cuda:0')
Rank 3: Input: tensor([30., 31., 32., 33.], device='cuda:0'), Output: tensor([ 3., 13., 23., 33.], device='cuda:0')
Rank 1: Input: tensor([10., 11., 12., 13.], device='cuda:0'), Output: tensor([ 1., 11., 21., 31.], device='cuda:0')
Rank 2: Input: tensor([20., 21., 22., 23.], device='cuda:0'), Output: tensor([ 2., 12., 22., 32.], device='cuda:0')
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-112-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 176
On-line CPU(s) list: 0-175
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468V
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 44
Socket(s): 2
Stepping: 8
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.1 MiB (88 instances)
L1i cache: 2.8 MiB (88 instances)
L2 cache: 176 MiB (88 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-87
NUMA node1 CPU(s): 88-175
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] lion-pytorch==0.1.4
[pip3] numpy==1.26.4
[pip3] onnx==1.15.0
[pip3] onnxruntime==1.17.0
[pip3] optree==0.11.0
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] stlpips_pytorch==0.0.2
[pip3] torch==2.4.0+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torch-optimizer==0.3.0
[pip3] torch-summary==1.4.5
[pip3] torchaudio==2.4.0+cu124
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.3.0.post0
[pip3] torchvision==0.19.0+cu124
[pip3] triton==3.0.0
[conda] Could not collect
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,508,740,342 | storybook | [Documentation]: Storybook backend not found when using static build but works in development | ### Describe the problem
Hello. I'm working on an addon that accesses Storybook's backend through a middleware.js file in order to make requests to external sources. This works perfectly in development, but after producing a static build, any requests made from the frontend return a 404 error and never reach the backend. I'm wondering if there is an alternative, undocumented method for making requests that I'm unaware of, since the Storybook Designs addon seems to make requests without a middleware.js file. Any help is much appreciated! Thanks!
### Additional context
_No response_ | documentation,needs triage | low | Critical |
2,508,771,708 | vscode | SCM Graph - support incoming/outgoing changes | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- Version: 1.93.0
- Commit: 4849ca9bdf9666755eb463db297b69e5385090e3
- Date: 2024-09-04T13:02:38.431Z
- Electron: 30.4.0
- ElectronBuildId: 10073054
- Chromium: 124.0.6367.243
- Node.js: 20.15.1
- V8: 12.4.254.20-electron.0
- OS: Linux x64 6.8.0-40-generic snap
Steps to Reproduce:
1. Use 1.93.0
2. Have incoming changes in a source control repository
On 1.92.0 and earlier, when the experimental source control graph/history was disabled (`"scm.showHistoryGraph": false`), it was possible to view incoming changes just as easily as outgoing / staged / unstaged changes. Now that the source control graph is no longer optional, it is not possible to view them in the same way. Removing this seems quite unnecessary, since the source control graph panel itself can be hidden or at least moved out of view. Furthermore, outgoing changes are still viewable with the same interface, so it makes sense to make incoming changes similarly visible.
A repository with incoming changes, but not shown:

A repository with outgoing changes, which are shown:

| feature-request,scm | medium | Critical |
2,508,778,040 | tauri | [bug] Call to Dynamic Update Server needs to URL encode + in version number | ### Describe the bug
If you have a Dynamic Update Server defined like this:
```
"endpoints": [ "https://api.mydomain.com/update?current_version={{current_version}}&target={{target}}&arch={{arch}}" ],
```
And a semver version with build number like this: `1.8.0+1`, the request will get sent like this:
```
GET /update?current_version=1.8.0+1&target=windows&arch=x86_64
```
This will cause the `+` to be converted into a blank space in most servers. Therefore you get an invalid semver of `1.8.0 1`. I believe it should be sent like this:
```
GET /update?current_version=1.8.0%2B1&target=windows&arch=x86_64
```
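To illustrate the two behaviors, here is a quick sketch using Python's standard `urllib` (Python is used only to demonstrate the generic URL-encoding rules; the updater itself is Rust):

```python
from urllib.parse import parse_qs, quote

# A literal '+' in a query string is decoded as a space by standard parsers:
decoded = parse_qs("current_version=1.8.0+1")["current_version"][0]
print(decoded)  # "1.8.0 1" - no longer a valid semver

# Percent-encoding the version before building the URL preserves the '+':
encoded = quote("1.8.0+1", safe="")
print(encoded)  # "1.8.0%2B1"
round_tripped = parse_qs("current_version=" + encoded)["current_version"][0]
print(round_tripped)  # "1.8.0+1"
```

So the updater should percent-encode the `{{current_version}}` substitution before sending the request.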
### Reproduction
_No response_
### Expected behavior
The ability to verify with an update server using semver build numbers.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 X64
✔ WebView2: 128.0.2739.63
✔ MSVC: Visual Studio Build Tools 2022
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.11.0
- npm: 10.8.1
[-] Packages
- tauri [RUST]: 1.7.2
- tauri-build [RUST]: 1.5.4
- wry [RUST]: 0.24.11
- tao [RUST]: 0.16.5
- @tauri-apps/api [NPM]: 1.6.0
- @tauri-apps/cli [NPM]: 1.6.1
[-] App
```
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,508,815,086 | electron | [Bug]: resizable=false window feature makes windows not resizable though that's not in the spec anymore | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
`window.open(url, name, "resizable=false")` should create a resizable window
### Actual Behavior
`window.open(url, name, "resizable=false")` creates a window that isn't resizable
### Testcase Gist URL
_No response_
### Additional Information
The modern [spec](https://html.spec.whatwg.org/multipage/nav-history-apis.html#dom-open-dev) doesn't mention this feature besides saying it controls whether a page opens in a popup (vs in a tab).
[This comment](https://github.com/whatwg/html/issues/2464#issuecomment-289168412) says:
>Many of the other window features that Chrome used to check (resizable, menubar, status, scrollbars) are things that aren't configurable anymore: popups are always resizable
I think this used to be a supported feature and then got removed, so Electron should probably stop respecting it.
(can observe this in the default Fiddle by opening devtools and running `window.open("https://example.com", "", "resizable=false");`) | platform/windows,bug :beetle: | low | Critical |
2,508,915,844 | pytorch | [export] is_fx_tracing evaluates to true when exporting non-strict | ### 🐛 Describe the bug
Symbolic trace's `is_fx_tracing` flag evaluates to `True` when tracing with non-strict export, which causes some errors when tracing NJT. The actual code which is causing issues is [here](https://github.com/pytorch/pytorch/blob/058a69d91a38f0efaf64c8b964b275dc7e1c65d0/torch/nested/__init__.py#L389-L394), but here's a simple repro:
```python
def test_is_fx_tracing(self):
    from torch.fx._symbolic_trace import is_fx_tracing

    class M(torch.nn.Module):
        def forward(self, x):
            if is_fx_tracing():
                raise RuntimeError("moo")
            else:
                return x + x

    ep = torch.export.export(M(), (torch.ones(3),), strict=False)  # RuntimeError raised
```
A simple fix is to replace the `if is_fx_tracing()` with `if not torch.compiler.is_compiling() and is_fx_tracing()`, since the `not torch.compiler.is_compiling()` will evaluate to `False`, but I find it weird that `is_fx_tracing()` evaluates to `True`.
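To spell out the boolean logic of that suggested guard, here is a tiny stand-in sketch (the two parameters are hypothetical stand-ins for `torch.compiler.is_compiling()` and `is_fx_tracing()`; the real fix would of course call the actual functions):

```python
def takes_tracing_branch(is_compiling: bool, is_fx_tracing: bool) -> bool:
    # Proposed guard: only take the symbolic-tracing branch when we are
    # NOT inside compile/export, i.e. `not is_compiling() and is_fx_tracing()`.
    return not is_compiling and is_fx_tracing

# Non-strict export: both flags are True today, so a bare `if is_fx_tracing()`
# fires, while the proposed guard correctly evaluates to False:
print(takes_tracing_branch(True, True))    # False
# A plain torch.fx.symbolic_trace still takes the tracing branch:
print(takes_tracing_branch(False, True))   # True
# Ordinary eager execution is unaffected:
print(takes_tracing_branch(False, False))  # False
```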
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @YuqingJ
### Versions
main | triaged,oncall: pt2,oncall: export | low | Critical |
2,508,916,010 | godot | Dictionary.duplicate(true) still does not make a recursive deep copy | ### Tested versions
Personal experience: 4.3
I have also seen [documentation of this bug from over 4 years ago](https://github.com/godotengine/godot/issues/37162), as well as some [more recent reports](https://github.com/mphe/SmartShape2D/issues/32).
### System information
Windows 11
### Issue description
The documentation for the Dictionary class's `duplicate` method states: "If deep is true, inner Dictionary and [Array](https://docs.godotengine.org/en/stable/classes/class_array.html#class-array) keys and values are also copied, **recursively**." However, objects in dictionaries created via `Dictionary.duplicate(true)` still act as references to those in the duplicated dictionary, implying that either a shallow copy or a non-recursive deep copy was made instead of the recursive deep copy prescribed by the documentation.
### Steps to reproduce
Create a dictionary. Put an array of objects (passed by reference) into the dictionary. Call `duplicate(true)` on the dictionary. Alter an object in the array inside the result of the `duplicate` call. Print or otherwise display the corresponding object in the original dictionary: it will appear as if a shallow copy had been made, despite the documented behavior specifying a deep copy.
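In lieu of an MRP, the expected deep-copy semantics can be illustrated with Python's `copy` module (an analogy only; the API under report is Godot's `Dictionary.duplicate`):

```python
import copy

inner = {"hp": 10}
original = {"units": [inner]}     # a dict holding an array of objects

shallow = copy.copy(original)     # analogous to duplicate(false)
deep = copy.deepcopy(original)    # analogous to duplicate(true)

# Mutating through the shallow copy leaks back into the original...
shallow["units"][0]["hp"] = 99
print(original["units"][0]["hp"])  # 99

# ...while mutating through the deep copy must leave the original untouched:
deep["units"][0]["hp"] = 1
print(original["units"][0]["hp"])  # still 99
```

Godot's `duplicate(true)` currently behaves like the shallow case above.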
### Minimal reproduction project (MRP)
I do not currently have access to a Godot editor to make an MRP. I hope the issue is simple enough for my description to be sufficient.
2,508,933,213 | flutter | Text Duplication Issue with Microsoft Translate Keyboard in Flutter Text Fields | ### Steps to reproduce
1. Create a basic Flutter app with a text field.
2. On a device with the Microsoft Translate keyboard installed, set it as the active keyboard.
3. Focus on the text field in the Flutter app.
4. Type any word or phrase using the Microsoft Translate keyboard.
5. Observe that the entered text appears twice in the text field.
### Expected results
The text should appear only once in the text field, as entered.
### Actual results
The text is duplicated, with each word or phrase appearing twice.
### Code sample
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text('Text Field Example'),
        ),
        body: Center(
          child: Padding(
            padding: EdgeInsets.all(20.0),
            child: TextField(
              decoration: InputDecoration(
                hintText: 'Enter text here',
                border: OutlineInputBorder(),
              ),
            ),
          ),
        ),
      ),
    );
  }
}
```
### Screenshots or Video
video:
https://github.com/user-attachments/assets/004c5ff4-ebf7-450a-818b-4882c777451f
In this video, I'm using a subtitle editor app. When I want to translate text to any language using the Microsoft Translator keyboard, I encounter an issue. After selecting the text and clicking on translate, the translated text appears twice.
For example, this issue occurs when I translate from English to Kurdish Central, English to Arabic, or any other language combination.
Steps to reproduce:
1. Open the any app created in flutter have Text Field
2. Select a piece of text
3. Use the Microsoft Translator keyboard to translate the selected text
4. Observe that the translated text appears twice in the text field
This issue consistently occurs regardless of the source or target language.
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.19045.3570], locale en-US)
• Flutter version 3.24.1 on channel stable at C:\src\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (2 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2 | a: text input,platform-android,a: internationalization,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.25 | low | Major |
2,508,967,014 | pytorch | [Distributed] Non-0 ranks creating CUDA contexts on device 0 | ### 🐛 Describe the bug
Symptom:

Each non-0 rank is occupying ~ 1GB memory on GPU 0.
### Versions
Simple repro:
`torchrun --standalone --nproc-per-node 4 repro.py`
```python
import torch
import os
import torch.distributed.distributed_c10d as c10d


def repro(rank, world_size):
    device = torch.device("cuda:%d" % rank)
    # torch.cuda.set_device(device)
    c10d.init_process_group(
        backend="nccl", rank=rank, world_size=world_size, device_id=device,
    )
    x = torch.ones((10,), device=device)
    c10d.all_reduce(x)
    c10d.destroy_process_group()
    print("clean exit")


if __name__ == "__main__":
    repro(int(os.environ["RANK"]), int(os.environ["WORLD_SIZE"]))
```
Note:
The issue can be avoided if the user uncomments the `torch.cuda.set_device(device)` line, but calling `set_device` is not (and has never been) a requirement of distributed.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | high priority,triage review,oncall: distributed | low | Critical |
2,508,968,176 | PowerToys | Bug keyboard manager - Please help me | ### Microsoft PowerToys version
v0.84.0
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Hi, I have a problem and I really need help.
I recently changed my keyboard settings using the PowerToys keyboard manager on my Dell notebook (Windows 11). I changed the number keys on the right of the keyboard because I wanted to make it easier to play a game. I changed the Numpad 1, Numpad 2, Numpad 3 and Numpad 4 keys to the left, down, right and up keys, respectively. I did this because my direction pads are very small and I wanted greater comfort in gameplay. However, when I went to remove this remapping, the keys no longer returned, even though I deleted the settings, restarted my notebook, deleted PowerToy and installed it again, and nothing returned to normal, the keys did not return. What can I do? Please, help me.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,508,979,037 | vscode | `@builtin @enabled` doesn't work | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.0
- OS Version: Windows 11
Steps to Reproduce:
1. go to the Extensions view
2. type `@builtin @enabled`
Expected:
All _enabled_ builtin extensions to show up
Actual:
nothing

the same is also true for `@builtin @disabled` etc | feature-request,extensions | low | Critical |
2,508,988,794 | rust | `#[cfg(...)]` attribute is incorrectly removed from a function parameter inside of the derive macro | I tried this code:
```rust
// my_macro proc-macro crate
use proc_macro::TokenStream;

#[proc_macro_derive(Derive)]
pub fn derive(input: TokenStream) -> TokenStream {
    eprintln!("{input}");
    TokenStream::default()
}

// lib.rs
#[derive(my_macro::Derive)]
enum Enum {
    X = {
        fn foo(#[cfg(any())] arg1: (), arg2: ()) {}
        0
    },
}
```
I'm not sure what exactly I expected to see in the `stderr` output when I compile the crate. Maybe this with the `#[cfg(...)]` attribute left intact on the function
```rust
enum Enum { X = { fn foo(#[cfg(any())] arg1: (), arg2: ()) {} 0 }, }
```
or this with the `#[cfg(...)]` attribute completely removed together with the syntax it was placed on:
```rust
enum Enum { X = { fn foo(arg2: ()) {} 0 }, }
```
Instead, the output is invalid rust code:
```rust
enum Enum { X = { fn foo(, arg2: ()) {} 0 }, }
```
Notice that the `#[cfg(...)]` was removed, but... the comma after the function parameter was not removed. This output results in an invalid token tree that is not parsable by `syn`.
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
## Context
Why would I ever write such code? I'd like to support conditional compilation in my proc macro, [`bon`](https://github.com/elastio/bon), which generates a builder from a function. But... since it's a proc macro attribute, the `#[cfg(...)]` and `#[cfg_attr(...)]` attributes aren't automatically removed when the macro runs.
So I thought I'd workaround it by delegating to a derive macro that accepts the function item in the expression position as the first item of the block for the default value of the enum's variant. This is because derive macros benefit from automatic expansion of `#[cfg(...)/cfg_attr(...)]` attributes before they run. Thus, by using a derive macro wrapper I could get the results of cfg evaluation this way.
I could work around that by passing the item as an argument to the proc-macro derive... But IDEs and Rust Analyzer don't work this way. They somehow mess up span information. | A-macros,T-compiler,C-bug,A-proc-macros,A-cfg | low | Minor |
2,509,002,890 | PowerToys | [Workspaces] Saving and loading File Explorer with tabs | ### Description of the new feature / enhancement
All tabs in the File Explorer should be saved and opened when launching the workspace.
### Scenario when this would be used?
Currently, Workspace is able to open multiple File Explorer instances. However, adding a feature to allow saving/loading File Explorer with (the user saved) tabs would be nice for organization or small screen setups.
### Supporting information
_No response_ | Needs-Triage,Tracker,Product-Workspaces | low | Minor |
2,509,033,847 | go | cmd/go: TestScript/cover_pkgall_imports failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/cover_pkgall_imports"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737623781897071233)):
=== RUN TestScript/cover_pkgall_imports
=== PAUSE TestScript/cover_pkgall_imports
=== CONT TestScript/cover_pkgall_imports
script_test.go:135: 2024-09-05T15:30:01Z
script_test.go:137: $WORK=/home/swarming/.swarming/w/ir/x/t/cmd-go-test-1520434874/tmpdir410918943/cover_pkgall_imports3152574167
script_test.go:159:
PATH=/home/swarming/.swarming/w/ir/x/t/cmd-go-test-1520434874/tmpdir410918943/testbin:/home/swarming/.swarming/w/ir/x/w/goroot/bin:/home/swarming/.swarming/w/ir/x/w/goroot/bin:/home/swarming/.swarming/w/ir/x/w/goroot/bin:/home/swarming/.swarming/w/ir/cache/tools/bin:/home/swarming/.swarming/w/ir/bbagent_utility_packages:/home/swarming/.swarming/w/ir/bbagent_utility_packages/bin:/home/swarming/.swarming/w/ir/cipd_bin_packages:/home/swarming/.swarming/w/ir/cipd_bin_packages/bin:/home/swarming/.swarming/w/ir/cipd_bin_packages/cpython3:/home/swarming/.swarming/w/ir/cipd_bin_packages/cpython3/bin:/home/swarming/.swarming/w/ir/cache/cipd_client:/home/swarming/.swarming/w/ir/cache/cipd_client/bin:/home/swarming/.swarming/cipd_cache/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOME=/no-home
CCACHE_DISABLE=1
GOARCH=amd64
...
panic: runtime error: slice bounds out of range [:4294969391] with capacity 33271
goroutine 1 gp=0xc0000081c0 m=0 mp=0x814900 [running]:
panic({0x645900?, 0xc0000180f0?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:804 +0x2a5 fp=0xc0000df1b8 sp=0xc0000df108 pc=0x4c49a5
runtime.goPanicSliceAcap(0x10000082f, 0x81f7)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:141 +0x97 fp=0xc0000df1f8 sp=0xc0000df1b8 pc=0x45c917
internal/coverage/cfile.(*emitState).VisitFuncs(0xc0000bc1e0, 0xc0000122a0)
/home/swarming/.swarming/w/ir/x/w/goroot/src/internal/coverage/cfile/emit.go:478 +0xeb0 fp=0xc0000df450 sp=0xc0000df1f8 pc=0x5c6030
internal/coverage/encodecounter.(*CoverageDataWriter).writeCounters(0xc000014190, {0x69cbc0, 0xc0000bc1e0}, 0xc0000a2060)
...
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:193 +0x27e fp=0xc000069fe0 sp=0xc000069e20 pc=0x42431e
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x4cfca1
created by runtime.createfing in goroutine 1
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:163 +0xab
FAIL example.com/cov/onlytest 0.136s
ok example.com/cov/withtest 0.462s coverage: 16.3% of statements in all
FAIL
script_test.go:159: FAIL: testdata/script/cover_pkgall_imports.txt:13: go test -coverpkg=all ./...: exit status 1
--- FAIL: TestScript/cover_pkgall_imports (84.29s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,509,160,640 | puppeteer | [Bug]: BrowserWebSocketTransport errors are ignored | ### Minimal, reproducible example
```TypeScript
// websocket of connected chrome instance receives error as result of message
```
### Background
Working with `@cloudflare/puppeteer` I got a WebSocket error when attempting to capture a screenshot that was beyond the size limit for a workerd WebSocket message (see https://github.com/cloudflare/workerd/issues/2667)
The resulting error was ignored by puppeteer, meaning my automation was left in a hanging state (it could not clean up the browser connection or surface the capture error).
After some digging through Puppeteer, this comment seems to indicate the problem: https://github.com/puppeteer/puppeteer/blob/main/packages/puppeteer-core/src/common/BrowserWebSocketTransport.ts#L39
My naive solution was to call `onmessage` with something non-protocol-like that could be caught in `Connection`, which would subsequently call `#callbacks.clear()`. Would you be open to a pull request along those lines, or am I completely missing the complexity of such a change? :)
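Until transport errors are surfaced, a timeout guard kept my automations from hanging forever. A minimal sketch (`withTimeout` is a hypothetical helper of my own, not a Puppeteer API):

```typescript
// Race a promise against a timer so a swallowed transport error can't hang us forever.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer so the process can exit.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (assumed): const shot = await withTimeout(page.screenshot(), 30_000, "screenshot");
```

This doesn't fix the root cause, but it converts a silent hang into a catchable error so the browser connection can still be torn down.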
### Expectation
Transport errors should probably result in a TargetClose error?
### Reality
Transport errors result in unresolved promises and hanging automations.
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
@cloudflare/puppeteer@0.0.14
### Node version
20.10.0
### Package manager
npm
### Package manager version
10.2.3
### Operating system
macOS | bug,confirmed,P3 | low | Critical |
2,509,172,127 | TypeScript | False positive for "Unreachable code detected" for code following `switch`/`case` with `allowUnreachableCode: false` | ### 🔎 Search Terms
"false positive allowUnreachableCode"
"allowUnreachableCode case statement"
"Unreachable code detected case statement"
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about `switch`/`case`
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAYg9nKBeKAiARgQwE6qgHzSwC9UAoMgYzgDsBnYKAMxuSgAoA3TAGwFcIALlgIAlMJp8AtugjZkAPigBvMlHVQ6AdwCWwSgAsuvAaJVqNlypjrQMOVIIuWXUbBGB9srAIwBuZxdrWyJMUidXV3dPbygAJgDLAF8KS2ivVhtbbGAAOQhOOXZUGjhIAEJUUQCUqloGZho4tmN+IRE4cShJGTlFc0ttPUNW0wHI4LssXAjItI8MqH9AqxspsMcVyPTYhK31ABMIJkw+HmBZufUdzLpsvIKikrKISurAlNrqekYWAGYWtw2sJ4J0JNJZPIkEpVJYdExRtAkMjQrgzLDtgtYstklAIDwQvCOECBMgUfZSOj9m4saw9slAoEblAsnIHoVsMVShUqjUKEw+DRKMAdLQWXc2fkOewpBA7pgAOYQAD8wgY2B0NAVXRoj3kGOABmwcC03QgpoAothjZzZfKlVBlcq0AAVAw6OhQahHKBaGx4gAekGFEAOUGAiFkUEF7kwhkw6B4EAAdLyyLUWMUEGA6Hh-aDRGQWHEs2Vc+KOoX-qWc3nPQWyEA
### 💻 Code
```ts
type Foo = "bar" | "baz"

const fn = (value: Foo): number => {
  switch (value) {
    case "bar":
      return 1;
    case "baz":
      return 2;
  }
  return assertNever("nope!");
}
```
### 🙁 Actual behavior
```ts
return assertNever("nope!");
```
is highlighted as "Unreachable code detected". The [docs](https://www.typescriptlang.org/tsconfig/#allowUnreachableCode), however, state:
> This does not affect errors on the basis of code which appears to be unreachable due to type analysis.
### 🙂 Expected behavior
No error. This should be the same as the other case form, the if form, and various similar constructs where even though it's _allegedly_ exhaustive, that's just based off of type information (I would like to have a runtime assertion in this case).
### Additional information about the issue
Possibly related to: https://github.com/microsoft/TypeScript/issues/18882 | Suggestion,Awaiting More Feedback | low | Critical |
2,509,174,321 | pytorch | DISABLED test_profiler_mark_wrapper_call_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_profiler_mark_wrapper_call_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29751966360).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 30 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_profiler_mark_wrapper_call_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 9497, in test_profiler_mark_wrapper_call
with profile() as prof:
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/profiler/profiler.py", line 744, in __enter__
self.start()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/profiler/profiler.py", line 754, in start
self._transit_action(ProfilerAction.NONE, self.current_action)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/profiler/profiler.py", line 793, in _transit_action
action()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/profiler/profiler.py", line 174, in start_trace
self.profiler._start_trace()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/profiler.py", line 339, in _start_trace
_enable_profiler(self.config(), self.kineto_activities)
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_profiler_mark_wrapper_call_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,509,175,892 | pytorch | DISABLED test_input_mutation2_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_input_mutation2_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29747151692).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 18 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_input_mutation2_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 6652, in test_input_mutation2
actual1 = opt_fn(arg2)
^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 6640, in fn
def fn(a):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_input_mutation2_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire | high priority,triaged,module: flaky-tests,skipped,oncall: pt2,module: dynamic shapes,module: inductor | medium | Critical |
2,509,181,421 | godot | `GLES3` shadeless colors are incorrect compared to `Vulkan` [3D] | ### Tested versions
4.3 stable
### System information
Windows 10
### Issue description
Left side is a `SubViewport` containing a `MeshInstance3D` plane.
Right side is a `Sprite2D`.
GLES3:

Vulkan:

### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
[minimal_reproduction_project.zip](https://github.com/user-attachments/files/16901561/test.zip)
| bug,discussion,topic:rendering,needs testing | low | Major |
2,509,185,645 | go | os: document that fs.ReadDirFile.ReadDir doesn't restart each time | ### Go version
go version go1.22.6 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/hajimehoshi/Library/Caches/go-build'
GOENV='/Users/hajimehoshi/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/hajimehoshi/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/hajimehoshi/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/cj/73zbb35j0qx5t4b6rnqq0__h0000gn/T/go-build3027760310=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Run this program
```go
package main

import (
	"fmt"
	"io/fs"
	"os"
)

func main() {
	fmt.Println("-- os.Open --")
	{
		dir, err := os.Open("/")
		if err != nil {
			panic(err)
		}
		for i := 0; i < 3; i++ {
			ents, err := dir.ReadDir(0)
			if err != nil {
				panic(err)
			}
			fmt.Println(ents)
		}
	}
	fmt.Println("-- os.DirFS.Open --")
	{
		dir, err := os.DirFS("/").Open(".")
		if err != nil {
			panic(err)
		}
		for i := 0; i < 3; i++ {
			ents, err := dir.(fs.ReadDirFile).ReadDir(0)
			if err != nil {
				panic(err)
			}
			fmt.Println(ents)
		}
	}
}
```
### What did you see happen?
I could reproduce the issue on my local macOS machine and the playground.
On the playground (https://go.dev/play/p/tBUins57hF-),
```
-- os.Open --
[d tmpfs/ d home/ d bin/ d var/ d etc/ d lib/ d tmp/ L lib64 d root/ d dev/ d usr/ d sys/ d proc/ - .dockerenv]
[]
[]
-- os.DirFS.Open --
[d tmpfs/ d home/ d bin/ d var/ d etc/ d lib/ d tmp/ L lib64 d root/ d dev/ d usr/ d sys/ d proc/ - .dockerenv]
[]
[]
```
### What did you expect to see?
```
-- os.Open --
[d tmpfs/ d home/ d bin/ d var/ d etc/ d lib/ d tmp/ L lib64 d root/ d dev/ d usr/ d sys/ d proc/ - .dockerenv]
[]
[]
-- os.DirFS.Open --
[d tmpfs/ d home/ d bin/ d var/ d etc/ d lib/ d tmp/ L lib64 d root/ d dev/ d usr/ d sys/ d proc/ - .dockerenv]
[d tmpfs/ d home/ d bin/ d var/ d etc/ d lib/ d tmp/ L lib64 d root/ d dev/ d usr/ d sys/ d proc/ - .dockerenv]
[d tmpfs/ d home/ d bin/ d var/ d etc/ d lib/ d tmp/ L lib64 d root/ d dev/ d usr/ d sys/ d proc/ - .dockerenv]
```
`os.File`'s [`ReadDir`](https://pkg.go.dev/os#File.ReadDir) works as expected.
```
If n <= 0, ReadDir returns all the DirEntry records remaining in the directory. When it succeeds, it returns a nil error (not io.EOF).
```
On the other hand, the document for [fs.ReadDirFile](https://pkg.go.dev/io/fs#ReadDirFile) says
```
// If n <= 0, ReadDir returns all the DirEntry values from the directory
// in a single slice. In this case, if ReadDir succeeds (reads all the way
// to the end of the directory), it returns the slice and a nil error.
// If it encounters an error before the end of the directory,
// ReadDir returns the DirEntry list read until that point and a non-nil error.
```
so `ReadDir` should return all the entries whatever the internal offset is, shouldn't it? | Documentation,help wanted,NeedsFix | low | Critical |
2,509,211,238 | rust | ICE: symbol-mangling-version=v0 attempt to subtract with overflow | ### Code
```rust
// main.rs
type Value<'v> = &'v ();
trait Trait: Fn(Value) -> Value {}
impl<F: Fn(Value) -> Value> Trait for F {}
fn main() {
    let _: Box<dyn Trait> = Box::new(|v: Value| v);
}
```
using a rustc built from current master, with the following added to config.toml to enable overflow checks:
```toml
# config.toml
[rust]
overflow-checks = true
overflow-checks-std = false
```
### Error output
`rustc --edition=2021 main.rs -Csymbol-mangling-version=v0`
```console
thread 'rustc' panicked at compiler/rustc_symbol_mangling/src/v0.rs:298:22:
attempt to subtract with overflow
error: the compiler unexpectedly panicked. this is a bug.
note: compiler flags: -C symbol-mangling-version=v0
query stack during panic:
#0 [symbol_name] computing the symbol for `<alloc::boxed::Box<dyn Trait> as core::ops::drop::Drop>::drop`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
```
This is the subtraction:
https://github.com/rust-lang/rust/blob/9c01301c52df5d2d7b6fe337707a74e011d68d6f/compiler/rustc_symbol_mangling/src/v0.rs#L298
<details><summary><strong>Backtrace</strong></summary>
<p>
```console
17: 0x7d00a140a653 - core::panicking::panic_fmt::h2f9f00fe286aff73
at /git/rust/library/core/src/panicking.rs:74:14
18: 0x7d00a13f8bf7 - core::panicking::panic_const::panic_const_sub_overflow::h7bbb1bf924cbecd1
at /git/rust/library/core/src/panicking.rs:181:21
19: 0x7d00a01921a3 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_region
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:298:22
20: 0x7d00a0192697 - <rustc_middle[9b2517ccd1ae0e8f]::ty::region::Region as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Print<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler>>::print
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:310:9
21: 0x7d00a0192697 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_type
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:366:23
22: 0x7d00a0193382 - <rustc_middle[9b2517ccd1ae0e8f]::ty::Ty as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Print<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler>>::print
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:316:9
23: 0x7d00a0193382 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_dyn_existential::{closure#0}
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:531:56
24: 0x7d00a0193382 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler>::in_binder::<rustc_type_ir[366b997f3da86b55]::predicate::ExistentialPredicate<rustc_middle[9b2517ccd1ae0e8f]::ty::context::TyCtxt>, <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_dyn_existential::{closure#0}>
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:195:9
25: 0x7d00a0193382 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_dyn_existential
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:513:9
26: 0x7d00a0192a6d - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_type
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:466:17
27: 0x7d00a0195d79 - <rustc_middle[9b2517ccd1ae0e8f]::ty::Ty as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Print<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler>>::print
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:316:9
28: 0x7d00a0195d79 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::path_generic_args::<<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path::{closure#3}>
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:810:24
29: 0x7d00a01903ce - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:166:40
30: 0x7d00a01903ce - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_def_path
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:217:9
31: 0x7d00a01929fa - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_type
32: 0x7d00a01917d1 - <rustc_middle[9b2517ccd1ae0e8f]::ty::Ty as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Print<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler>>::print
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:316:9
33: 0x7d00a01917d1 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_impl_path
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:277:17
34: 0x7d00a01904e7 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:125:17
35: 0x7d00a01904e7 - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_def_path
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:217:9
36: 0x7d00a0190bab - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path::{closure#4}
37: 0x7d00a0190bab - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler>::path_append_ns::<<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path::{closure#4}>
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:161:9
38: 0x7d00a0190bab - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::path_append::<<rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path::{closure#4}>
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:775:9
39: 0x7d00a0190bab - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::default_print_def_path
at /git/rust/compiler/rustc_middle/src/ty/print/mod.rs:182:17
40: 0x7d00a0190bab - <rustc_symbol_mangling[91d39b3b00843abe]::v0::SymbolMangler as rustc_middle[9b2517ccd1ae0e8f]::ty::print::Printer>::print_def_path
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:217:9
41: 0x7d00a018ebf9 - rustc_symbol_mangling[91d39b3b00843abe]::v0::mangle
at /git/rust/compiler/rustc_symbol_mangling/src/v0.rs:66:9
42: 0x7d00a01cc3d6 - rustc_symbol_mangling[91d39b3b00843abe]::compute_symbol_name::<rustc_symbol_mangling[91d39b3b00843abe]::symbol_name_provider::{closure#0}>
at /git/rust/compiler/rustc_symbol_mangling/src/lib.rs:261:38
43: 0x7d00a01cc3d6 - rustc_symbol_mangling[91d39b3b00843abe]::symbol_name_provider
at /git/rust/compiler/rustc_symbol_mangling/src/lib.rs:134:23
44: 0x7d009efe3891 - rustc_query_impl[519194f2780935b8]::query_impl::symbol_name::dynamic_query::{closure#2}::{closure#0}
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:283:9
45: 0x7d009efe3891 - rustc_query_impl[519194f2780935b8]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[519194f2780935b8]::query_impl::symbol_name::dynamic_query::{closure#2}::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:548:18
46: 0x7d009f485b4e - rustc_query_impl[519194f2780935b8]::query_impl::symbol_name::dynamic_query::{closure#2}
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:622:25
47: 0x7d009f485b4e - <rustc_query_impl[519194f2780935b8]::query_impl::symbol_name::dynamic_query::{closure#2} as core[ea291473130fa9df]::ops::function::FnOnce<(rustc_middle[9b2517ccd1ae0e8f]::ty::context::TyCtxt, rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance)>>::call_once
at /git/rust/library/core/src/ops/function.rs:250:5
48: 0x7d009f0c8f0b - <rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false> as rustc_query_system[3bddb1922b26a9d7]::query::config::QueryConfig<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>>::compute
at /git/rust/compiler/rustc_query_impl/src/lib.rs:110:9
49: 0x7d009f0c8f0b - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:478:72
50: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context::<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:82:9
51: 0x7d009f0c8f0b - <std[99259a4298f3b51c]::thread::local::LocalKey<core[ea291473130fa9df]::cell::Cell<*const ()>>>::try_with::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/library/std/src/thread/local.rs:283:12
52: 0x7d009f0c8f0b - <std[99259a4298f3b51c]::thread::local::LocalKey<core[ea291473130fa9df]::cell::Cell<*const ()>>>::with::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/library/std/src/thread/local.rs:260:9
53: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context::<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:79:9
54: 0x7d009f0c8f0b - <rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:151:13
55: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context::<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:134:9
56: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:112:36
57: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context_opt::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:101:18
58: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:112:5
59: 0x7d009f0c8f0b - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context::<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:125:5
60: 0x7d009f0c8f0b - <rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:136:9
61: 0x7d009f0c8f0b - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:478:18
62: 0x7d009f0c8f0b - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt, false>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:414:9
63: 0x7d009f0c8f0b - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::try_execute_query::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt, false>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:357:13
64: 0x7d009f3e9a63 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:809:32
65: 0x7d009f3e9a63 - stacker[abf07dca737b5dda]::maybe_grow::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>
at /home/david/.cargo/registry/src/index.crates.io-6f17d22bba15001f/stacker-0.1.17/src/lib.rs:55:9
66: 0x7d009f3e9a63 - rustc_data_structures[adfc6085cde69c7b]::stack::ensure_sufficient_stack::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>
at /git/rust/compiler/rustc_data_structures/src/stack.rs:17:5
67: 0x7d009f3e9a63 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:809:5
68: 0x7d009f3e9a63 - rustc_query_impl[519194f2780935b8]::query_impl::symbol_name::get_query_non_incr::__rust_end_short_backtrace
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:598:26
69: 0x7d00a0b840b3 - rustc_middle[9b2517ccd1ae0e8f]::query::plumbing::query_get_at::<rustc_query_system[3bddb1922b26a9d7]::query::caches::DefaultCache<rustc_middle[9b2517ccd1ae0e8f]::ty::instance::Instance, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 16usize]>>>
at /git/rust/compiler/rustc_middle/src/query/plumbing.rs:143:17
70: 0x7d00a0b766ab - <rustc_middle[9b2517ccd1ae0e8f]::query::plumbing::TyCtxtAt>::symbol_name
at /git/rust/compiler/rustc_middle/src/query/plumbing.rs:422:31
71: 0x7d00a0b766ab - <rustc_middle[9b2517ccd1ae0e8f]::ty::context::TyCtxt>::symbol_name
at /git/rust/compiler/rustc_middle/src/query/plumbing.rs:413:35
72: 0x7d00a0b766ab - <rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>::symbol_name
at /git/rust/compiler/rustc_middle/src/mir/mono.rs:99:43
73: 0x7d009dbdd02a - rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct::<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}
at /git/rust/compiler/rustc_monomorphize/src/partitioning.rs:1078:48
74: 0x7d009dbdd02a - core[ea291473130fa9df]::iter::adapters::map::map_fold::<&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, (&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), (), rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}, core[ea291473130fa9df]::iter::traits::iterator::Iterator::for_each::call<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>::extend_trusted<core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>::{closure#0}>::{closure#0}>::{closure#0}
at /git/rust/library/core/src/iter/adapters/map.rs:88:28
75: 0x7d009dbdd02a - <core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem> as core[ea291473130fa9df]::iter::traits::iterator::Iterator>::fold::<(), core[ea291473130fa9df]::iter::adapters::map::map_fold<&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, (&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), (), rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}, core[ea291473130fa9df]::iter::traits::iterator::Iterator::for_each::call<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>::extend_trusted<core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>::{closure#0}>::{closure#0}>::{closure#0}>
at /git/rust/library/core/src/slice/iter/macros.rs:232:27
76: 0x7d009dbdd02a - <core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}> as core[ea291473130fa9df]::iter::traits::iterator::Iterator>::fold::<(), core[ea291473130fa9df]::iter::traits::iterator::Iterator::for_each::call<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>::extend_trusted<core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>::{closure#0}>::{closure#0}>
at /git/rust/library/core/src/iter/adapters/map.rs:128:9
77: 0x7d009dbc67cf - <core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}> as core[ea291473130fa9df]::iter::traits::iterator::Iterator>::for_each::<<alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>::extend_trusted<core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>::{closure#0}>
at /git/rust/library/core/src/iter/traits/iterator.rs:813:9
78: 0x7d009dbc67cf - <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>::extend_trusted::<core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>
at /git/rust/library/alloc/src/vec/mod.rs:3125:17
79: 0x7d009dbc67cf - <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)> as alloc[96b93f20010260ed]::vec::spec_extend::SpecExtend<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>>::spec_extend
at /git/rust/library/alloc/src/vec/spec_extend.rs:26:9
80: 0x7d009dbc67cf - <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)> as alloc[96b93f20010260ed]::vec::spec_from_iter_nested::SpecFromIterNested<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>>::from_iter
at /git/rust/library/alloc/src/vec/spec_from_iter_nested.rs:60:9
81: 0x7d009dbc67cf - <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)> as alloc[96b93f20010260ed]::vec::spec_from_iter::SpecFromIter<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName), core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>>::from_iter
at /git/rust/library/alloc/src/vec/spec_from_iter.rs:33:9
82: 0x7d009db9d606 - <alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)> as core[ea291473130fa9df]::iter::traits::collect::FromIterator<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>::from_iter::<core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}>>
at /git/rust/library/alloc/src/vec/mod.rs:2989:9
83: 0x7d009db9d606 - <core[ea291473130fa9df]::iter::adapters::map::Map<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>, rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>::{closure#0}> as core[ea291473130fa9df]::iter::traits::iterator::Iterator>::collect::<alloc[96b93f20010260ed]::vec::Vec<(&rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem, rustc_middle[9b2517ccd1ae0e8f]::ty::SymbolName)>>
at /git/rust/library/core/src/iter/traits/iterator.rs:2000:9
84: 0x7d009db9d606 - rustc_monomorphize[8096c1a7ae249a39]::partitioning::assert_symbols_are_distinct::<core[ea291473130fa9df]::slice::iter::Iter<rustc_middle[9b2517ccd1ae0e8f]::mir::mono::MonoItem>>
at /git/rust/compiler/rustc_monomorphize/src/partitioning.rs:1078:77
85: 0x7d009dc07516 - rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}
at /git/rust/compiler/rustc_monomorphize/src/partitioning.rs:1138:16
86: 0x7d009dc07516 - <core[ea291473130fa9df]::panic::unwind_safe::AssertUnwindSafe<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}> as core[ea291473130fa9df]::ops::function::FnOnce<()>>::call_once
at /git/rust/library/core/src/panic/unwind_safe.rs:272:9
87: 0x7d009dc07516 - std[99259a4298f3b51c]::panicking::try::do_call::<core[ea291473130fa9df]::panic::unwind_safe::AssertUnwindSafe<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}>, ()>
at /git/rust/library/std/src/panicking.rs:557:40
88: 0x7d009dc07516 - std[99259a4298f3b51c]::panicking::try::<(), core[ea291473130fa9df]::panic::unwind_safe::AssertUnwindSafe<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}>>
at /git/rust/library/std/src/panicking.rs:520:19
89: 0x7d009dc07516 - std[99259a4298f3b51c]::panic::catch_unwind::<core[ea291473130fa9df]::panic::unwind_safe::AssertUnwindSafe<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}>, ()>
at /git/rust/library/std/src/panic.rs:345:14
90: 0x7d009dc07516 - <rustc_data_structures[adfc6085cde69c7b]::sync::parallel::ParallelGuard>::run::<(), rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}>
at /git/rust/compiler/rustc_data_structures/src/sync/parallel.rs:29:9
91: 0x7d009dc07516 - rustc_data_structures[adfc6085cde69c7b]::sync::parallel::disabled::join::<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#0}, rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}, &[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit], ()>::{closure#0}
at /git/rust/compiler/rustc_data_structures/src/sync/parallel.rs:72:21
92: 0x7d009dc07516 - rustc_data_structures[adfc6085cde69c7b]::sync::parallel::parallel_guard::<(core[ea291473130fa9df]::option::Option<&[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit]>, core[ea291473130fa9df]::option::Option<()>), rustc_data_structures[adfc6085cde69c7b]::sync::parallel::disabled::join<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#0}, rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}, &[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit], ()>::{closure#0}>
at /git/rust/compiler/rustc_data_structures/src/sync/parallel.rs:45:15
93: 0x7d009dc07516 - rustc_data_structures[adfc6085cde69c7b]::sync::parallel::disabled::join::<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#0}, rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}, &[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit], ()>
at /git/rust/compiler/rustc_data_structures/src/sync/parallel.rs:70:22
94: 0x7d009dbb61a0 - rustc_data_structures[adfc6085cde69c7b]::sync::parallel::enabled::join::<rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#0}, rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}::{closure#1}, &[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit], ()>
at /git/rust/compiler/rustc_data_structures/src/sync/parallel.rs:169:13
95: 0x7d009dbb61a0 - rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}
at /git/rust/compiler/rustc_monomorphize/src/partitioning.rs:1132:9
96: 0x7d009dbb61a0 - <rustc_data_structures[adfc6085cde69c7b]::profiling::VerboseTimingGuard>::run::<(&[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit], ()), rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}>
at /git/rust/compiler/rustc_data_structures/src/profiling.rs:753:9
97: 0x7d009dbb61a0 - <rustc_session[8687623414b379bc]::session::Session>::time::<(&[rustc_middle[9b2517ccd1ae0e8f]::mir::mono::CodegenUnit], ()), rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items::{closure#0}>
at /git/rust/compiler/rustc_session/src/utils.rs:16:9
98: 0x7d009db9db94 - rustc_monomorphize[8096c1a7ae249a39]::partitioning::collect_and_partition_mono_items
at /git/rust/compiler/rustc_monomorphize/src/partitioning.rs:1131:30
99: 0x7d009f006185 - rustc_query_impl[519194f2780935b8]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:283:9
100: 0x7d009f006185 - rustc_query_impl[519194f2780935b8]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[519194f2780935b8]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:548:18
101: 0x7d009f385b29 - rustc_query_impl[519194f2780935b8]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:622:25
102: 0x7d009f385b29 - <rustc_query_impl[519194f2780935b8]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[ea291473130fa9df]::ops::function::FnOnce<(rustc_middle[9b2517ccd1ae0e8f]::ty::context::TyCtxt, ())>>::call_once
at /git/rust/library/core/src/ops/function.rs:250:5
103: 0x7d009f067889 - <rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false> as rustc_query_system[3bddb1922b26a9d7]::query::config::QueryConfig<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>>::compute
at /git/rust/compiler/rustc_query_impl/src/lib.rs:110:9
104: 0x7d009f067889 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:478:72
105: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context::<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:82:9
106: 0x7d009f067889 - <std[99259a4298f3b51c]::thread::local::LocalKey<core[ea291473130fa9df]::cell::Cell<*const ()>>>::try_with::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/library/std/src/thread/local.rs:283:12
107: 0x7d009f067889 - <std[99259a4298f3b51c]::thread::local::LocalKey<core[ea291473130fa9df]::cell::Cell<*const ()>>>::with::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/library/std/src/thread/local.rs:260:9
108: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context::<rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:79:9
109: 0x7d009f067889 - <rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:151:13
110: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context::<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:134:9
111: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:112:36
112: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context_opt::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:101:18
113: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_context::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:112:5
114: 0x7d009f067889 - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::with_related_context::<<rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>::{closure#0}, rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:125:5
115: 0x7d009f067889 - <rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt as rustc_query_system[3bddb1922b26a9d7]::query::QueryContext>::start_query::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:136:9
116: 0x7d009f067889 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:478:18
117: 0x7d009f067889 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::execute_job::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt, false>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:414:9
118: 0x7d009f067889 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::try_execute_query::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt, false>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:357:13
119: 0x7d009f2f8614 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:809:32
120: 0x7d009f2f8614 - stacker[abf07dca737b5dda]::maybe_grow::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>
at /home/david/.cargo/registry/src/index.crates.io-6f17d22bba15001f/stacker-0.1.17/src/lib.rs:55:9
121: 0x7d009f2f8614 - rustc_data_structures[adfc6085cde69c7b]::stack::ensure_sufficient_stack::<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>, rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>::{closure#0}>
at /git/rust/compiler/rustc_data_structures/src/stack.rs:17:5
122: 0x7d009f2f8614 - rustc_query_system[3bddb1922b26a9d7]::query::plumbing::get_query_non_incr::<rustc_query_impl[519194f2780935b8]::DynamicConfig<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[519194f2780935b8]::plumbing::QueryCtxt>
at /git/rust/compiler/rustc_query_system/src/query/plumbing.rs:809:5
123: 0x7d009f2f8614 - rustc_query_impl[519194f2780935b8]::query_impl::collect_and_partition_mono_items::get_query_non_incr::__rust_end_short_backtrace
at /git/rust/compiler/rustc_query_impl/src/plumbing.rs:598:26
124: 0x7d009d0bb09d - rustc_middle[9b2517ccd1ae0e8f]::query::plumbing::query_get_at::<rustc_query_system[3bddb1922b26a9d7]::query::caches::SingleCache<rustc_middle[9b2517ccd1ae0e8f]::query::erase::Erased<[u8; 24usize]>>>
at /git/rust/compiler/rustc_middle/src/query/plumbing.rs:143:17
125: 0x7d009d0bb09d - <rustc_middle[9b2517ccd1ae0e8f]::query::plumbing::TyCtxtAt>::collect_and_partition_mono_items
at /git/rust/compiler/rustc_middle/src/query/plumbing.rs:422:31
126: 0x7d009d0bb09d - <rustc_middle[9b2517ccd1ae0e8f]::ty::context::TyCtxt>::collect_and_partition_mono_items
at /git/rust/compiler/rustc_middle/src/query/plumbing.rs:413:35
127: 0x7d009d0bb09d - rustc_codegen_ssa[64ca6d8878cfcd9f]::base::codegen_crate::<rustc_codegen_llvm[81065f1067ec6a8c]::LlvmCodegenBackend>
at /git/rust/compiler/rustc_codegen_ssa/src/base.rs:593:29
128: 0x7d009d2cde84 - <rustc_codegen_llvm[81065f1067ec6a8c]::LlvmCodegenBackend as rustc_codegen_ssa[64ca6d8878cfcd9f]::traits::backend::CodegenBackend>::codegen_crate
at /git/rust/compiler/rustc_codegen_llvm/src/lib.rs:362:18
129: 0x7d009cff4d47 - rustc_interface[48237efac66ec8e9]::passes::start_codegen::{closure#0}
at /git/rust/compiler/rustc_interface/src/passes.rs:1057:9
130: 0x7d009cff4d47 - <rustc_data_structures[adfc6085cde69c7b]::profiling::VerboseTimingGuard>::run::<alloc[96b93f20010260ed]::boxed::Box<dyn core[ea291473130fa9df]::any::Any>, rustc_interface[48237efac66ec8e9]::passes::start_codegen::{closure#0}>
at /git/rust/compiler/rustc_data_structures/src/profiling.rs:753:9
131: 0x7d009cff4d47 - <rustc_session[8687623414b379bc]::session::Session>::time::<alloc[96b93f20010260ed]::boxed::Box<dyn core[ea291473130fa9df]::any::Any>, rustc_interface[48237efac66ec8e9]::passes::start_codegen::{closure#0}>
at /git/rust/compiler/rustc_session/src/utils.rs:16:9
132: 0x7d009cec02de - rustc_interface[48237efac66ec8e9]::passes::start_codegen
at /git/rust/compiler/rustc_interface/src/passes.rs:1056:19
133: 0x7d009d01e592 - <rustc_interface[48237efac66ec8e9]::queries::Linker>::codegen_and_build_linker
at /git/rust/compiler/rustc_interface/src/queries.rs:129:31
134: 0x7d009cc4364d - rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}
at /git/rust/compiler/rustc_driver_impl/src/lib.rs:460:25
135: 0x7d009cc4364d - <rustc_middle[9b2517ccd1ae0e8f]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#1}
at /git/rust/compiler/rustc_middle/src/ty/context.rs:1320:37
136: 0x7d009cc4364d - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context::<<rustc_middle[9b2517ccd1ae0e8f]::ty::context::GlobalCtxt>::enter<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#1}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#0}
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:82:9
137: 0x7d009cc4364d - <std[99259a4298f3b51c]::thread::local::LocalKey<core[ea291473130fa9df]::cell::Cell<*const ()>>>::try_with::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context<<rustc_middle[9b2517ccd1ae0e8f]::ty::context::GlobalCtxt>::enter<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#1}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#0}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>
at /git/rust/library/std/src/thread/local.rs:283:12
138: 0x7d009cc4364d - <std[99259a4298f3b51c]::thread::local::LocalKey<core[ea291473130fa9df]::cell::Cell<*const ()>>>::with::<rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context<<rustc_middle[9b2517ccd1ae0e8f]::ty::context::GlobalCtxt>::enter<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#1}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#0}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>
at /git/rust/library/std/src/thread/local.rs:260:9
139: 0x7d009cc4364d - rustc_middle[9b2517ccd1ae0e8f]::ty::context::tls::enter_context::<<rustc_middle[9b2517ccd1ae0e8f]::ty::context::GlobalCtxt>::enter<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>::{closure#1}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>
at /git/rust/compiler/rustc_middle/src/ty/context/tls.rs:79:9
140: 0x7d009cc4364d - <rustc_middle[9b2517ccd1ae0e8f]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>
at /git/rust/compiler/rustc_middle/src/ty/context.rs:1320:9
141: 0x7d009cc97a23 - rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}
at /git/rust/compiler/rustc_driver_impl/src/lib.rs:459:13
142: 0x7d009cc97a23 - <rustc_interface[48237efac66ec8e9]::interface::Compiler>::enter::<rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}::{closure#1}, core[ea291473130fa9df]::result::Result<core[ea291473130fa9df]::option::Option<rustc_interface[48237efac66ec8e9]::queries::Linker>, rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>>
at /git/rust/compiler/rustc_interface/src/queries.rs:210:19
143: 0x7d009cbd24b8 - rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}
at /git/rust/compiler/rustc_driver_impl/src/lib.rs:393:22
144: 0x7d009cbd24b8 - rustc_interface[48237efac66ec8e9]::interface::run_compiler::<core[ea291473130fa9df]::result::Result<(), rustc_span[95089f0edfb2a8dc]::ErrorGuaranteed>, rustc_driver_impl[84137d66a5af97bd]::run_compiler::{closure#0}>::{closure#1}
at /git/rust/compiler/rustc_interface/src/interface.rs:502:27
```
</p>
</details>
Tracking issue for v0 symbol mangling: https://github.com/rust-lang/rust/issues/60705 | A-lifetimes,I-ICE,P-medium,T-compiler,C-bug | low | Critical |
2,509,236,120 | flutter | [video_player_android] Incorrect orientation when previewing captured videos | ### Workaround
**Pin `video_player_android` to 2.7.1.**
### Steps to reproduce
#### With the official example
1. Run the example of `camera`
2. Select a camera and record for a few seconds in portrait up.
3. See the preview video oriented incorrectly.
#### With other Flutter packages
Run the example of `package:wechat_camera_picker`.
### Expected results
The video preview should have a correct rotation.
### Actual results
The video preview shows an incorrect rotation. This is likely affected by https://github.com/flutter/packages/pull/6456, and we also have similar problems with `camera_android_camerax`: https://github.com/flutter/flutter/issues/149177, and `camera_android`: https://github.com/flutter/flutter/issues/150549.
### Code sample
https://github.com/flutter/packages/tree/main/packages/camera/camera/example
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/8b0f6067-31ca-47f8-a945-84b6ed34cc6e
</details>
### Logs
N/A
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.25.0-1.0.pre.270, on Microsoft Windows [Version 10.0.22631.4037], locale en-US)
• Flutter version 3.25.0-1.0.pre.270 on channel master at X:\SDK\flutter-master
• Upstream repository https://github.com/flutter/flutter
• Framework revision a7cd788d80 (16 hours ago), 2024-09-05 05:57:40 -0400
• Engine revision 34b61eb53b
• Dart version 3.6.0 (build 3.6.0-216.0.dev)
• DevTools version 2.39.0
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at X:\Android\SDK
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = X:\Android\SDK
• ANDROID_SDK_ROOT = X:\Android\SDK
• Java binary at: X:\IDEs\AndroidStudio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
```
</details>
| platform-android,p: video_player,package,has reproducible steps,P1,team-android,triaged-android,found in release: 3.24,found in release: 3.25 | medium | Critical |
2,509,324,859 | TypeScript | "const" was transformed to "var" when target is "esnext" | ### 🔎 Search Terms
transform, const, var, target, ECMA, esnext
### 🕗 Version & Regression Information
- This is the behavior in every version I tried from Version 5.4 to Version 5.7.0-dev.20240904
### ⏯ Playground Link
_No response_
### 💻 Code
```ts
export const cilBlurLinear : string [ ] = [ , ]
const [ , ] = cilBlurLinear;
```
### 🙁 Actual behavior
The JS code generated by tsc is as follows:
```js
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.cilBlurLinear = void 0;
exports.cilBlurLinear = [,];
var ;
```
### 🙂 Expected behavior
"const" should be emitted as "const" instead of "var", since the target in my configuration file is "esnext".
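A workaround sketch, under my assumption that the broken `var ;` emit is triggered specifically by the binding-less pattern `const [ , ] = …;` (which declares no names at all) rather than by the target handling: binding at least one element keeps the hole but avoids the pattern.

```javascript
// Assumption: the bad `var ;` emit is specific to an array pattern with no
// bindings at all. Binding one name keeps the hole but avoids the pattern.
const cilBlurLinear = [, ]; // one elision (hole); length is 1
const [first] = cilBlurLinear; // binds a name; `first` is undefined (the hole)
```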
### Additional information about the issue
tsconfig.json:
```json
{
"compilerOptions": {
"target": "ESNext",
"module": "esnext",
"moduleResolution": "Node",
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"lib": ["es2023","dom"],
"noEmitOnError":true,
"force":true,
"strict":true
},
"include": [
"TScorpus/*"
],
"exclude": [
"node_modules"
]
}
``` | Bug,Help Wanted | low | Critical |
2,509,370,716 | material-ui | [icons] Breaking changes in icons package layout | _Assuming https://github.com/mui/material-ui/pull/43624 has been merged consider all or some of the following breaking changes in the next major._
- [ ] Remove `./esm/*` and `./esm/*.js` exports
- [ ] (To be decided) Remove commonjs (18.7MB => 6.1MB) #26310 #26240
- [ ] Remove support for the top-level import `.`; it's a footgun, and we could just disallow importing from it altogether. Removing the barrel file would also reduce the package size. (minus 2.3MB)
- [ ] Add a negative export for `./utils`
```json
"./utils/*": null,
```
- [ ] Use a single type declaration file for all icons instead of one for each icon. This could also reduce install size.
- [ ] Remove the `node` condition once we fully support ESM https://github.com/mui/material-ui/pull/43264
**Search keywords**: | package: icons,enhancement | low | Minor |
2,509,377,394 | pytorch | Add restrictions so that Inductor never uses more memory than eager | ### 🚀 The feature, motivation and pitch
IMO, we should do something so that we never use more memory than eager mode in Inductor. Certain patterns can lead to fairly pathological fusion decisions in Inductor that lead to a much higher peak memory.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Major |
2,509,390,676 | flutter | [camera_android] Reland the Impeller support and apply a proper rotation fix | ### What happened
Previous attempts at Impeller support broke the camera preview; see https://github.com/flutter/flutter/issues/150549. The revert has landed. However, no further action was taken, and the library never gained working Impeller support such as https://github.com/flutter/packages/pull/7044.
### Proposal
Apply a fix similar to https://github.com/flutter/packages/pull/7044, or fix the previous Impeller support changes and then reland them.
2,509,396,171 | terminal | CTRL+ALT+left / right does not work in hyper-v session | ### Windows Terminal version
1.21.2361.0
### Windows build number
10.0.22631.0
### Other Software
Hyper-V - I didn't do anything but enable it from stock Windows - version says 10.0.22621.1
I installed the Windows Terminal from winget
I'm using keyviz from `scoop install keyviz` to visualize the keypresses.
### Steps to reproduce
Open up a split screen using alt+click

### Expected Behavior
When I hold down alt+shift+left to resize the pane, I expect the pane to resize.
Instead, it sends these keys:

Also, in general, alt+left sends alt+home on repeated presses:

Also, I face no such issue on my host machine. And even in Hyper-V, when I use `alt+left` in Konsole for Windows (downloaded from https://cdn.kde.org/ci-builds/utilities/konsole/master/windows/), I don't get the `alt+home`, nor do I get it in VSCode running in Hyper-V.
### Actual Behavior
See above | Area-Input,Issue-Bug,Product-Terminal,Priority-3 | low | Minor |
2,509,402,523 | flutter | Enable `CupertinoPopupSurface`'s backdrop filter test on skwasm once the Skia bug is fixed | https://github.com/flutter/flutter/pull/151430 added a backdrop filter to `CupertinoPopupSurface`, but the filter does not work on skwasm due to a Skia bug https://github.com/flutter/flutter/issues/152026. Once the issue is closed, we should unskip these tests. | f: cupertino,P3,team-design,triaged-design | low | Critical |
2,509,417,076 | next.js | app router does not resolve routes according to `generateStaticParams` constraints | ### Link to the code that reproduces this issue
https://stackblitz.com/edit/stackblitz-starters-941nyd
### To Reproduce
1. click on "post 2" → it's a post
1. click on "post 1" → it's a post
1. click on "page 2" → **it's a post**
1. click on "page 1" → **it's a post**
### Current vs. Expected behavior
the "pages" urls should resolve with the page route, not the post route.
### Provide environment information
```bash
irrelevant, happens in stackblitz too.
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug | low | Major |
2,509,428,659 | rust | Error on bounds for unused lifetime | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
The following fails to build:
```rust
struct Foo<'a, 'b: 'a, 'c: 'a>(&'a mut &'a (), &'b (), &'c ());
fn with_foo<'b>(_f: impl FnOnce(Foo<'_, 'b, '_>)) {}
struct Bar;
impl Bar {
fn baz<'a>(&'a self, _foo: Foo<'a, '_, '_>) {}
fn call_baz<'b>(&'b self) {
with_foo::<'b>(|foo| self.baz(foo))
}
}
```
```
error[E0521]: borrowed data escapes outside of method
--> src/lib.rs:9:30
|
8 | fn call_baz<'b>(&'b self) {
| -- -------- `self` is a reference that is only valid in the method body
| |
| lifetime `'b` defined here
9 | with_foo::<'b>(|foo| self.baz(foo))
| ^^^^^^^^^^^^^
| |
| `self` escapes the method body here
| argument requires that `'b` must outlive `'static`
For more information about this error, try `rustc --explain E0521`.
error: could not compile `playground` (lib) due to 1 previous error
```
However, it works if the bound on the `'c` parameter in the declaration of `Foo` is changed to either `'c: 'b` or simply an unbounded lifetime `'c`. This seems unexpected because `'c: 'b` is a stricter bound than `'c: 'a` (and an unbounded `'c` is a weaker bound than `'c: 'a`). It's even stranger because the `'c` lifetime is unused in this example.
This seems to be either a bug, an unexpected limitation, or, as another user suggested, an NLL regression. I created a [post on URLO](https://users.rust-lang.org/t/lifetime-bound-causing-error/117075/11?u=coder-256) with some further discussion.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
``` | A-lifetimes,P-low,regression-from-stable-to-stable,C-bug,T-types | low | Critical |
2,509,434,040 | node | await filehandle.close() never return when using with filehandle.createWriteStream({autoClose:false}) and await pipeline() | ### Version
v20.15.1
### Platform
```text
macos Darwin Kernel Version 21.6.0
```
### Subsystem
_No response_
### What steps will reproduce the bug?
```js
const { createReadStream } = require("node:fs");
const { pipeline } = require("node:stream/promises");
const fsPromise = require("node:fs/promises");
const r = createReadStream("/bin/cp");
async function test() {
let fd;
try {
console.log("before file open");
fd = await fsPromise.open("/tmp/cp", "w+");
console.log("after file open");
const destination = fd.createWriteStream({ autoClose: false });
console.log("before pipeline");
await pipeline(r, destination);
console.log("done pipeline");
} catch (e) {
console.log("error", e);
} finally {
console.log("before close");
await fd?.close();
console.log("after close");
}
console.log("done");
}
test()
.then(() => console.log("test done"))
.catch((e) => console.log("test error", e));
```
### How often does it reproduce? Is there a required condition?
```bash
node test.js
```
The log output ends with "before close" and never prints "after close".
And if I add this test function to an Electron main process, `await fd?.close()` never returns either.
### What is the expected behavior? Why is that the expected behavior?
`await fd?.close()` should return normally when used with `createWriteStream({ autoClose: false })`.
### What do you see instead?
```
#> node test.js
before file open
after file open
before pipeline
done pipeline
before close
```
### Additional information
_No response_ | fs | low | Critical |
2,509,434,700 | go | regexp/syntax: character class negation is slow | I've submitted a benchmark and a fix at https://github.com/golang/go/pull/69304.
New benchmark:
```
goos: darwin
goarch: arm64
pkg: regexp/syntax
cpu: Apple M1
BenchmarkString/^(.*);$|^;(.*)-8 4972814 232.1 ns/op 56 B/op 3 allocs/op
BenchmarkString/(foo|bar$)x*-8 5602014 212.6 ns/op 56 B/op 3 allocs/op
BenchmarkString/[^=,]-8 21872083 53.84 ns/op 8 B/op 1 allocs/op
BenchmarkString/([^=,]+)=([^=,]+)-8 5726475 207.3 ns/op 56 B/op 3 allocs/op
BenchmarkString/([^=,]+)=([^=,]+),.*-8 4588252 259.2 ns/op 56 B/op 3 allocs/op
PASS
ok regexp/syntax 7.603s
```
### Go version
1.23 darwin/arm64
### What did you do?
I received a report at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6911 and ran a benchmark for `Regexp.String()`.
### What did you see happen?
Some regexes are extremely slow, even when they're simple. The negation `[^]` causes `calcFlags` to run over a large character space to find a fold case.
```bash
goos: darwin
goarch: arm64
pkg: regexp/syntax
cpu: Apple M1
BenchmarkString/^(.*);$|^;(.*)-8 4594401 253.2 ns/op 56 B/op 3 allocs/op
BenchmarkString/(foo|bar$)x*-8 5006730 236.1 ns/op 56 B/op 3 allocs/op
BenchmarkString/[^=,]-8 256 4227434 ns/op 8 B/op 1 allocs/op
BenchmarkString/([^=,]+)=([^=,]+)-8 151 8032660 ns/op 56 B/op 3 allocs/op
BenchmarkString/([^=,]+)=([^=,]+),.*-8 146 8095255 ns/op 56 B/op 3 allocs/op
PASS
ok regexp/syntax 9.011s
``` | Performance,NeedsInvestigation,FixPending | low | Major |
2,509,447,067 | pytorch | DISABLED test_inplace_custom_op_intermediate (__main__.InplacingTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inplace_custom_op_intermediate&suite=InplacingTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29752449771).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inplace_custom_op_intermediate`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_perf.py", line 1016, in test_inplace_custom_op_intermediate
self.assertExpectedInline(count_numel(f, x, out), """21""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2925, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 351, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 316, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 388, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '21' != '0'
- 21
+ 0
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/inductor/test_perf.py InplacingTests.test_inplace_custom_op_intermediate
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,509,454,053 | godot | Can't import specific DDS textures | ### Tested versions
4.3
### System information
Godot v4.3.stable.mono - Linux Mint 21.3 (Virginia) - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (RADV NAVI23) - AMD Ryzen 7 5800X3D 8-Core Processor (16 Threads)
### Issue description
I have some DDS textures which don't import properly in Godot, but work in GIMP and Tacent View. These textures are also grouped with other DDS textures that import fine.
Trying to import one of the textures results in this error.
```
ERROR: Expected Image data size of 256x256x1 (DXT5 RGBA8 with 8 mipmaps) = 87408 bytes, got 87360 bytes instead.
```
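For context, the expected size in the error message matches plain BC3/DXT5 block-size math. The sketch below reproduces the 87408-byte figure; the interpretation of the 48-byte shortfall is my own guess, not taken from Godot's importer:

```python
# DXT5 (BC3) stores 16 bytes per 4x4 block; each mip level is padded up to
# whole blocks in both dimensions.

def dxt5_level_size(w, h):
    # ceil-divide each dimension by the 4-pixel block size
    return max(1, (w + 3) // 4) * max(1, (h + 3) // 4) * 16

def dxt5_chain_size(w, h, levels):
    total = 0
    for _ in range(levels):
        total += dxt5_level_size(w, h)
        w = max(1, w // 2)
        h = max(1, h // 2)
    return total

# 256x256 with a full chain down to 1x1 (base level + 8 mipmaps = 9 levels)
expected = dxt5_chain_size(256, 256, 9)
print(expected)          # 87408, the size Godot expects
print(expected - 87360)  # 48: exactly three 16-byte blocks, which would be
                         # the 4x4, 2x2 and 1x1 tail levels
```

So the "broken" files appear to be missing (or storing smaller than block-padded) the last few tiny mip levels, which some exporters are known to do.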
### Steps to reproduce
Import the textures, then try to open them in the inspector or use them in a script
### Minimal reproduction project (MRP)
[textures.zip](https://github.com/user-attachments/files/16902450/textures.zip)
This has one of the functional textures, and one of the 'broken' ones. | enhancement,topic:import | low | Critical |
2,509,512,917 | three.js | WebGPURenderer with WebGPU Backend: Clear color is not set correctly | ### Description
Clear Color is not set correctly when clear alpha < 1.
Compare `WebGPURenderer` with both `WebGLRenderer` and `WebGPURenderer/WebGLBackend`. The latter two appear to render the same, and I think they are correct.
### Reproduction steps
Set clear alpha to less than 1.
### Code
See fiddles.
### Live example
WebGLRenderer (dev): https://jsfiddle.net/ofyu03vc/
WebGPURenderer (dev): https://jsfiddle.net/fbk0d8s9/
### Screenshots
_No response_
### Version
r169dev
### Device
Desktop
### Browser
Chrome
### OS
MacOS | Needs Investigation | low | Minor |
2,509,551,282 | go | net: TestLookupGoogleHost failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestLookupGoogleHost"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737575885378853953)):
```
=== RUN   TestLookupGoogleHost
    lookup_test.go:408: lookup google.com: no such host
--- FAIL: TestLookupGoogleHost (11.11s)
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,509,556,544 | vscode | Sticky scroll "Max line count" setting affects window position | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
# Sticky scroll "Max line count" setting affects window position
## Summary
After updating to version 1.93.0, the Editor > Sticky Scroll: Max Line Count setting affects the window position by forcing the cursor to the center of the editor.
This also affects navigation with the up/down arrow keys.
I suspect this to be caused by 52b2f67e96b4e757e8ff8e1e009dfffd70da91f5.
- VS Code Version: 1.93
- OS Version: Windows 10
## Screen recording showing the bug
This video shows the bug in action, and proves that the "Max line count" sticky scroll setting is responsible.
This was recorded on vscode.dev on version 1.93.0.
[vscode 1.93 sticky scroll bug.webm](https://github.com/user-attachments/assets/4b2ee8cf-276c-4b8b-b517-6a690903e852)
## Steps to Reproduce:
1. Open a file with many indentation levels. Make sure Editor > Sticky Scroll: Enabled is toggled and Editor > Sticky Scroll: Max Line Count set to 15.
2. Scroll down until there are >5 lines stuck to the top.
3. Try to type something in the highest non-sticky line, and try moving up and down with the arrow keys.
<details>
<summary>Example file</summary>

```json
{
  "a": {
    "a": {
      "a": {
        "a": {
          "a": {
            "a": {
              "a": {
                "a": {
                  "a": {
                    "a": {
                      "a": {
                        "a": {
                          "a": {
                            "a": {
                              "a": {}
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
</details>
## Expected result:
The window will stay where it is and only move if the cursor is outside the viewport.
## Observed result:
The window will move to place the cursor in the very center of the screen. | bug,editor-sticky-scroll | low | Critical |
2,509,587,928 | ui | [bug]: Cannot initialize components.json using `npx shadcn@latest init` after manual installation of shadcn (for my React app) | ### Describe the bug
I cannot initialize components.json by following the manual installation and then running `npx shadcn@latest init`.
### Affected component/components
components.json
### How to reproduce
1. Install shadcn from - https://ui.shadcn.com/docs/installation/manual
2. Setup components.json - https://ui.shadcn.com/docs/components-json
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
npx shadcn@latest init
⠙
✔ Preflight checks.
✖ Verifying framework.
We could not detect a supported framework at /my-app/frontend.
Visit https://ui.shadcn.com/docs/installation/manual to manually configure your project.
Once configured, you can use the cli to add components.
```
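For reference, the manual route the CLI points to boils down to writing components.json by hand. A minimal example in the shape the docs describe (all paths and values here are illustrative; adjust them to your project):

```json
{
  "$schema": "https://ui.shadcn.com/schema.json",
  "style": "default",
  "rsc": false,
  "tsx": true,
  "tailwind": {
    "config": "tailwind.config.js",
    "css": "src/index.css",
    "baseColor": "slate",
    "cssVariables": true
  },
  "aliases": {
    "components": "@/components",
    "utils": "@/lib/utils"
  }
}
```

Once this file exists, `npx shadcn@latest add <component>` should work without the framework check passing.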
### System Info
```bash
GitHub Workspaces
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,509,594,886 | stable-diffusion-webui | [Bug]: TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options' | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
/content
env: TF_CPP_MIN_LOG_LEVEL=1
51 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does not seem to provide it (sources.list entry misspelt?)
env: LD_PRELOAD=/content/libtcmalloc_minimal.so.4
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
libcairo2-dev is already the newest version (1.16.0-5ubuntu2).
pkg-config is already the newest version (0.29.2-1ubuntu3).
aria2 is already the newest version (1.36.0-1).
python3-dev is already the newest version (3.10.6-1~22.04.1).
0 upgraded, 0 newly installed, 0 to remove and 51 not upgraded.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path 'stable-diffusion-webui' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/embeddings/negative' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/models/Lora/positive' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
555fe1|OK | 0B/s|/content/stable-diffusion-webui/models/ESRGAN/4x-UltraSharp.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
--2024-09-06 05:59:24-- https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 533 [text/plain]
Saving to: ‘/content/stable-diffusion-webui/scripts/run_n_times.py’
/content/stable-dif 100%[===================>] 533 --.-KB/s in 0s
2024-09-06 05:59:24 (37.4 MB/s) - ‘/content/stable-diffusion-webui/scripts/run_n_times.py’ saved [533/533]
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/stable-diffusion-webui-huggingface' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-civitai-browser' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-webui-additional-networks' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-webui-controlnet' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/openpose-editor' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-webui-depth-lib' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/posex' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-webui-3d-open-pose-editor' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-webui-tunnels' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/batchlinks-webui' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/stable-diffusion-webui-catppuccin' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/stable-diffusion-webui-rembg' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/stable-diffusion-webui-two-shot' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/sd-webui-aspect-ratio-helper' already exists and is not an empty directory.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: destination path '/content/stable-diffusion-webui/extensions/asymmetric-tiling-sd-webui' already exists and is not an empty directory.
/content/stable-diffusion-webui
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
HEAD is now at f865d3e1 add changelog for 1.4.1
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
HEAD is now at cf1d67a Update modelcard.md
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
1024ca|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11e_sd15_ip2p_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
347dcb|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11e_sd15_shuffle_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
cd5f42|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_canny_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
ca2576|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11f1p_sd15_depth_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
a48655|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_inpaint_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
dfb597|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_lineart_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
f22a80|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_mlsd_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
397dc5|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_normalbae_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
708b46|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_openpose_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
7637f7|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_scribble_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
f6a8a9|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_seg_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
e6d27f|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_softedge_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
aa348b|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15s2_lineart_anime_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
8ac149|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11f1e_sd15_tile_fp16.safetensors
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
05a30c|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11e_sd15_ip2p_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
28a747|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11e_sd15_shuffle_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
3e8f60|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_canny_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
c2b50d|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11f1p_sd15_depth_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
e2c962|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_inpaint_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
7dcf73|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_lineart_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
45d18d|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_mlsd_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
b58cf9|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_normalbae_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
f50a20|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_openpose_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
035707|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_scribble_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
7dc879|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_seg_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
732b82|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_softedge_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
59c20b|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15s2_lineart_anime_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
57d782|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11f1e_sd15_tile_fp16.yaml
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
5215f9|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_style_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
b3f046|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_sketch_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
ee8964|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_seg_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
bebf0f|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_openpose_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
28d091|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_keypose_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
9a2864|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_depth_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
c2538a|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_color_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
3feadd|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_canny_sd14v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
e5199d|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_canny_sd15v2.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
aae782|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_depth_sd15v2.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
3bc04e|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_sketch_sd15v2.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
986c25|OK | 0B/s|/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/t2iadapter_zoedepth_sd15v1.pth
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
8fea2c|OK | 0B/s|/content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Status Legend:
(OK):download completed.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Version: ## 1.4.1
Commit hash: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
Installing requirements
ControlNet init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
ERROR: ld.so: object '/content/libtcmalloc_minimal.so.4' from LD_PRELOAD cannot be preloaded (file too short): ignored.
Launching Web UI with arguments: --listen --xformers --enable-insecure-extension-access --theme dark --gradio-queue --multiple
2024-09-06 05:59:58.551432: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-06 05:59:58.589830: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-06 05:59:58.600425: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-06 06:00:00.268915: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
File "/content/stable-diffusion-webui/launch.py", line 40, in <module>
main()
File "/content/stable-diffusion-webui/launch.py", line 36, in main
start()
File "/content/stable-diffusion-webui/modules/launch_utils.py", line 340, in start
import webui
File "/content/stable-diffusion-webui/webui.py", line 35, in <module>
import gradio
File "/usr/local/lib/python3.10/dist-packages/gradio/__init__.py", line 3, in <module>
import gradio.components as components
File "/usr/local/lib/python3.10/dist-packages/gradio/components.py", line 55, in <module>
from gradio import processing_utils, utils
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 339, in <module>
class AsyncRequest:
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 358, in AsyncRequest
client = httpx.AsyncClient()
File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1397, in __init__
self._transport = self._init_transport(
File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1445, in _init_transport
return AsyncHTTPTransport(
File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 275, in __init__
self._pool = httpcore.AsyncConnectionPool(
TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
### Steps to reproduce the problem
TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
### What should have happened?
1
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
1
### Console logs
```Shell
1
```
### Additional information
_No response_ | bug-report | low | Critical |
2,509,634,156 | node | Add a cleaned-up `child_process` API for child process spawning, like how `fs/promises` cleaned up `fs` | ### What is the problem this feature will solve?
`child_process` is, in my experience, one of the most commonly wrapped APIs by far. It's especially common to wrap it in promise environments.
- [Execa](https://www.npmjs.com/package/execa) currently has 60 million weekly downloads.
- [`cross-spawn`](https://www.npmjs.com/package/cross-spawn) has over 50 million weekly downloads.
- [GitHub even created their own wrapper for use in their Actions product.](https://github.com/actions/toolkit/tree/main/packages/exec#readme)
It all stems from a few major issues:
1. Child processes have several one-time events, and none of them map cleanly to just one promise or even one promise sequence.
2. It's non-trivial to create a wrapper that interops with streams more seamlessly.
3. It's inherently error-prone to use directly, especially when it comes to shell script invocation on Windows.
### What is the feature you are proposing to solve the problem?
I'm thinking of the following API, in `child_process/promises`:
- `result = await promises.exec(command, args?, options?)` to spawn a normal command
- `result = await promises.system(command, args?, options?)` to spawn a shell command
- `result = await promises.fork(command, args?, options?)` to spawn a child with an IPC channel
Options and arguments:
- `command` is what to run.
- `exec`: the binary to run, may be a `file:` URL
- `system`: the shell script to run
- `fork`: the Node script to run, may be a `file:` URL
- `args` is an array of arguments to pass to the script or command and it defaults to the empty array.
- Everything is fully escaped as needed, matching [`cross-spawn`](https://www.npmjs.com/package/cross-spawn)'s behavior.
- Most of the usual `options` properties still work, with the same defaults:
- `options.detached`
- `options.cwd`
- `options.env`
- `options.argv0`
- `options.uid`
- `options.gid`
- `options.signal` is now an object, where keys are the signal names and values are `AbortSignal`s or async iterables that can trigger them.
- `options.ref` determines whether the process starts out ref'd.
- `options.execPath` applies to `system` and `fork` and gives the interpreter path to use. Unlike in `child_process.spawn`, this is not a complete command. Defaults:
- `system`: `"sh"` in \*nix, `process.env.COMSPEC || "cmd.exe"` on Windows
- `fork`: `"node"`
- `options.execArgv` provides the list of arguments to pass before passing the script. Defaults:
- `system`: `["-c"]` on \*nix, `["/d", "/s", "/c"]` on Windows
- `fork`: `[]`
- Set `options.pathLookup` to `true` (default) to use the system path to locate the target binary, `false` to resolve it based on the current working directory. On Unix-like systems, `true` also enables interpreters to work.
- For `system` and `fork`, this is always set to `true` and cannot be configured.
- On Windows, this also enables `%PathExt%` traversal.
- This is useful independently of shell access. Plus, Linux does `pathLookup: true` natively with [`execve`](https://man7.org/linux/man-pages/man2/execve.2.html).
- `options.fds` is an object where each numerical index corresponds to a descriptor to set in the child. This is *not* necessarily an array, though one could be passed. Default is `{0: 0, 1: 1, 2: 2}` to inherit those descriptors. Possible entry values:
- `"close"`: explicitly close FD, cannot be used for FD 0/1/2
- `"null"`: connect to the system null device
- `MessagePort` instance (`fork` only): open an IPC port
- These ports are transferred, so the internal code can just send directly to the linked port.
- When the child's IPC channel closes, the linked `MessagePort` on the other side of the channel is also closed in the same way it is for workers where one end closes.
- Numeric FD, `fs/promises` file handle, `net.Socket`, etc: Pass a given file descriptor directly
- `readableStream`: Expose a writable pipe and read from it using the given stream
- Use `BufferReader` to read from buffers and strings
- `writableStream`: Expose a readable pipe and write into it using the given stream
- Use `BufferWriter` to write into buffers
- Set `options.fds.inherit` to `true` to inherit all FDs not specified in `options.fds`. Default is `false`, in which FDs 0/1/2 are opened to the null device and all others are closed.
The return value is a `ProcessHandle`, a Promise that settles when the child terminates.
- It resolves to an empty object if the command exited normally with a code of 0.
- It throws an error with `exitCode` and `signalCode` properties if it exited with any other code.
- `pid = await handle.spawned` resolves with the PID on spawn and rejects on spawn error.
Additional classes in `stream`:
- `writer = new BufferWriter(target | max)`
- Extends `stream.Writable`
- Pass either a `target` buffer source to fill or a `max` byte length
- `writer.bytesWritten`: Get the number of bytes written
- `writer.consume()`: Reset the write state and return the previously written buffer data. If it's not writing to an external target, it's possible to avoid the buffer copy.
- `writer.toString(encoding?)` is sugar for `writer.consume().toString(encoding?)`
- It's recommended to implement this as an alternate stream mode to reduce memory usage and vastly improve performance.
- `reader = new BufferReader(source, encoding?)`
- Extends `stream.Readable`
- Pass a `source` string (with optional encoding) or buffer source to read from
- `reader.bytesRead`: Get the number of bytes read
- `Readable.from(buffer | string)` should return instances of this instead
- It's recommended to implement this as an alternate stream mode to reduce memory usage and vastly improve performance.
- `duplex.reader()`, `duplex.writer()`: Return the read or write half of a duplex stream, sharing the same internal state
And in `process`:
- `port = process.ipc(n=3)`: Get a (cached) `MessagePort` for a given IPC descriptor, throwing if it's not a valid descriptor.
- Having multiple IPC ports can be useful for delimiting control messages from normal messages.
- `result = await process.inspectFD(n)` accepts an FD and returns its type and read/write state.
- File: `{kind: "file", readable, writable}`
- On \*nix, `readable` and `writable` can be determined via `fcntl(F_GETFL, fd)` (it's been in the POSIX standard for a couple decades)
- On Windows, `readable` and `writable` can be determined via two calls to [`ReOpenFile`](https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-reopenfile) or one call to [`NtQueryObject` with class `ObjectBasicInformation`](https://learn.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryobject). (They say it can change, but it may be possible to get a stability promise out of them since the page hasn't been modified in over 6 years.)
- Socket: `{kind: "socket", readable, writable, type: "stream-client" | "stream-server" | "dgram"}`
- Terminal stream: `{kind: "tty", readable, writable, rows, columns}`
- This can be tested and extracted in one `ioctl` syscall on Linux
- IPC port: `{kind: "ipc", readable, writable}`
- Other: `{kind: "unknown"}`
- \*nix pipes are reported as being of this type
- This is useful in conjunction with systemd for verifying that a given FD is actually open before attempting to use it.
Things I'm intentionally leaving out:
- `options.serialization` - it's always `"advanced"`. This both brings it to close parity with other `MessagePort`-related APIs, and it speeds up message sending since it's already of the correct input format.
- `options.timeout` - just do `signal: {SIGTERM: AbortSignal.timeout(ms)}`.
- `options.windowsHide` - that behavior is just always on as that's what people would generally just expect.
- `options.windowsVerbatimArguments` - just use `system` and string concatenation.
- Per-FD `"inherit"` constants in `stdio` - you can just use the descriptor numbers themselves for that.
- `"pipe"` - use a passthrough stream for that.
- `options.encoding` - that's a per-descriptor setting now.
- A global `"close"` event - it's better to track that per-stream anyways. Plus, it's one of those incredibly error-prone points.
For a summary in the form of TypeScript definitions:
```ts
declare module "child_process/promises" {
export interface WaitError extends Error {
exitCode: number | undefined
signalCode: string | undefined
}
type SignalSource =
| AbortSignal
| AsyncIterable<void>
type SignalMap = {[Signal in NodeJS.Signals]?: SignalSource}
type FdSource =
| "close"
| "null"
| MessagePort
| number
| import("node:net").Socket
| import("node:fs/promises").FileHandle
| Readable | Writable
interface FdMap {
[fd: number]: FdSource
inherit?: boolean
}
interface Options {
detached?: boolean
cwd?: string
env?: Record<string, string>
argv0?: string
uid?: number
gid?: number
signal?: SignalMap
ref?: boolean
pathLookup?: boolean
execPath?: string
execArgv?: string
fds?: FdMap
}
interface ProcessHandle extends Promise<void> {
readonly spawned: Promise<number>
}
export function exec(command: string | URL, options: Options): ProcessHandle
export function exec(command: string | URL, args?: string[], options?: Options): ProcessHandle
export function system(command: string, options: Options): ProcessHandle
export function system(command: string, args?: string[], options?: Options): ProcessHandle
export function fork(command: string | URL, options: Options): ProcessHandle
export function fork(command: string | URL, args?: string[], options?: Options): ProcessHandle
}
declare module "stream" {
class BufferReader extends Readable {
constructor(source: BufferSource)
constructor(source: string, encoding?: NodeJS.BufferEncoding)
readonly bytesRead: number
}
class BufferWriter extends Writable {
constructor(target: BufferSource)
constructor(maxBytes: number)
readonly bytesWritten: number
consume(): Buffer
toString(encoding?: NodeJS.BufferEncoding): string
}
interface Duplex {
reader(): Readable
writer(): Writable
}
}
declare module "process" {
export type InspectFDResult =
| {kind: "file", readable: boolean, writable: boolean}
| {kind: "socket", readable: true, writable: true, type: "stream-client" | "stream-server" | "dgram"}
| {kind: "tty", readable: boolean, writable: boolean, rows: number, columns: number}
| {kind: "ipc", readable: false, writable: false}
| {kind: "unknown", readable: false, writable: false}
export function ipc(fd?: number): MessagePort
export function inspectFD(fd?: number): Promise<InspectFDResult>
}
```
### What alternatives have you considered?
I considered:
- Simple handles returned from a promise that resolves on spawn. They could expose `.ref()`, `.unref()`, `.pid`, `.wait()`, and `.raise(signal?)`. The main problem is that, for the common case, this requires `await start(...).then(h => h.wait())`.
- Capturing stderr in the error message. You may want to pass it through (very common), and it could be extremely long. There are ways to work around this, but it's simpler to just not capture it.
- `handle.ipc` as a single port. I don't see why one can't have multiple IPC ports, and it also simplifies the API *and* the implementation.
- Something like Execa. This is just too complicated to justify the effort.
- The status quo. It's consistently very awkward, hence the feature request. | child_process,feature request | low | Critical |
2,509,648,183 | pytorch | DISABLED test_inplace_custom_op (__main__.InplacingTests) | Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inplace_custom_op&suite=InplacingTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29761170872).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will still be green, but the relevant failures will be harder to find in the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inplace_custom_op`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "inductor/test_perf.py", line 985, in test_inplace_custom_op
self.assertExpectedInline(count_numel(f, x, out), """21""")
File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 2925, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/expecttest/__init__.py", line 351, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/expecttest/__init__.py", line 316, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/expecttest/__init__.py", line 388, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.8/lib/python3.8/unittest/case.py", line 1292, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.8/lib/python3.8/unittest/case.py", line 753, in fail
raise self.failureException(msg)
AssertionError: '21' != '0'
- 21
+ 0
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py InplacingTests.test_inplace_custom_op
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,509,653,176 | vscode | Do not disable `local/code-import-patterns` ESLint rule | There should exist no reason to disable the `local/code-import-patterns`. You either need to move things into the right layer or update the configuration of the rule
- [x] src/vs/base/parts/request/test/browser/request.test.ts @chrmarti
6,1: // eslint-disable-next-line local/code-import-patterns
8,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/editor/standalone/browser/standaloneTreeSitterService.ts @alexr00
6,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/platform/terminal/common/capabilities/commandDetection/promptInputModel.ts @Tyriar
13,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/platform/terminal/common/capabilities/commandDetection/terminalCommand.ts @Tyriar
10,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/platform/terminal/common/capabilities/bufferMarkCapability.ts @Tyriar
10,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/platform/terminal/common/capabilities/commandDetectionCapability.ts @Tyriar
17,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/platform/terminal/common/capabilities/partialCommandDetectionCapability.ts @Tyriar
10,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/platform/terminal/common/xterm/shellIntegrationAddon.ts @Tyriar
15,1: // eslint-disable-next-line local/code-import-patterns
20,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/codeEditor/browser/inspectEditorTokens/inspectEditorTokens.ts @alexr00
35,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminal/browser/xterm/markNavigationAddon.ts @Tyriar
19,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminal/browser/baseTerminalBackend.ts @Tyriar
17,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminal/browser/terminal.contribution.ts @Tyriar
59,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminal/browser/terminalContribExports.ts @Tyriar
8,1: // eslint-disable-next-line local/code-import-patterns
10,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminal/browser/terminalInstance.ts @Tyriar
95,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminal/browser/terminalProcessManager.ts @Tyriar
46,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/terminalContrib/quickFix/browser/quickFixAddon.ts @Tyriar
7,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/contrib/themes/test/node/colorRegistry.releaseTest.ts @aeschli
16,1: // eslint-disable-next-line local/code-import-patterns
- [x] src/vs/workbench/services/treeSitter/browser/treeSitterTokenizationFeature.ts @alexr00
6,1: // eslint-disable-next-line local/code-import-patterns
| important,debt | low | Major |
2,509,670,844 | vscode | Allow chat anchors to point to symbols | For chat anchors, it would be nice to render anchors to symbols in a distinct way and provide special UX for them
My proposal is to extend `ChatResponseAnchorPart` to also allow the `value` to be a `SymbolInformation`:
```ts
export class ChatResponseAnchorPart {
/**
* The target of this anchor.
*
* If this is a {@linkcode Uri} or {@linkcode Location}, this is rendered as a normal link.
*
* If this is a {@linkcode SymbolInformation}, this is rendered as a symbol link.
*/
value: Uri | Location | SymbolInformation;
/**
* An optional title that is rendered with value.
*/
title?: string;
/**
* Create a new ChatResponseAnchorPart.
* @param value The target of this anchor.
* @param title An optional title that is rendered with value.
*/
constructor(value: Uri | Location | SymbolInformation, title?: string);
}
```
I selected `SymbolInformation` as this type clearly identifies that we are pointing to a symbol and not just a location in a file. It also includes some useful metadata for rendering, such as the `kind`.
Note that this proposal is intentionally simple. I'm trying to avoid the whole nest of complexities we'll get into if we shift the burden of resolving symbol locations onto VS Code (i.e. starting with a `uri` and `symbolName` and having VS Code figure out the kind and location of the symbol)
I'll be proposing a way to handle that in a follow up API proposal | feature-request,api-proposal,chat | low | Minor |
2,509,672,093 | PowerToys | Mouse smoothing | ### Description of the new feature / enhancement
Smooth the mouse input using a low-pass filter to allow accurate positioning for people with wobbly fingers (like me).
### Scenario when this would be used?
I use a mouse with a ball on top, that I control with my thumb. As a power user, I like to do very accurate pointing (like when drawing). With age, my thumb became wobbly. I'm pretty sure I am not alone.
### Supporting information
Note that the feature where the speed of the mouse influences the multiplication factor of the mouse movement (allowing quick flicks over bigger distances) should still work; otherwise no one is going to use it, without understanding why.
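For reference, the standard building block here is a one-pole low-pass filter (an exponential moving average) over the raw per-event deltas, with the smoothing factor ramped toward 1 at high speed so that fast flicks, and the speed-based acceleration mentioned above, still pass through. A sketch in JavaScript (PowerToys itself is C++/C#; `baseAlpha` and `fullSpeed` are made-up placeholder parameters):

```javascript
// One-pole low-pass (exponential moving average) over mouse deltas.
// alpha in (0, 1]: 1 = no smoothing, smaller = heavier smoothing.
// To keep flicks responsive, alpha ramps toward 1 as speed grows.
function makeSmoother({ baseAlpha = 0.2, fullSpeed = 40 } = {}) {
  let sx = 0;
  let sy = 0;
  return function smooth(dx, dy) {
    const speed = Math.hypot(dx, dy); // pixels per input event
    const t = Math.min(speed / fullSpeed, 1); // 0 = slow, 1 = fast
    const alpha = baseAlpha + (1 - baseAlpha) * t;
    sx += alpha * (dx - sx);
    sy += alpha * (dy - sy);
    return [sx, sy];
  };
}
```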
Some experimenting may be needed to get the parameters right. | Needs-Triage | low | Minor |