id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,512,771,269 | deno | Add a `serve` field to `deno.json` to specify the serve entry | How about adding a new field `serve` to `deno.json` to specify the serve entry?
```diff
// deno.json
{
+ "serve": "source/app.ts"
}
```
After this, users should be able to start the service directly by running `deno serve`, which is more convenient than a `deno task serve`. We could also add support for `deno serve` options in `deno.json`, the way `deno fmt` does.
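If `serve` later needs options, a hypothetical object form could mirror how the `fmt`/`lint` sections work — note the `entry` and `port` field names here are invented for illustration, not part of any proposal:

```jsonc
// deno.json — hypothetical sketch
{
  "serve": {
    "entry": "source/app.ts",
    "port": 8000
  }
}
```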
What do you think? | suggestion | low | Minor |
2,512,779,596 | vscode | Exploration on how to improve the diff algorithm | null | feature-request,notebook-diff,exploration | low | Minor |
2,512,840,447 | pytorch | Why does using nn.DataParallel add a dimension after the nn.Parameter parameter? Then it results in a dimension mismatch and an error when multiplying. | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn

# COMPLEXTYPE is defined elsewhere in the original code (a complex dtype such as torch.cfloat)

class RFNO(nn.Module):
    def __init__(self, out_channels, modes1, modes2):
        super(RFNO, self).__init__()
        self.out_channels = out_channels
        self.modes1 = modes1
        self.modes2 = modes2
        self.scale = (1 / out_channels)
        self.weights0 = self.scale * torch.rand(1, out_channels, 1, 1, dtype=COMPLEXTYPE)
        self.weights1 = self.scale * torch.rand(1, out_channels, self.modes1, self.modes2, dtype=COMPLEXTYPE)
        self.weights2 = self.scale * torch.rand(1, out_channels, self.modes1, self.modes2, dtype=COMPLEXTYPE)
        self.weights0 = nn.Parameter(self.weights0)
        self.weights1 = nn.Parameter(self.weights1)
        self.weights2 = nn.Parameter(self.weights2)

    # Complex multiplication
    def compl_mul2d(self, input, weights):
        return torch.einsum("bixy,ioxy->boxy", input, weights)

    def forward(self, x):
        batchsize = x.shape[0]
        # Move weights to the same device as input `x`
        weights0 = self.weights0.to(x.device)
        weights1 = self.weights1.to(x.device)
        weights2 = self.weights2.to(x.device)
        x_ft = torch.fft.rfft2(x)
        x_ft = x_ft * weights0
        # Multiply relevant Fourier modes
        out_ft = torch.zeros(batchsize, self.out_channels, x.size(-2), x.size(-1)//2 + 1, dtype=COMPLEXTYPE, device=x.device)
        out_ft[:, :, :self.modes1, :self.modes2] = self.compl_mul2d(x_ft[:, :, :self.modes1, :self.modes2], weights1)
        out_ft[:, :, -self.modes1:, :self.modes2] = self.compl_mul2d(x_ft[:, :, -self.modes1:, :self.modes2], weights2)
        # Return to physical space
        print(out_ft.shape)
        x = torch.fft.irfft2(out_ft, s=(x.size(-2), x.size(-1)))
        return x
```
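The `compl_mul2d` contraction is shape-correct as long as both operands stay 4-D; a quick NumPy check with small hypothetical sizes (batch 2, 1 input channel, 3 output channels, 4×5 modes) shows the expected shapes:

```python
import numpy as np

# Hypothetical scaled-down shapes: (batch, in_ch, x, y) and (in_ch, out_ch, x, y)
inp = np.ones((2, 1, 4, 5))
w = np.ones((1, 3, 4, 5))

out = np.einsum("bixy,ioxy->boxy", inp, w)
print(out.shape)  # (2, 3, 4, 5) — any extra trailing dimension on `w` breaks this pattern
```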
### Versions
When I'm not using nn.DataParallel, the shape of each parameter is as follows:
#weights0.shape, weights1.shape, weights2.shape = torch.Size([1, 64, 1, 1]) torch.Size([1, 64, 32, 32]) torch.Size([1, 64, 32, 32])
# x.shape = torch.Size([32, 1, 320, 320])
# x_ft.shape = torch.Size([32, 1, 320, 161])
When I'm using nn.DataParallel, the shape of each parameter is as follows:
#weights0.shape, weights1.shape, weights2.shape = torch.Size([1, 64, 1, 1,2]) torch.Size([1, 64, 32, 32,2]) torch.Size([1, 64, 32, 32,2])
# x.shape = torch.Size([32, 1, 320, 320])
# x_ft.shape = torch.Size([32, 1, 320, 161])
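The error is exactly what that extra trailing dimension of size 2 produces under broadcasting (DataParallel appears to scatter the complex parameters as real/imaginary pairs, hence the trailing 2). A NumPy sketch with scaled-down, hypothetical shapes reproduces the mismatch:

```python
import numpy as np

x_ft = np.ones((2, 1, 8, 5))      # stand-in for the (32, 1, 320, 161) FFT output
w_ok = np.ones((1, 4, 1, 1))      # shape without DataParallel: broadcasts fine
w_bad = np.ones((1, 4, 1, 1, 2))  # shape under DataParallel: extra trailing dim of 2

print((x_ft * w_ok).shape)  # (2, 4, 8, 5)
try:
    x_ft * w_bad
except ValueError as e:
    print("broadcast error:", e)  # sizes 5 and 2 collide in the last dimension
```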
Then it results in a dimension mismatch and an error when multiplying:
Original Traceback (most recent call last):
File "/home/user/anaconda3/envs/opc/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in _worker
output = module(*input, **kwargs)
File "/home/user/anaconda3/envs/opc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/anaconda3/envs/opc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/opc/lithobench-DP/lithobench/litho/doinn.py", line 168, in forward
br0 = self.rfno(F.avg_pool2d(x, kernel_size=8, stride=8))
File "/home/user/anaconda3/envs/opc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/anaconda3/envs/opc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/opc/lithobench-DP/lithobench/litho/doinn.py", line 124, in forward
x_ft = x_ft * weights0
RuntimeError: The size of tensor a (161) must match the size of tensor b (2) at non-singleton dimension 4 | triaged,module: data parallel | low | Critical |
2,512,868,123 | rust | `rustdoc::broken_intra_doc_links` won't catch obvious invalid links |
Lint `rustdoc::broken_intra_doc_links` warns by default, but it cannot catch obvious invalid links in the `fanotify` module of the Nix crate.
> See the section `Steps to reproduce` for how to reproduce this behavior.
I expected to see this happen: invalid links are caught.
Instead, this happened: they are not caught.
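To make the report concrete, this is the kind of link involved — a minimal sketch where `DoesNotExist` is a made-up item name:

```rust
/// See [`DoesNotExist`] — an obviously invalid intra-doc link that
/// `rustdoc::broken_intra_doc_links` should warn about.
fn demo() -> i32 {
    42
}

fn main() {
    // rustc compiles this without complaint; only rustdoc resolves the link.
    println!("{}", demo());
}
```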
### Meta
`rustc --version --verbose`:
```
$ rustc --version --verbose
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: aarch64-apple-darwin
release: 1.80.1
LLVM version: 18.1.7
```
<details><summary>Backtrace</summary>
<p>
```
no backtrace
```
</p>
</details>
#### Context:
Today I found some invalid links in the Nix crate and fixed them in https://github.com/nix-rust/nix/pull/2493; I was surprised that they weren't caught by this lint.
#### Steps to reproduce
```sh
$ git clone https://github.com/nix-rust/nix.git
$ cd nix
$ git reset --hard 82301035e4af3ca7903ba1abaf1955b2de61c8d5 # This is a commit where the invalid links exist
$ cargo doc --target x86_64-unknown-linux-gnu --all-features
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.03s
Generated /Users/steve/Documents/workspace/test/target/x86_64-unknown-linux-gnu/doc/nix/index.html
$ echo $?
0
```
| T-rustdoc,A-lints,C-feature-request,A-intra-doc-links | low | Critical |
2,512,902,343 | godot | Editor crash while importing resources. | ### Tested versions
Tested versions: 4.4.dev and 4.3 stable
### System information
Windows 10
### Issue description
The editor crashes while importing assets for the first time, i.e. when the `.godot` folder is created for the first time.
As a workaround, delete the `.godot` folder, open the project in `4.2`, then close it and open it again in `4.3` or `4.4`; it should work fine.
### Steps to reproduce
Open the minimal reproduction project and it will crash.
Maybe this is the issue (see the screenshot below). I have tested with another custom build after updating it to `4.4.dev`.

### Minimal reproduction project (MRP)
[PixelDesigner.zip](https://github.com/user-attachments/files/16925427/PixelDesigner.zip) | bug,topic:gdscript,crash | low | Critical |
2,513,093,046 | vscode | Closing a terminal that was reopen in a new window doesn't dispose the terminal object |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.93.0
- OS Version: Darwin arm64 or Windows 11
Steps to Reproduce:
1. Open a terminal through the VS Code API `vscode.window.createTerminal` and keep a reference to the terminal object. (In my use case I have a task in the terminal that runs until the terminal is closed.)
2. Move the terminal into a new window, then close the terminal by closing that window. The terminal is moved back to the first VS Code window; closing it there again shows a terminate-process dialog.
3. The kept terminal object shows that the terminal still exists in `vscode.window.terminals`, and its `exitCode` is `undefined`.
I would expect the terminal to be disposed or the exit code to be updated.
Note: if I right-click the terminal in the new window and choose `Close`, the terminal is disposed properly.
| bug,confirmation-pending,terminal-process | low | Critical |
2,513,149,294 | create-react-app | Vulnerability Issues with postcss and nth-check in react-scripts Dependencies | I am encountering a persistent vulnerability issue with react-scripts, related to the nth-check package, in our Prisma Cloud scan.
Despite making multiple attempts to update the dependencies manually and exploring various resolutions, the vulnerability warning remains.
Node.js version: we tried all possible Node versions, including v16.20.2, v18.9.0, v19.8.1, and v20.16.0.
React version: 18.2.0
Methods we tried to fix the issue:
1) We are using the latest react-scripts version: 5.0.1.
2) We manually installed the latest versions of postcss and nth-check.
3) We tried to override both dependencies in package.json:
"overrides": {
"react-scripts": {
"postcss":"8.4.31",
"nth-check":"2.0.1"
}
} and
"overrides": {
"postcss":"8.4.31",
"nth-check":"2.0.1"
}
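For completeness, when a project uses Yarn the equivalent of npm's `overrides` is the `resolutions` field — a sketch with the same version numbers as above:

```json
{
  "resolutions": {
    "**/nth-check": "2.1.1",
    "**/postcss": "8.4.31"
  }
}
```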
We also tried the latest nth-check version 2.1.1 and postcss 8.4.45 in the overrides.
We ran `npm audit` and got the following results:
```
nth-check <2.0.1
Severity: high
Inefficient Regular Expression Complexity in nth-check - https://github.com/advisories/GHSA-rp65-9cf3-cjxr
fix available via `npm audit fix --force`
Will install react-scripts@3.0.1, which is a breaking change
node_modules/svgo/node_modules/css-select/node_modules/nth-check
  css-select <=3.1.0
  Depends on vulnerable versions of nth-check
  node_modules/svgo/node_modules/css-select
    svgo 1.0.0 - 1.3.2
    Depends on vulnerable versions of css-select
    node_modules/svgo
      @svgr/plugin-svgo <=5.5.0
      Depends on vulnerable versions of svgo
      node_modules/@svgr/plugin-svgo
        @svgr/webpack 4.0.0 - 5.5.0
        Depends on vulnerable versions of @svgr/plugin-svgo
        node_modules/@svgr/webpack
          react-scripts >=2.1.4
          Depends on vulnerable versions of @svgr/webpack
          Depends on vulnerable versions of resolve-url-loader
          node_modules/react-scripts

postcss <8.4.31
Severity: moderate
PostCSS line return parsing error - https://github.com/advisories/GHSA-7fh5-64p2-3v2j
fix available via `npm audit fix --force`
Will install react-scripts@3.0.1, which is a breaking change
node_modules/resolve-url-loader/node_modules/postcss
  resolve-url-loader 0.0.1-experiment-postcss || 3.0.0-alpha.1 - 4.0.0
  Depends on vulnerable versions of postcss
  node_modules/resolve-url-loader

request *
Severity: moderate
Server-Side Request Forgery in Request - https://github.com/advisories/GHSA-p8p7-x288-28g6
Depends on vulnerable versions of tough-cookie
fix available via `npm audit fix --force`
Will install jest@29.7.0, which is a breaking change
node_modules/request
  jsdom 0.1.20 || 0.2.0 - 16.5.3
  Depends on vulnerable versions of request
  Depends on vulnerable versions of request-promise-native
  Depends on vulnerable versions of tough-cookie
  node_modules/zem/node_modules/jsdom
    jest-environment-jsdom 10.0.2 - 25.5.0
    Depends on vulnerable versions of jsdom
    node_modules/zem/node_modules/jest-environment-jsdom
  request-promise-core *
  Depends on vulnerable versions of request
  node_modules/request-promise-core
  request-promise-native >=1.0.0
  Depends on vulnerable versions of request
  Depends on vulnerable versions of request-promise-core
  Depends on vulnerable versions of tough-cookie
  node_modules/request-promise-native
```
We then tried `npm audit fix --force`, but the issue is still not resolved. Please update the react-scripts dependencies; the package was last updated 2 years ago.
| needs triage | low | Critical |
2,513,169,165 | PowerToys | Viewing Issues | ### Microsoft PowerToys version
0.84.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
The UI popups of Color Picker, File Renamer, and File Resizer are not showing properly.
[SEE THE VIDEO](https://drive.google.com/file/d/1A31sbGvskMSYb-LZ7lHoFGtZtAW89ig9/view?usp=drive_link)
| Issue-Bug,Needs-Triage | low | Minor |
2,513,171,505 | vscode | Snippets: False usage of variable creates regex parsing error | Type: <b>Bug</b>
Good day,
I have found that incorrect usage of variables in snippets can lead to side effects in regex parsing.
Below is a minimized example of the actually used snippet. The snippet's last line is incorrect, since the variable does not exist.
The regex used here creates a header guard from the current file in upper case, with the directory up to "Source" cut off.
```
"C++ header guard": {
//"scope": "C,C++",
"prefix": "test_guard",
"body": [
"// Define to prevent recursive inclusion -------------------------------------",
"",
"#ifndef ${TM_FILEPATH/(?:^.*\\\\Source\\\\)?(\\w+)\\W?/${1:/upcase}_/g}",
"#define ${TM_FILEPATH/(?:^.*\\\\Source\\\\)?(\\w+)\\W?/${1:/upcase}_/g}",
"",
"//!< @todo ${file_name} description",
],
"description": "C++ header guard",
//"isFileTemplate": true
}
```
The expected result for the test file is:
```
// Define to prevent recursive inclusion -------------------------------------
#ifndef COMMON_CONFIGURATION_TEST_HPP_
#define COMMON_CONFIGURATION_TEST_HPP_
//!< @todo file_name description
```
but for whatever reason, comma separators are inserted.
```
// Define to prevent recursive inclusion -------------------------------------
#ifndef COMMON,_CONFIGURATION,_TEST,_HPP,_
#define COMMON,_CONFIGURATION,_TEST,_HPP,_
//!< @todo file_name description
```
Removing the wrong usage of the variable in the last line of the snippet also removes this issue.
But it is very unexpected that the regex result depends on a completely different line in the snippet. It took me a long time to catch this dependency, believing the regex I used was wrong.
I would be happy if you could have a look at whether this dependency can be removed, for developers falling into the same trap in the future.
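For reference, the transform itself behaves as intended when evaluated outside VS Code — a quick check of the same regex in Python, with a hypothetical Windows path standing in for `TM_FILEPATH`, produces the expected guard:

```python
import re

# Hypothetical path, as TM_FILEPATH would provide it
path = r"C:\repo\Source\Common\Configuration\Test.hpp"

# Same pattern as the snippet transform (one level of backslash escaping removed),
# with ${1:/upcase}_ modeled as an upper-cased group plus underscore.
guard = re.sub(r"(?:^.*\\Source\\)?(\w+)\W?", lambda m: m.group(1).upper() + "_", path)
print(guard)  # COMMON_CONFIGURATION_TEST_HPP_
```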
VS Code version: Code 1.93.0 (4849ca9bdf9666755eb463db297b69e5385090e3, 2024-09-04T13:02:38.431Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-1260P (16 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.71GB (18.94GB free)|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (33)</summary>
Extension|Author (truncated)|Version
---|---|---
gitlens|eam|15.4.0
boost-test-adapter-xnely|Eri|3.2.8
jenkins-pipeline-linter-connector|jan|1.2.0
hex-fmt|ker|1.0.0
jenkins-doc|Maa|1.7.0
cortex-debug|mar|1.12.1
debug-tracker-vscode|mcu|0.0.15
memory-view|mcu|0.0.25
peripheral-viewer|mcu|1.4.6
rtos-views|mcu|0.0.7
rainbow-csv|mec|3.12.0
git-graph|mhu|1.30.0
debugpy|ms-|2024.10.0
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.2
jupyter|ms-|2024.8.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.114.1
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.2
cmake-tools|ms-|1.19.51
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
remote-explorer|ms-|0.4.3
rust-analyzer|rus|0.3.2104
code-spell-checker|str|3.0.1
cmake|twx|0.0.17
vscode-lldb|vad|1.10.0
vscode-icons|vsc|12.8.0
vscode-proto3|zxh|0.5.5
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
g316j359:31013175
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31104044
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31119336
wkspc-onlycs-t:31132770
wkspc-ranged-c:31125598
fje88620:31121564
```
</details>
<!-- generated by issue reporter --> | bug,snippets | low | Critical |
2,513,211,846 | node | Add a new env var `UV_LOOP_ENABLE_IO_URING_SQPOLL` | libuv disables SQPOLL by default and leaves it to the user/admin to enable/disable. However, I believe it's currently not possible to do this in node.
We should add a check for a `UV_LOOP_ENABLE_IO_URING_SQPOLL` env flag and call `uv_loop_configure` with the libuv flag `UV_LOOP_ENABLE_IO_URING_SQPOLL`.
Refs: https://github.com/libuv/libuv/pull/4492 | libuv | low | Major |
2,513,213,546 | flutter | Integration driver will not show the correct result when testWidget has retries | ### Steps to reproduce
1. Include the test suite `group('', (){ ... }, retry: 2)`
2. run integration test via `flutter drive`
3. If the test failed once but succeeded on retry
4. The overall result of the test will NOT show `All tests passed.` with `exit(0)`
### Expected results
When a test case passes on retry, the result should not print the error, and the run should exit(0).
### Actual results
When a test case passes on retry, the run prints the errors and exits with exit(1).
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video

### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.16.9, on macOS 14.6.1 23G93 darwin-arm64, locale en-SG)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.0.1)
[✓] VS Code (version 1.92.2)
```
</details>
| a: tests,tool,framework,t: flutter driver,f: integration_test,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Critical |
2,513,219,715 | transformers | how can calculate the predict score of every pixel use mask2former swin-l model? | ### Feature request
I have downloaded the Mask2Former Swin-L model from the Hugging Face website and used the example code to get the segmentation map of an image.
The example code is:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
The code can get the segmentation map, but not the predicted score of every pixel.
How can I add code to calculate the predicted score of every pixel?
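One common way to get a per-pixel score map from the two heads (this mirrors typical Mask2Former semantic post-processing, but it is a sketch, not the library's exact API): softmax the class logits, drop the "no object" class, sigmoid the mask logits, and contract over the queries. A NumPy version with tiny hypothetical shapes:

```python
import numpy as np

def per_pixel_scores(class_logits, mask_logits):
    # class_logits: (num_queries, num_classes + 1), mask_logits: (num_queries, H, W)
    cls = np.exp(class_logits) / np.exp(class_logits).sum(-1, keepdims=True)  # softmax
    cls = cls[:, :-1]                          # drop the "no object" class
    msk = 1.0 / (1.0 + np.exp(-mask_logits))   # sigmoid over mask logits
    return np.einsum("qc,qhw->chw", cls, msk)  # (num_classes, H, W) per-pixel scores

scores = per_pixel_scores(np.zeros((2, 4)), np.zeros((2, 3, 3)))
print(scores.shape)  # (3, 3, 3); scores.max(axis=0) gives the best score per pixel
```

With the real model outputs you would pass `class_queries_logits[0]` and the masks upsampled to the target size in place of the zero arrays.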
### Motivation
No motivation.
### Your contribution
A little contribution | Feature request,Vision | low | Minor |
2,513,226,603 | PowerToys | Chinese shows abnormality and has existed for a long time | ### Microsoft PowerToys version
0.84.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
Some functions, such as Always On Top and PowerRename, have Chinese translations of their names, but the left menu does not use them while the function descriptions do, which is very strange; please optimize this as soon as possible. There are also some functions that have no Chinese name at all; please name them too, otherwise it is very confusing to communicate with Chinese users.


### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,513,350,145 | rust | [BUG] `llvm-cov` warning `mismatched data` when impl const fn | # bug
`llvm-cov` warning `mismatched data` when impl const fn
## reproduce
https://github.com/loynoir/reproduce-rust-130139
```rs
pub use bar::Bar;
mod bar {
pub struct Bar<T>(T);
impl Bar<i32> {
pub const unsafe fn from_unchecked(value: i32) -> Self {
Bar(value)
}
pub const fn get(&self) -> i32 {
self.0
}
}
}
```
## workaround
```rs
pub use bar::{Bar, bar_get_i32};
mod bar {
pub struct Bar<T>(T);
impl Bar<i32> {
pub const unsafe fn from_unchecked(value: i32) -> Self {
Bar(value)
}
pub fn get(&self) -> i32 {
self.0
}
}
pub fn bar_get_i32(bar: &Bar<i32>) -> i32 {
bar.0
}
}
```
## related
`llvm-cov` warning `mismatched data` when double slash comment above `use`
https://github.com/rust-lang/rust/issues/130065
`llvm-cov` warning `mismatched data` when triple slash safety comment above `unsafe fn`
https://github.com/rust-lang/rust/issues/130097
| A-LLVM,T-compiler,C-bug,A-code-coverage,S-has-mcve | low | Critical |
2,513,365,576 | flutter | Cupertino Text Selection Toolbar has wrong position | ### Steps to reproduce
1. Clone this repository: https://github.com/ricardoboss/cupertino_selection_bug
2. Run the app ~~on a physical iOS device (cannot reproduce in Simulator)~~
3. Enter some text in the search bar
4. Hold the text until the magnifier appears
5. Let go of the text (the toolbar should appear at the bottom of the search field)
### Expected results
The toolbar should have its arrow at the top, not at the bottom and it should be positioned a bit lower.
### Actual results
The toolbar touches the edge of the text field and the arrow is on the wrong side.
### Code sample
<details open><summary>Code sample</summary>
```dart
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key});
@override
Widget build(BuildContext context) {
return const CupertinoPageScaffold(
navigationBar: CupertinoNavigationBar(
middle: Text('Cupertino App'),
),
child: SafeArea(
child: Padding(
padding: EdgeInsets.all(8.0),
child: Column(
children: [
CupertinoSearchTextField(),
Expanded(
child: Center(
child: Text('Cupertino Text Selection Toolbar Bug'),
),
),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/37cbfb56-e3c5-4842-9044-5c0bade59258
</details>
### Logs
No relevant logs.
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on macOS 14.6.1 23G93 darwin-arm64, locale en-DE)
    • Flutter version 3.24.2 on channel stable at /Users/ricardo/repos/flutter/flutter
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision 4cf269e36d (5 days ago), 2024-09-03 14:30:00 -0700
    • Engine revision a6bd3f1de1
    • Dart version 3.5.2
    • DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
    • Android SDK at /Users/ricardo/Library/Android/sdk
    • Platform android-35, build-tools 35.0.0
    • Java binary at: /Users/ricardo/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
    • Xcode at /Applications/Xcode-15.4.0.app/Contents/Developer
    • Build 15F31d
    • CocoaPods version 1.15.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
    • Android Studio at /Users/ricardo/Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.79.2)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension can be installed from:
      https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (4 available)
    • iPad Pro 12,9″ 6th Gen (RBO) (mobile) • 00008112-0010094E0181A01E • ios • iOS 18.0 22A5350a
    • macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
    • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
    • Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.120
[✓] Network resources
    • All expected network resources are available.
• No issues found!
```
</details>
| platform-ios,framework,f: cupertino,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.25 | low | Critical |
2,513,370,909 | ollama | Support tool/tool call ids when multiple tool calls are requested. | ### What is the issue?
When multiple tools are provided, it is often the case that Ollama will respond with multiple `tool_calls` to be made. In that case, I am guessing we are expected to answer with as many `{'role': 'tool', 'content': '...'}` messages.
How can one then specify which of these messages corresponds to which tool call? I *think* OpenAI provides an id for each of the tool calls, which the responses are supposed to use.
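For reference, this is the OpenAI-style shape: the assistant message carries an id on each tool call, and each tool message echoes it back via `tool_call_id` (the names and values here are purely illustrative):

```json
[
  {
    "role": "assistant",
    "tool_calls": [
      {"id": "call_0", "type": "function", "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}},
      {"id": "call_1", "type": "function", "function": {"name": "get_time", "arguments": "{\"city\": \"Paris\"}"}}
    ]
  },
  {"role": "tool", "tool_call_id": "call_0", "content": "18°C"},
  {"role": "tool", "tool_call_id": "call_1", "content": "14:32"}
]
```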
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.9 | bug,api | low | Major |
2,513,397,146 | kubernetes | Coding for testcase of function in secret | I would like to know whether it is necessary to write the testcode for the various function from secret.go as many of the function is not having the testcase in secret_test.go I would like to know from the community whether we can proceed ahead for writing the testcase of the function. some example are totalsecretbytes function etc. I think writing test for every function is important and useful . Sometimes i come to know from community after my work that this test and this modification is not necessary or it cannot harm the working of it.
If i can write testcase for every function declared in specific file example secret.go After community confirmation i would like to work on this issue.
Special request from community. Please don't raise the PR until community confirm on working on it.
| sig/storage,lifecycle/rotten,needs-triage | low | Major |
2,513,403,818 | ui | [bug]: Unable to install on Laravel 11.x | ### Describe the bug
Cannot install shadcn-ui on Laravel Inertia React when using JavaScript (.jsx)
✔ Preflight checks.
✔ Verifying framework. Found Laravel.
✔ Validating Tailwind CSS.
✔ Validating import alias.
No import alias found in your tsconfig.json file.
Visit https://ui.shadcn.com/docs/installation/laravel to learn how to set an import alias.
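For a JavaScript (.jsx) project, the alias goes in `jsconfig.json` rather than `tsconfig.json`. A minimal sketch — the `resources/js` path assumes Laravel's default layout, so adjust it to the actual project structure:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./resources/js/*"]
    }
  }
}
```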
### Affected component/components
npx shadcn@latest init
### How to reproduce
npx shadcn@latest init
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 10 x64
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,513,408,288 | puppeteer | [Feature]: Disable Network.enable | ### Feature description
If I understand correctly, Puppeteer always sends the `Network.enable` message to Chrome pages, and as such receives all HTTP request metadata (this is done roughly through `cdp/Page`, `cdp/FrameManager`, `cdp/NetworkManager`).
For a performance-sensitive test setup, it would be nice if we could disable the network manager by default. Is there a way to do this, or to adjust Puppeteer to do so? | feature,confirmed,P3 | low | Major |
2,513,428,225 | flutter | Performance issue with Noto Color Emoji on iOS | ### Steps to reproduce
1. Download the NotoColorEmoji file and add it to the assets.
Location: [Noto Color Emoji](https://fonts.google.com/noto/specimen/Noto+Color+Emoji)
Code: Add the following code to the pubspec.yaml file:
```
fonts:
- family: NotoColorEmoji
fonts:
- asset: asset/font/NotoColorEmoji-Regular.ttf
```
2. Create a view that includes emojis as shown in the code below.
When building a new screen, the rendering appears to be extremely slow *only on iOS*.
After popping the screen and pushing it again, the rendering speed improves.
This issue appears to be related to the following known issues; however, it seems to be a different bug. The suggested temporary solutions do not work in this case.
[Severe Performance issue rendering Emoji's on first run. #42586]
[iOS Crash when using COLR-v1+OT-SVG font and overlay #150765]
### Expected results
The screen that includes this font should build with reasonably good performance on both iOS and Android, even if the screen is slightly slower due to the font.
### Actual results
Screen transitions are extremely slow, and when scrolling through content with long text, there is noticeable lag.
Sometimes after repeating these actions, the app crashes unexpectedly.
All of these issues occur only on iOS devices.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: ElevatedButton(
onPressed: () {
Navigator.of(context).push(
MaterialPageRoute(
builder: (context) => const ParsedTextPage(),
),
);
},
child: const Text('Emoji Text'),
),
),
);
}
}
class ParsedTextPage extends StatelessWidget {
const ParsedTextPage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: Colors.grey,
appBar: AppBar(
title: const Text('Emoji Text'),
),
body: const Padding(
padding: EdgeInsets.symmetric(horizontal: 16.0),
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text(
'Hello, World! ๐๐๐๐ช๐ซ๐',
style: TextStyle(fontFamily: 'NotoColorEmoji', fontSize: 24.0),
),
SizedBox(height: 16.0),
Text(
'Hello, World! ๐จโ๐ฉโ๐งโ๐ง๐จโ๐จโ๐ฆ๐จโ๐จโ๐ง๐จโ๐จโ๐งโ๐ฆ',
style: TextStyle(fontFamily: 'NotoColorEmoji', fontSize: 24.0),
),
SizedBox(height: 16.0),
Text(
'Hello, World! ๐คก๐คฅ๐ค๐๐ฟ๐น',
style: TextStyle(fontFamily: 'NotoColorEmoji', fontSize: 24.0),
),
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/0ccd1aed-176f-4f0b-9fe5-747d9fe8aae9
</details>
### Logs
<details open><summary>Logs</summary>
There are no unusual logs.
```console
Performing hot restart...
Syncing files to device iPhone 15 Pro Max...
Restarted application in 296ms.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[โ] Flutter (Channel stable, 3.24.0, on macOS 14.6.1 23G93 darwin-arm64, locale ko-KR)
โข Flutter version 3.24.0 on channel stable at /opt/homebrew/Caskroom/flutter/3.24.0/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 80c2e84975 (6 weeks ago), 2024-07-30 23:06:49 +0700
โข Engine revision b8800d88be
โข Dart version 3.5.0
โข DevTools version 2.37.2
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
โข Android SDK at /Users/shlee/Library/Android/sdk
โข Platform android-35, build-tools 35.0.0
โข ANDROID_HOME = /Users/shlee/Library/Android/sdk
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2024.1)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[โ] VS Code (version 1.92.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.94.0
[โ] Connected device (6 available)
โข sdk gphone64 arm64 (mobile) โข emulator-5554 โข android-arm64 โข Android 15 (API 35) (emulator)
โข iPhone SH (mobile) โข 00008110-000171D63622801E โข ios โข iOS 17.6.1 21G93
โข iPhone 15 Pro Max (mobile) โข 9E81C60E-73EA-4630-B526-EBF96C90497F โข ios โข com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.6.1 23G93 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.6.1 23G93 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 128.0.6613.120
! Device R5CX41CBPNY is not authorized.
You might need to check your device for an authorization dialog.
! Error: Browsing on the local area network for Shinhye์ iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| platform-ios,engine,c: performance,a: typography,perf: memory,has reproducible steps,P3,team-ios,triaged-ios,found in release: 3.24,found in release: 3.25 | low | Critical |
2,513,437,144 | vscode | Editor - ghost text hover toolbar not vertically aligned | 
| bug,inline-completions | low | Minor |
2,513,460,235 | pytorch | DISABLED test_serialized_patterns_up_to_date (__main__.TestPatternMatcher) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialized_patterns_up_to_date&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29852570385).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 9 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_serialized_patterns_up_to_date`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_pattern_matcher.py", line 1172, in test_serialized_patterns_up_to_date
pattern = gen_pattern(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1553, in gen_pattern
search_gm = trace_fn(search_fn, flat_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1834, in fwd_only
gm = make_fx(fn, decompositions, tracing_mode="real")(*args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2110, in wrapped
return make_fx_tracer.trace(f, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2048, in trace
return self._trace_inner(f, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2034, in _trace_inner
t = dispatch_trace(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1127, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1182, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/fx_passes/fuse_attention.py", line 620, in wrapper
return partial_func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/fx_passes/fuse_attention.py", line 475, in _sfdp_pattern_18
key = key.permute([0, 2, 1, 3])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1230, in __torch_function__
return func(*args, **kwargs)
RuntimeError: Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0 'self'
To execute this test, run the following from the base repo dir:
python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_serialized_patterns_up_to_date
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,513,468,983 | terminal | Please update building documentation | We followed the building section of README.md and doc/building.md to build the terminal, and found several uncleared guidance.
Following the guidance, we expect:
1. git submodule command initialize 3rd party codebase
2. OpenConsole.sln restore successfully when launching the CascadiaPackage with the documented setup
While the actual behavior is:
1. .gitmodule present nowhere, thus the git command does nothing
2. Several NuGet "Unable to find" errors, such as
```
NU1101 Unable to find package Microsoft.AspNetCore.App.Ref. No packages exist with this id in source(s): C:\Program Files\dotnet\library-packs, TerminalDependencies TerminalStress G:\GithubRepos\terminal\src\tools\TerminalStress\TerminalStress.csproj 1
NU1101 Unable to find package Microsoft.NETCore.App.Ref. No packages exist with this id in source(s): C:\Program Files\dotnet\library-packs, TerminalDependencies TerminalStress G:\GithubRepos\terminal\src\tools\TerminalStress\TerminalStress.csproj 1
NU1101 Unable to find package Microsoft.WindowsDesktop.App.Ref. No packages exist with this id in source(s): C:\Program Files\dotnet\library-packs, TerminalDependencies TerminalStress G:\GithubRepos\terminal\src\tools\TerminalStress\TerminalStress.csproj 1
```
3. Several NuGet restore errors against pkgs.dev.azure.com/shine-oss, failing with 401 Unauthorized
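For reference, the missing submodule metadata is easy to confirm from a fresh checkout (a trivial check, included only to document what we observed):

```shell
# Report whether the checkout declares any git submodules at all.
if [ -f .gitmodules ]; then
  git submodule status
else
  echo "no .gitmodules present"
fi
```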
Please update the building documentation | Issue-Docs,Area-Build,Product-Meta | low | Critical |
2,513,505,385 | flutter | Setting state while DraggableScrollableSheet is over-scrolled with BouncingScrollPhysics collapses the sheet | ### Steps to reproduce
1. Copy code sample below into `main.dart` of a new Flutter project.
2. Run it on an Android emulator or iOS simulator.
3. Like in the video below, over-scroll the bottom sheet at the top, and then trigger `setState` for the child content of the sheet by swiping the `PageView`.
4. To reproduce the issue more easily, replace `PageViewSheet` with `TimerSheet` at line 40, and just over-scroll the sheet.
### Expected results
Bottom sheet and scrollable positions to be preserved.
### Actual results
Bottom sheet collapses to its minimum size (`minChildSize` attribute).
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const App());
class App extends StatelessWidget {
const App({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: TestPage(),
);
}
}
class TestPage extends StatelessWidget {
const TestPage({super.key});
@override
Widget build(BuildContext context) {
return Material(
child: Stack(
children: [
DraggableScrollableSheet(
initialChildSize: 0.5,
minChildSize: 0.25,
maxChildSize: 1,
snap: true,
snapSizes: const [0.25, 0.5, 1],
snapAnimationDuration: const Duration(milliseconds: 200),
builder: (context, scrollController) {
return SafeArea(
bottom: false,
child: ColoredBox(
color: Colors.green,
child: SingleChildScrollView(
controller: scrollController,
padding: const EdgeInsets.symmetric(vertical: 64),
physics: const BouncingScrollPhysics(),
child: const PageViewSheet(), // TimerSheet(),
),
),
);
},
),
],
),
);
}
}
class PageViewSheet extends StatefulWidget {
const PageViewSheet({super.key});
@override
State<PageViewSheet> createState() => _PageViewSheetState();
}
class _PageViewSheetState extends State<PageViewSheet> {
final _pageController = PageController(viewportFraction: 0.2);
var _pageIndex = 0;
@override
void dispose() {
_pageController.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Column(
children: [
Text('Current Page: $_pageIndex'),
const SizedBox(height: 32),
SizedBox(
height: 100,
child: PageView(
controller: _pageController,
onPageChanged: (value) {
setState(() => _pageIndex = value);
},
children: [
for (var i = 0; i < 10; i++)
Container(
color: i.isEven ? Colors.red : Colors.blue,
child: Center(
child: Text('Page $i'),
),
),
],
),
),
],
);
}
}
class TimerSheet extends StatefulWidget {
const TimerSheet({super.key});
@override
State<TimerSheet> createState() => _TimerSheetState();
}
class _TimerSheetState extends State<TimerSheet> {
var _seconds = 0;
@override
void initState() {
super.initState();
Future.doWhile(() async {
await Future.delayed(const Duration(seconds: 1));
if (!mounted) return false;
setState(() => _seconds++);
return true;
});
}
@override
Widget build(BuildContext context) {
return Column(
children: [
Text('Seconds: $_seconds'),
],
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[example 1.webm](https://github.com/user-attachments/assets/90d49eb3-9258-405b-bcb5-ef81cb7d2a1b)
[example 2.webm](https://github.com/user-attachments/assets/11be11ee-1c0d-4183-a837-eb46fc021094)
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.2, on macOS 14.6.1 23G93 darwin-arm64, locale en-TR)
โข Flutter version 3.24.2 on channel stable at /Users/rasitayaz/Library/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 4cf269e36d (6 days ago), 2024-09-03 14:30:00 -0700
โข Engine revision a6bd3f1de1
โข Dart version 3.5.2
โข DevTools version 2.37.2
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/rasitayaz/Library/Android/sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2022.3)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[โ] VS Code (version 1.92.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.96.0
[โ] Connected device (6 available)
โข sdk gphone64 arm64 (mobile) โข emulator-5554 โข android-arm64 โข Android 14 (API 34) (emulator)
โข Raลitโs iPhone (mobile) โข 00008120-001A35E63E83C01E โข ios โข iOS 17.6.1 21G93
โข iPhone 15 Pro (mobile) โข 1EB73D3B-9A8A-413E-9A46-AF09906D2A28 โข ios โข com.apple.CoreSimulator.SimRuntime.iOS-17-5
(simulator)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.6.1 23G93 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.6.1 23G93 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 118.0.5993.96
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| framework,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.25 | low | Major |
2,513,514,023 | PowerToys | Freezer icons on the Windows Desktop | ### Description of the new feature / enhancement
hi,
I work on a laptop, sometimes on the move with just the PC screen, sometimes with an additional monitor, and most often in the office with 3 other screens. I would like to be able to "freeze" on the Windows Desktop the icon shortcuts of my folders and documents in the place where I assign them, so that they do not change location all the time. The ideal would be to be able to freeze them as a whole: File A; folder B, folder C... Thank you in advance.
### Scenario when this would be used?
We waste a lot of time, when moving from one environment to another, finding our folders and icons again. The goal is to gain efficiency and productivity.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,513,526,171 | next.js | Nextjs includes all client components in the bundle when at least one client component is rendered from a server component (app-folder) | ### Link to the code that reproduces this issue
https://github.com/EvgeniyKorshun/nextjs-includes-client-code-for-all-pages
### To Reproduce
1. Start the app in prod mode (npm run build && npm run start);
2. Open http://localhost:3000;
3. Check the Source of the page.
### Current vs. Expected behavior
**Current behavior:**
Next.js includes ClientComponent2 in the bundle, even though ServerComponent2 is not used on the current page.
**Expected behavior:**
Only ClientComponent1 should be included in the bundle.
<img width="955" alt="image" src="https://github.com/user-attachments/assets/a92ca925-33a9-45f9-a133-6e9eb63d8baf">
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.13.1
npm: 10.5.2
Yarn: 1.22.18
pnpm: 9.5.0
Relevant Packages:
next: 15.0.0-canary.146 // Latest available version is detected (15.0.0-canary.146).
eslint-config-next: N/A
react: 19.0.0-rc-7771d3a7-20240827
react-dom: 19.0.0-rc-7771d3a7-20240827
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
SWC, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
We are currently using the latest stable version of Next.js, and it appears to include all client components in the bundle when at least one client component is rendered from a server component (the same issue is reproducible on the latest canary version).
Our project uses a JAMstack architecture with a single catch-all route [[...slug]] that handles dynamic page generation by pulling content from a CMS. There is a file that imports multiple components in the following format (these components are not marked with "use client", and we have tried various import strategies, including conditional imports, dynamic imports, and lazy loading, without any success):
```
const HeaderSection = dynamic(() => import('./components/HeaderSection'));
const TextSection = dynamic(() => import('./components/TextSection'));
const VideoSection = dynamic(() => import('./components/VideoSection'));
const TestSection = dynamic(() => import('./components/TestSection'));
const Components = {
HeaderSection,
TextSection,
VideoSection,
TestSection,
};
export const getComponent = (key) => Components[key]
```
Even when rendering a simple test page that only contains a basic block with no additional content, the entire bundle still includes all the use-client components from these sections. As a result, the page size grows excessively, often exceeding 1 MB. | bug,SWC,Webpack,linear: next | medium | Critical |
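For what it's worth, at runtime a thunk-based registry like the one above is genuinely lazy; the eager inclusion appears to come from the bundler statically following every `import()` specifier in the registry file. The runtime behavior can be illustrated with plain Node built-ins standing in for the section components (the `PathSection`/`UrlSection` names are made up for illustration):

```javascript
// Registry of lazy loaders: nothing is loaded until a key is requested.
// node:path and node:url stand in for the real section components.
const loaders = {
  PathSection: () => import('node:path'),
  UrlSection: () => import('node:url'),
};

async function getComponent(key) {
  const loader = loaders[key];
  if (!loader) throw new Error(`Unknown component: ${key}`);
  return loader();
}

getComponent('PathSection').then((mod) => {
  console.log(typeof mod.join); // "function"
});
```

At runtime only the requested module is loaded, yet a bundler that sees these `import()` calls still emits (and may reference) a chunk for each of them.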
2,513,530,255 | kubernetes | PVC goes into Lost state | ### What happened?
Deploying a host path PVC results in Lost state.
1. The user deploys the PersistentVolume below.
```
kubectl get pv host-only-pv -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"host-only-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"1Gi"},"hostPath":{"path":"/mnt/tmp-host","type":"DirectoryOrCreate"},"persistentVolumeReclaimPolicy":"Retain"}}
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2024-09-10T07:33:09Z"
finalizers:
- kubernetes.io/pv-protection
name: host-only-pv
resourceVersion: "353506815"
uid: 47ac2af2-ecd2-4ccd-8ee0-93b872980640
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: host-only-pvc
namespace: bkp-pvc
resourceVersion: "353506793"
uid: be600292-2d9e-46db-b0f3-df709741930b
hostPath:
path: /mnt/tmp-host
type: DirectoryOrCreate
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
status:
phase: Bound
```
2. The user creates a PersistentVolumeClaim referring to the PersistentVolume created in step 1, but this PVC immediately moves into the `Lost` status.
```
kubectl get pvc host-only-pvc -n bkp-pvc -oyaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"host-only-pvc","namespace":"bkp-pvc"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2024-09-10T07:33:09Z"
finalizers:
- kubernetes.io/pvc-protection
name: host-only-pvc
namespace: bkp-pvc
resourceVersion: "353508452"
uid: be600292-2d9e-46db-b0f3-df709741930b
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
volumeName: host-only-pv
status:
phase: Lost
```
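One thing that may be worth ruling out (this is an assumption on our side, not something confirmed from the logs): if the cluster has a default StorageClass, the DefaultStorageClass admission plugin injects it into PVCs that omit `storageClassName`, while the pre-created PV has no class at all; that mismatch can prevent a stable binding. A hypothetical variant that pins both sides to the empty class and pre-binds the claim via `volumeName` would look like:

```yaml
# Pin both PV and PVC to the "" storage class so a cluster default
# StorageClass cannot be injected into the PVC by admission.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-only-pv
spec:
  storageClassName: ""
  accessModes: [ReadWriteOnce]
  capacity:
    storage: 1Gi
  hostPath:
    path: /mnt/tmp-host
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-only-pvc
  namespace: bkp-pvc
spec:
  storageClassName: ""
  volumeName: host-only-pv
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```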
### What did you expect to happen?
PVC should remain in bound state.
### How can we reproduce it (as minimally and precisely as possible)?
-
### Anything else we need to know?
I have attached the kube-controller-manager log here; it contains a number of `Bound claim has lost its PersistentVolume. Data on the volume is lost!` error messages, which also occur for other volumes.
[kube-controller-managerlogs.txt](https://github.com/user-attachments/files/16944022/kube-controller-managerlogs.txt)
### Kubernetes version
<details>
```console
$ kubectl version
# 1.27.0
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ uname -a
# Linux ISVBK8CL3T06 4.18.0-553.5.1.el8_10.x86_64 #1 SMP Tue May 21 03:13:04 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
CRI-O
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details> | kind/bug,sig/storage,lifecycle/stale,triage/needs-information,needs-triage | low | Critical |
2,513,615,385 | flutter | Keyboard Resets on emoji input on Android OPPO Find X5 Pro | ### Steps to reproduce
1. Create a simple app with TextField
2. Run app on OPPO Find X5 Pro
3. When inputting an emoji, observe how the keyboard annoyingly resets from the emoji selection back to the alphabetical layout
4. The problem can be reproduced even with a stateful widget and a TextEditingController
### Expected results
The keyboard should not reset back to normal state when inputting an emoji
### Actual results
The keyboard resets on every emoji selection (even when deleting the emoji via the delete icon in the emoji selection tab).
Furthermore, the selection handle (the "blob") that appears when selecting text by long-pressing the text field, visible in the videos, is rendered under the keyboard.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: KeyboardTest(),
);
}
}
class KeyboardTest extends StatelessWidget {
const KeyboardTest({
super.key,
});
@override
Widget build(BuildContext context) {
return Scaffold(
body: SafeArea(
child: Column(mainAxisAlignment: MainAxisAlignment.center, children: [
Padding(
padding: EdgeInsets.symmetric(horizontal: 16),
child: TextField(),
),
]),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/fad3c593-b931-46d0-92d3-de0cb912451b
https://github.com/user-attachments/assets/a75dd685-295e-43db-8d34-fa6f4d952993
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>

```console
I/ImeTracker( 2568): com.example.keyboard_emoji_input:765765a4: onCancelled at PHASE_CLIENT_ANIMATION_CANCEL
I/ImeTracker( 2568): com.example.app:a396bcb5: onCancelled at PHASE_CLIENT_APPLY_ANIMATION
W/VRI[MainActivity]( 2568): handleResized abandoned!
I/ImeTracker( 2568): com.example.app:8e6a31a4: onRequestShow at ORIGIN_CLIENT_SHOW_SOFT_INPUT reason SHOW_SOFT_INPUT
```

</details>
### Flutter Doctor output
[โ] Flutter (Channel stable, 3.24.1, on macOS 14.5 23F79 darwin-arm64, locale en-RO)
โข Flutter version 3.24.1 on channel stable at /Users/mihai/Programming/sdks/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 5874a72aa4 (3 weeks ago), 2024-08-20 16:46:00 -0500
โข Engine revision c9b9d5780d
โข Dart version 3.5.1
โข DevTools version 2.37.2
[โ] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
โข Android SDK at /Users/mihai/Library/Android/sdk
โข Platform android-34, build-tools 33.0.0
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b802.4-9586694)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2022.2)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b802.4-9586694)
[โ] VS Code (version 1.92.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.96.0
[โ] Connected device (4 available)
โข iPhone (mobile) โข 00008110-001229321EC1801E โข ios โข iOS 17.5.1 21F90
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.5 23F79
darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.5 23F79
darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome
128.0.6613.120
[โ] Network resources
โข All expected network resources are available.
โข No issues found! | a: text input,e: device-specific,platform-android,P2,team-android,triaged-android | low | Major |
2,513,671,338 | tensorflow | check failed: !PyErr_Occurred() when constructing two uint64 tensors | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When running the following code, TensorFlow aborts the program directly with the error message: `./tensorflow/python/eager/pywrap_tensor_conversion.h:58] Check failed: !PyErr_Occurred()`
```
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
lower1 = -1
try:
lower1 = tf.constant(lower1, dtype='uint64')
except:
...
lower2 = -2
lower2 = tf.constant(lower2, dtype='uint64')
```
It seems the problem occurs when TensorFlow tries to construct **two uint64 tensors**. Although converting a negative int to an unsigned type is invalid, raising an exception would be more appropriate, since the program abort kills the process outright.
Indeed, constructing only one uint64 tensor properly raises an overflow exception.
This issue only occurs when repeatedly constructing two uint64 tensors.
Another odd detail is that **if I change the value of `lower2` to either `-1` or `-3` instead of `-2`**, the issue does not occur.
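As a caller-side workaround (a minimal sketch relying only on NumPy, which TensorFlow already depends on), the value can be range-checked before ever reaching `tf.constant`, so the process never hits the fatal path:

```python
import numpy as np

def fits_dtype(value: int, dtype) -> bool:
    """Return True if the Python int is representable in the given integer dtype."""
    info = np.iinfo(dtype)
    return info.min <= value <= info.max

# -1 and -2 are both out of range for uint64, so neither should be
# passed to tf.constant(..., dtype='uint64') in the first place.
print(fits_dtype(-1, np.uint64))         # False
print(fits_dtype(2**64 - 1, np.uint64))  # True
```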
### Standalone code to reproduce the issue
```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
lower1 = -1
try:
lower1 = tf.constant(lower1, dtype='uint64')
except:
...
lower2 = -2
lower2 = tf.constant(lower2, dtype='uint64')
```
### Relevant log output
```shell
F ./tensorflow/python/eager/pywrap_tensor_conversion.h:58] Check failed: !PyErr_Occurred()
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:eager,2.17 | medium | Critical |
2,513,719,704 | angular | Mocking Directive in standalone component test doesn't work | ### Which @angular/* package(s) are the source of the bug?
core (TestBed)
### Description
For a project I'm working on, we started to migrate to Standalone components, but we ran into issues on various parts with overriding various dependencies. We have mocked a lot of dependencies to not run their internals.
As an example, for one component that shows a button to copy text to clipboard, we use the CDK CopyToClipboard directive. However, we ran into problems overriding the directive in our test, which was still working fine in a non-standalone situation.
On the stackblitz I've created 2 variants of the test, as we originally used @ngneat/spectator for our tests, but even with testbed I wasn't able to make it work. In the test I try to see if the console logs from the mocked component are being done but for now it doesn't take.
The code as of now:
```
import { Directive, Input } from '@angular/core';
import { MatIconButton } from '@angular/material/button';
import { MatIcon } from '@angular/material/icon';
import { MockComponent } from 'ng-mocks';
import { CopyToClipboardComponent } from './copy-to-clipboard.component';
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { CdkCopyToClipboard } from '@angular/cdk/clipboard';
// simple mock directive to capture the input. We're not going to test the cdk logic of the copy
let clipboardResult = '';
@Directive({ selector: '[cdkCopyToClipboard]', standalone: true })
class MockCdkCopyToClipboard {
// text to copy to clipboard
constructor() {
console.log('directive mocked'); // Problem: I'm not seeing this in the console
}
@Input() set cdkCopyToClipboard(value: string) {
console.log('text copied', value); // Problem: I'm not seeing this in the console
clipboardResult = value;
}
}
describe('CopyToClipboardComponent - Testbed', () => {
let component: CopyToClipboardComponent;
let fixture: ComponentFixture<CopyToClipboardComponent>;
beforeEach(() => {
fixture = TestBed.configureTestingModule({
imports: [
MockComponent(MatIcon),
MockComponent(MatIconButton),
MockCdkCopyToClipboard,
],
})
/**
* suggested by https://stackoverflow.com/a/75243037/3222860
* but still doesn't show the console log from the mocked directive
*/
.overrideComponent(CopyToClipboardComponent, {
remove: { imports: [CdkCopyToClipboard] },
add: { imports: [MockCdkCopyToClipboard] },
})
.createComponent(CopyToClipboardComponent);
clipboardResult = '';
component = fixture.componentInstance;
fixture.detectChanges();
});
it('should create', () => {
expect(component).toBeTruthy();
});
it('should show the clipboard button when there is text to copy', () => {
component.textToCopy = 'test';
fixture.detectChanges();
expect(
fixture.nativeElement.querySelector('.clipboard-button')
).toBeTruthy();
expect(clipboardResult).toEqual('test');
});
// a few more tests
});
```
And the component we try to test is as follows:
```
import { CdkCopyToClipboard } from '@angular/cdk/clipboard';
import { Component, Input, OnChanges } from '@angular/core';
import { MatIconButton } from '@angular/material/button';
import { MatIcon } from '@angular/material/icon';
/**
* Show clipboard to copy text to clipboard.
* Example that I can't seem to fix when moving to standalone components.
*/
@Component({
selector: 'app-copy-to-clipboard',
template: `
@if (showButton) {
<button
class="clipboard-button mat-icon-button-small"
mat-icon-button
type="button"
[cdkCopyToClipboard]="textToCopy"
>
<mat-icon>content_copy</mat-icon>
</button>
}
`,
standalone: true,
imports: [
CdkCopyToClipboard, // I can't seem to override this dependency, its always using the real one
MatIcon,
MatIconButton,
],
})
export class CopyToClipboardComponent implements OnChanges {
@Input() textToCopy!: string;
showButton = false;
ngOnChanges(): void {
this.showButton = this.show();
console.log('showButton', this.showButton); // this is logging just fine
}
show() {
return (
!!this.textToCopy &&
this.textToCopy !== '-' &&
this.textToCopy?.toLowerCase() !== 'null'
);
}
}
```
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-um7c3g?file=src%2Fcomponents%2Fcopy-to-clipboard.component.testbed.spec.ts
And this is the directive I'm trying to mock:
https://github.com/angular/components/blob/main/src/cdk/clipboard/copy-to-clipboard.ts (which is also exported as a module but I don't think we use that right now with standalone. We did use it pre-migration)
### Please provide the exception or error you saw
```
It doesn't log anything from the mocked directive. I would expect it to log the items in the constructor and the input to indicate that it is being used by the tests
```
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular 18.2.3
Karma 6.4.4
Jasmine 5.2.0
@ngneat Spectator 19 (wrapper around testbed with more functionality and performance improvements)
```
### Anything else?
I found it difficult to get the proper documentation for overriding as the current documentation focuses on providers rather than mocked components and directives. I often see suggestions to just use ngmocks and while it is valid, a simple mocked component or directive still makes it clearer what I want to test.
We also tried to move to Jest a few times but that also didn't work entirely (which might be related to this issue, I don't know), but we stuck with Karma/Jasmine for now. There's about 1000 tests in our project of which 73 don't work because of similar reasons to this one. It prevents us from moving to standalone components, which seems to be a bigger thing now that it has become the default in angular 19 (alpha). | area: testing,cross-cutting: standalone | low | Critical |
2,513,740,991 | bitcoin | Increasing self-hosted runner raw performance | _disclaimer_: The following should **not** replace us investigating and fixing the root causes of timeouts and intermittent test runtime performance.
Now seems an opportune time to open a discussion on some investigation I have been doing into our self-hosted runners, as our CI has been struggling again recently.
I wanted to see what the cost/benefit implications of upgrading our self-hosted runners would look like. Hooking up a single [Hetzner AX52](https://www.hetzner.com/dedicated-rootserver/ax52/) (70 €/month) as a self-hosted runner saw each job run on average 3-5x faster (result image shown at end), which is not surprising in itself.
IIUC we currently have about 25 low-powered, shared-vCPU x86_64 runners (plus a sprinkling of ARM ones). If we had the appetite, and could find the funding, we might consider:
1. Upgrade the x86_64 runners to 12 dedicated-CPU servers. At 70€ each this would total 840€ per month, or 10080€ per year, vs current spend of ~3840€, so 2.5x more cost for 3-5x more speed. This feels like a decent return.
alternatively
2. Bump our current (shared vCPU) runners to the next "level" up. If e.g. [these](https://github.com/maflcko/bitcoin-core-qa-assets/wiki/Persistent-workers#set-up-servers) are the runners in use for us today, we could increment the CPX21s to CPX31s, and the CPX31s to CPX41s for a monthly cost of 505.6โฌ vs a current spend of 320โฌ. I did not test performance gains of this path.
We could also "just spend more", buying larger numbers of the same (low-powered) runners, but IMO this would not be as effective as reducing the runtime of CI jobs and eliminating CI run timeouts. Moving away from vCPUs feels like the correct choice, if we can, as it's possible that random contention on these contributes to "random" timeouts and failures.
Additional thoughts in no particular order:
- I have likely not considered all the benefits of having larger numbers of lower powered runners. Comments welcomed on this.
- These more powerful runners also (all) come with (some) more disk space, so we could potentially do things like configure total ccache size (across all jobs) to be something like 100's of GB, and try and maximize those cache hits!
- I am not sure what the developer tradeoff is for "CI startup time" (i.e. when a job is picked up by a runner) vs "CI runtime".
- Should we employ a more scientific approach, which could be to calculate total compute/€ and just get whatever wins out by that metric.
I'd be curious to hear thoughts on whether this is something worth us looking at any further, or if folks have investigated this before and what they found.
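On that last bullet, here's a quick back-of-the-envelope sketch for option 1 — the figures come from this issue, the per-job framing is mine, and it deliberately ignores the parallelism lost by going from ~25 runners to 12:

```python
# Compare per-job speed gained vs extra money spent for option 1.
monthly_current = 3840 / 12   # ~320 EUR/month, from the ~3840 EUR/year figure above
monthly_ax52 = 12 * 70        # 840 EUR/month for 12 dedicated AX52s
cost_multiplier = monthly_ax52 / monthly_current  # 2.625x more spend

for speedup in (3, 5):  # observed 3-5x per-job speedup range
    print(f"{speedup}x speedup -> {speedup / cost_multiplier:.2f}x speed per euro")
```

By this crude metric even the low end of the observed speedup comes out slightly ahead, but a proper comparison would also weigh queue times with fewer runners.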
#### AX52 test
1. A typical CI run using current runners (on a good day):

2. Jobs run using a single AX52 runner:

| Brainstorming | medium | Critical |
2,513,747,262 | TypeScript | Removing optional modifier in homomorphic mapped types does not work in generic contexts since 5.5.x | ### ๐ Search Terms
optional modifier, required fields, generic, NonNullable, strictNullChecks, homomorphic mapped types
### ๐ Version & Regression Information
- This changed between versions 5.4.5 and 5.5.2
- This changed in commit or PR e418f8d12c5f1b6c10fc3127764f34dad44d4586 (as reported by `every-ts`)
### โฏ Playground Link
https://www.typescriptlang.org/play/?ts=5.5.4#code/MYewdgzgLgBApgDwIYFsAOAbOMC8MA8AKvAlHGACYQzQBOAlmAOYB8AFAJQBcMSYAnrhYwA3gCgYMAPRSYAJTiYkwRkxgB3elAAWvATTRwVAM3rAaAVwBGAWij9DMEMZgADOqtcwUSANZxqHWx7Q2pQWlojWCQmJEYAOglpWRDsYjwPZgBuMSTUmABlEBQ4AHk0KHpwJAxcURgAbX9BRhhCAF0Afh4ARhgAXxzJPIdsAEEMDAUARwt6SIo6kUbmmFbm50LisoqqsBr2m26YPsHcyXyZuYWANRqLALqJqbhZ+bgKJrh+Teer94o7RySRkMAAwsUlIxqK0AKzxAAs8QQABoYFYLLBTGBsHD4vCEElQJBYAgeP9bvdHngLJQ4NiPucYJEoBZaGAYAgcoMgA
### ๐ป Code
```ts
const example = <T extends string>(): any => {
// Replacing with any specific sub-type of `string` makes the types correct again.
// type T = string;
type SomeOptional = { [key in T]?: 1 };
type AllRequired = { [key in keyof SomeOptional]-?: 1 };
type RequiredValues = AllRequired[keyof AllRequired];
// Complains in 5.4.x, but fine in 5.5.x
const x: RequiredValues = undefined
return x;
};
```
### ๐ Actual behavior
Using `-?` leaves the fields marked as optional, and taking the values of the resulting type gives a union with `undefined`.
### ๐ Expected behavior
Using `-?` makes all fields required, and taking the values of the resulting type gives a union of field types.
### Additional information about the issue
Requires a `--strictNullChecks` flag.
Replacing `keyof SomeOptional` with `NonNullable<keyof SomeOptional>` or `keyof SomeOptional as keyof SomeOptional` fixes the issue. It looks like this problem is limited to homomorphic mapped types.
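A minimal sketch of that workaround, reusing the names from the repro above (illustrative, not a fix for the underlying compiler bug):

```typescript
const fixedExample = <T extends string>(): number => {
  type SomeOptional = { [key in T]?: 1 };
  // NonNullable<...> breaks homomorphy, so `-?` strips optionality as expected.
  type AllRequired = { [key in NonNullable<keyof SomeOptional>]-?: 1 };
  type RequiredValues = AllRequired[keyof AllRequired]; // `1`, no `undefined`
  const x: RequiredValues = 1;
  return x;
};

console.log(fixedExample<"a">()); // 1
```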
Using `Required<{ [P in keyof SomeOptional]: 1; }>` in place of `{ [key in keyof SomeOptional]-?: 1 }` results in the same behaviour. | Bug,Fix Available | low | Minor |
2,513,761,695 | terminal | ColorTool.exe -c results in a misaligned chart | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.19045.4780
### Other Software
TCC v31.01.15 x64 command line
### Steps to reproduce
Run ```ColorTool.exe -c``` to display the color chart

### Expected Behavior
A properly-aligned table
### Actual Behavior
This [see image]:

It was possible to fix the alignment with ```sed```:

...but not in such a way that preserved the ANSI colors, which is the whole point of displaying this table.
I tried redirecting the output to a file so I could maybe fix it in postprocessing, but ```ColorTool.exe``` seems to detect if it is being redirected or piped, and the output has no color!! Perhaps ```ColorTool``` uses direct screen writes for changing colors, so they don't appear in a way that can be captured by redirect/pipe.

If this chart would align properly, I'd have never fallen into the ```sed``` {and ```perl```} holes in trying to fix this ๐ | Product-Colortool,Issue-Bug,Area-UserInterface | low | Minor |
2,513,769,879 | pytorch | Boolean indexing not working correctly for a numpy array indexed with a boolean pytorch tensor in the case of 2d singletons (shape of (1, 1)) | ### ๐ Describe the bug
I have a 2d numpy array, and I want to retrieve or replace values in the array according to booleans in a torch tensor (as obtained from a boolean expression like `x > 5`, for example).
For the case where I have a single element 2d numpy array (and therefore single element 2d boolean tensor) the boolean indexing does not work correctly:
```
import numpy as np
import torch
np_array = np.array([[1]])
t_index = torch.tensor([[False]])
print(np_array[t_index])
```
If working correctly this should print out an empty array like so: `array([])`. However, it seems to interpret the `False` as `True` and instead prints out `array([1])`. The same problem occurs when setting values based on a boolean tensor index:
```
np_array[t_index] = 0
print(np_array)
```
which should leave the array unchanged as `array([[1]])` but actually it does change the array and you instead get `array([[0]])`.
This problem does not occur if I swap the roles of numpy and torch and instead index a torch tensor with a numpy boolean array.
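For comparison, a pure-NumPy boolean mask of the same shape behaves as I'd expect the torch-tensor index to behave (shown as a sketch of the expected semantics, not a workaround for mixed indexing):

```python
import numpy as np

np_array = np.array([[1]])
mask = np.array([[False]])

print(np_array[mask])        # [] - an all-False mask selects nothing
print(np_array[mask].shape)  # (0,)

np_array[mask] = 0
print(np_array)              # [[1]] - unchanged
```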
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Oct 26 2023, 18:07:37) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-1360P
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
BogoMIPS: 5222.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip gfni vaes vpclmulqdq rdpid fsrm md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @mruberry @rgommers @albanD | triaged,module: numpy,module: python frontend | low | Critical |
2,513,776,580 | PowerToys | workspaces show screen number on screen image at top of editor | ### Microsoft PowerToys version
0.84.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
create a workspace with 3 screens. I have a laptop connected to two additional monitors via a docking station. I also use Duet and have a monitor on my iPad, but I have not really tested/used that with workspaces.
place several apps on the monitors and save the workspace.
launch the editor, you can see the screens at the top but there is no indication as to what screen number they are.

Additionally, it would be more consistent if the screen numbers were the same as what is shown in System/Display (not a requirement, just icing on the cake).

### โ๏ธ Expected Behavior
to see screen numbers under or above the images at the top of the editor
### โ Actual Behavior
no indication as to which image is which screen.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Workspaces | low | Minor |
2,513,831,146 | PowerToys | I hope you can add the auto-hide desktop icon function. | ### Description of the new feature / enhancement

### Scenario when this would be used?
It's all about improving the aesthetic customization of the system desktop
### Supporting information
 | Needs-Triage | low | Minor |
2,513,836,880 | tauri | [feat] window.alert api should work within iframes in mac | ### Describe the problem
`window.alert`, `prompt` and `confirm` APIs wont work in iframe in macos. It works in webkit in linux though.
### Describe the solution you'd like
The APIs should work on all platforms as expected.
This was fixed on Windows, and works as expected on Linux.
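For reference, overriding this on macOS would go through the `WKUIDelegate` callbacks; a minimal hedged sketch for the alert case (the class and the print statement are illustrative, not Tauri/wry code):

```swift
import WebKit

class AlertHandlingUIDelegate: NSObject, WKUIDelegate {
    // Called for window.alert(), including calls from iframes (non-main frames).
    func webView(_ webView: WKWebView,
                 runJavaScriptAlertPanelWithMessage message: String,
                 initiatedByFrame frame: WKFrameInfo,
                 completionHandler: @escaping () -> Void) {
        print("alert from \(frame.isMainFrame ? "main frame" : "iframe"): \(message)")
        // Show a native NSAlert here, then unblock the page:
        completionHandler()
    }
}
```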
### Alternatives considered
_No response_
### Additional context
More details can be found here:
https://discord.com/channels/616186924390023171/1258736272814374964
from @FabianLars
> Hmm, it's probably just macos being stricter again as always. There are 3 obj-c callbacks (for alert [here](https://developer.apple.com/documentation/webkit/wkuidelegate/1537406-webview)) we should be able to use to overwrite the behavior / expose a callback to the devs.
> With the current semi feature freeze due to RC i doubt we can get it in for 2.0.0 but should be possible soon after.
| type: feature request,platform: macOS | low | Minor |
2,513,846,725 | PowerToys | PowerToys Workspaces: do not resize window more than it allows, if the window has size limits (KeePass 2) | ### Microsoft PowerToys version
0.84.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
Prerequisites:
- Have [KeePass 2](https://keepass.info/download.html) installed;
- Create/open and save password database with password protection;
- Create a PowerToys Workspace with KeePass window captured in it, save workspace;
- Close KeePass from system tray.
Reproduction steps:
1. Run Workspace with KeePass window included.
### โ๏ธ Expected Behavior
1. PowerToys Workspaces should not resize a window if the window does not allow it;
2. Such a size-restricted window should be placed at the center of the area where the app's window was supposed to go.
<details><summary>Details</summary>
<p>

</p>
</details>
### โ Actual Behavior
1. Firstly, KeePass spawns password protection dialog box instead of the main window:
<details><summary>Details</summary>
<p>

</p>
</details>
2. See the window is resized despite the fact this dialog box does not allow user to do so;
3. After entering password, KeePass window appears normally.
### Other Software
- Windows 11 23H2;
- Three Full HD 27" monitors;
- KeePass app at the center one.
<details><summary>Details</summary>
<p>

</p>
</details> | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,513,871,982 | pytorch | No error raised when trying in-place operation on tensor of shape[0] > 1 (single memory location) | ### ๐ Describe the bug
The following code raises a RuntimeError, as expected (`RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.`)
```python
import torch
a = torch.randn((1, 2))
a -= a[:, :1]
```
However, if `a` has shape (2, 2), the error does not appear. This causes an issue, as the result is NOT what you would expect, because the in-place operation reads from the same memory location it writes to. This is shown by the following example:
```python
import torch
a = torch.randn((2, 2))
b = torch.clone(a)
a -= a[:, :1]
b = b - b[:, :1]
torch.allclose(a, b) # False
```
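For contrast, NumPy defines ufuncs with overlapping input and output as if the inputs were copied first (since NumPy 1.13), so the analogous in-place update does match the out-of-place result — shown here purely for comparison:

```python
import numpy as np

a = np.arange(4.0).reshape(2, 2)
b = a.copy()

a -= a[:, :1]   # NumPy buffers the overlapping input internally
b = b - b[:, :1]

print(np.allclose(a, b))  # True
```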
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-1011-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 7
BogoMIPS: 5000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.0
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.19.0
[pip3] torchviz==0.0.2
[pip3] triton==3.0.0
[conda] numpy 1.23.1 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet | module: cpu,module: error checking,triaged,module: advanced indexing,module: partial aliasing,module: edge cases | low | Critical |
2,513,898,773 | PowerToys | Issue with FancyZones screen assignment | ### Microsoft PowerToys version
0.84.0
### Installation method
GitHub, PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
I work from home part of my weeks, as well as in the office for the rest of the week. I have 2 different monitor setups in both locations. In the office I have 3 identical monitors (that I'm sure to plug into the same display outs from my computer every time), while at home I have a single super ultrawide.
I mention this because the issue that I'm having relates to the behavior that I see when switching back to my work setup after being at home. The layouts that I have assigned to a per-monitor basis always seem to either unassign themselves or assign the correct layouts to the incorrect screens.
I have other issues with my monitors (unrelated to PowerToys) that come up every time I come back to my work setup. Because of that I wouldn't think much of this issue here as well if it weren't for the fact that this used to work flawlessly before the update prior to this most current one.
Sorry I didn't add any screenshots, but I've already corrected the issue by reassigning the layouts, and anyone looking into this wouldn't know that the layouts were assigned to the wrong screens anyway.
### โ๏ธ Expected Behavior
I was expecting the layouts to assign themselves back to the monitors in which they were assigned before my move to and from home.
### โ Actual Behavior
The correct layouts either get applied to the incorrect monitors, or some random layout is assigned. Typically it's the former behavior.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,513,982,924 | ui | [bug]: inferred type of 'MenubarMenu' cannot be named without a reference | ### Describe the bug
`next build` gives the following error:
```
Checking validity of types .../src/components/ui/menubar.tsx:13:7
Type error: The inferred type of 'MenubarMenu' cannot be named without a reference to '.pnpm/@radix-ui+react-context@1.1.0_react@19.0.0-rc-a03254bc-20240905_types-react@19.0.0-rc.0/node_modules/@radix-ui/react-context'. This is likely not portable. A type annotation is necessary.
11 | import { cn } from "@/lib/utils"
12 |
> 13 | const MenubarMenu = MenubarPrimitive.Menu
| ^
14 |
15 | const MenubarGroup = MenubarPrimitive.Group
16 |
```
### Affected component/components
MenuBar
### How to reproduce
1. execute 'next build'
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Checking validity of types .../src/components/ui/menubar.tsx:13:7
Type error: The inferred type of 'MenubarMenu' cannot be named without a reference to '.pnpm/@radix-ui+react-context@1.1.0_react@19.0.0-rc-a03254bc-20240905_types-react@19.0.0-rc.0/node_modules/@radix-ui/react-context'. This is likely not portable. A type annotation is necessary.
11 | import { cn } from "@/lib/utils"
12 |
> 13 | const MenubarMenu = MenubarPrimitive.Menu
| ^
14 |
15 | const MenubarGroup = MenubarPrimitive.Group
16 |
```
### System Info
```bash
Node.js v22.8.0
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Sat Jul 13 00:54:59 PDT 2024; root:xnu-11215.0.165.0.4~50/RELEASE_ARM64_T6031
Available memory (MB): 131072
Available CPU cores: 16
Binaries:
Node: 22.8.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.9.0
Relevant Packages:
next: 15.0.0-canary.146 // Latest available version is detected (15.0.0-canary.146).
eslint-config-next: N/A
react: 19.0.0-rc-a03254bc-20240905
react-dom: 19.0.0-rc-a03254bc-20240905
typescript: 5.5.4
Next.js Config:
output: N/A
```
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,514,011,053 | vscode | Notebook Diff Editor layout issues when cells have a horizontal scrollbar | When viewing diff of a notebook that contains cells with a horizontal scrollbar, the height of the editor calculated is incorrect.
We do not take the height of the scrollbar into account.
As a result the layout shifts causing poor ux
Q. Can we determine whether a scrollbar will be displayed
Q. Can we reserve a height for the editor even if scrollbar isn't displayed (resulting in the same height regardless of whether its displayed or not) | bug,notebook-diff | low | Minor |
2,514,018,738 | pytorch | Sympy bottleneck in eval_relation/assumptions getit | ### ๐ Describe the bug

Steps to reproduce:
1. Check out PyTorch and build at branch 'sympy-bottleneck-repro'
2. Check out torchrec at a reasonably recent version (I use caf0441a17d00eb326b15e1de48a72909e933b24)
3. Check out FBGEMM at https://github.com/pytorch/FBGEMM/pull/2967 or recent version (since my fix PR was landed). Build it in the directory `fbgemm_gpu`.
To repro the graph above, run:
```
time TORCH_COMPILE_CPROFILE=1 python torchrec/distributed/tests/test_pt2_multiprocess.py --num-features 300
```
You might also consider running without TORCH_COMPILE_CPROFILE=1 if you think cprofile is distorting the cost of lots of small Python calls.
### Versions
main
cc @chauhang @penguinwu @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka | triaged,oncall: pt2,module: dynamic shapes,module: startup-tracing-compile | low | Critical |
2,514,023,615 | godot | Controller name returned on macOS does not match other platforms | ### Tested versions
- Reproducible in 4.3. Introduced with commit [07313a08f41146e30005acfa784bdf005d23750b](https://github.com/godotengine/godot/commit/07313a08f41146e30005acfa784bdf005d23750b)
### System information
Godot v4.3.stable - macOS 14.6.1 - Vulkan (Forward+) - integrated Apple M1 Max - Apple M1 Max (10 Threads)
### Issue description
With the switch to using GCController, the controller name returned no longer matches other platforms. For instance, "PS5 Controller" becomes "DualSense Wireless Controller". This causes a problem when using this value in GDScript to determine controller icon sets across platforms: checking for an additional value on Mac doesn't seem to be a robust solution.
Unfortunately, there doesn't seem to be a way to associate an IOHIDDevice with GCController, so a robust code solution doesn't appear to be possible. SDL works around this by doing string compares and translating the GCController vendorId to the original ids (e.g. 0x0ce6 (PS5 controller), 0x054c (Sony)). What I wonder is - should the same thing be done in Godot, or should the differences be handled in the game code?
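If this ends up handled in game code, a minimal sketch could be an alias table over `Input.get_joy_name()` — the entries below are examples I'd expect to need, not a complete mapping:

```gdscript
# Normalize macOS GCController names to the names seen on other platforms.
# Entries are illustrative; a real table would cover more controllers.
const MAC_NAME_ALIASES := {
    "DualSense Wireless Controller": "PS5 Controller",
}

func normalized_joy_name(device: int) -> String:
    var joy_name := Input.get_joy_name(device)
    return MAC_NAME_ALIASES.get(joy_name, joy_name)
```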
_Some notes_:
This is an entry point to seeing how SDL handles this:
https://github.com/libsdl-org/SDL/blob/6e885d96193a4b0096fe7fed6d4e6c3e5f247283/src/joystick/apple/SDL_mfijoystick.m#L387
Here's a stack overflow detailing how you could tell if a IOHIDDevice would be handled by GCController, but I was unable to get the method mentioned to get a valid vendor or product id. I suspect the underlying API has changed, and this is not part of the exposed interface anyway - so even it worked, I suspect it might be risky from an app store review perspective.
https://stackoverflow.com/questions/33509296/supporting-both-gccontroller-and-iohiddeviceref
### Steps to reproduce
Use `var controller_name := Input.get_joy_name(device)`, and compare the results from other platforms with that on macOS.
### Minimal reproduction project (MRP)
It's really just the returned value from `Input.get_joy_name(device)`. | platform:macos,needs testing,topic:input | low | Major |
2,514,025,965 | terminal | Restoring exited panes should not restart them | # Description of the new feature/enhancement
* Have session persistence enabled
* Have a pane that has exited but wasn't closed, because the termination behavior is "never", etc.
* Restart Windows Terminal
* Suddenly all those tabs are running again
We should avoid restarting such tabs.
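For example, a persisted pane entry could carry an exited flag — the `hasExited` field name and the surrounding shape below are hypothetical, not the actual state.json schema:

```json
{
  "tabLayout": [
    {
      "action": "restoreFromCommandline",
      "commandline": "pwsh.exe",
      "hasExited": true
    }
  ]
}
```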
# Proposed technical implementation details (optional)
The state.json should store whether a pane has exited. If it has, the pane should not be restarted automatically. | Area-TerminalControl,Product-Terminal,Issue-Task | low | Minor
2,514,210,961 | ui | [feat]: Docs on Custom themes / components for CLI | ### Feature description
As mentioned in https://x.com/shadcn/status/1831771732690215413, you can have your own registry to distribute custom themes or components.
I have looked at the code and can reverse engineer how we might be able to do this ourselves, but it's not clear exactly what setup is required. I see that it's all JSON files in the registry which turn into code, but some things like https://ui.shadcn.com/r/styles/default/toast.json have ```import type {\n  ToastActionElement,\n  ToastProps,\n} from \"@/registry/default/ui/toast\"``` where that `registry/default` path seems to be substituted when the component is added.
Additionally, it's not clear what the side effects are of each ItemType (https://github.com/shadcn-ui/ui/blob/078dfe66072c4ca780bbc99d4ad4b13b1f44fe7e/apps/www/registry/schema.ts#L16-L26).
It would be nice to have a more curated set of documentation which explains how folks can use this to write their own registries and use the shadcn CLI to distribute their own code. Simple examples like https://raw.githubusercontent.com/mindtown-ai/dynamic-prompt/main/schema.json make sense, but not when intra-registry components start to come into play.
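For illustration, a registry item for a custom component currently looks roughly like this — the field names are inferred from the public registry JSON output, so treat them as assumptions rather than a documented schema:

```json
{
  "name": "fancy-button",
  "type": "components:ui",
  "dependencies": ["@radix-ui/react-slot"],
  "registryDependencies": ["button"],
  "files": [
    {
      "name": "fancy-button.tsx",
      "content": "// component source inlined as a string"
    }
  ]
}
```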
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,514,215,867 | pytorch | inductor guards on 32 bit indexing even when generated code would be the same | I'm trying out some example compiled snippets to see what "32-bit indexing guards in the backward" will look like. One thing I noticed is that in this simple example:
```
import torch
class MyFunc(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
return x
@staticmethod
def backward(ctx, grad_x):
return grad_x + grad_x
def f(x):
return MyFunc.apply(x)
f_compiled = torch.compile(f, dynamic=True)
x_ref = torch.randn(4, 5, device='cuda', dtype=torch.float16, requires_grad=True)
x_test = x_ref.clone().detach().requires_grad_()
out_ref = f(x_ref)
out_ref.sum().backward()
out_test = f(x_test)
out_test.sum().backward()
x_ref2 = torch.randn(47000, 47001, device='cuda', dtype=torch.float16, requires_grad=True)
x_test2 = x_ref2.clone().detach().requires_grad_()
out_ref2 = f(x_ref2)
out_ref2.sum().backward()
out_test2 = f_compiled(x_test2)
out_test2.sum().backward()
```
we generate 32 bit indexing guards [here](https://github.com/pytorch/pytorch/blob/c35b9535319b9ba0b0a0c759759bfce2ff387e01/torch/_inductor/codegen/simd.py#L1200) during inductor compilation. But when I re-run with inputs that invalidate the guards and check the generated inductor code, it looks identical.
Is there a reason we unconditionally generate these guards, and would it be possible to figure out how to only generate them when they would cause inductor to actually generate different code?
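For reference, it is the second input's element count that crosses the 32-bit indexing threshold; a quick arithmetic check (the limit below is the signed 32-bit maximum that indexing guards compare against):

```python
INT32_MAX = 2**31 - 1  # signed 32-bit limit relevant to indexing guards

small_numel = 4 * 5          # first input: 20 elements
large_numel = 47000 * 47001  # second input: ~2.2e9 elements

print(small_numel <= INT32_MAX)  # True
print(large_numel <= INT32_MAX)  # False: this run invalidates the 32-bit guard
```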
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Minor |
2,514,236,197 | kubernetes | feature: support the (X/Y) display mode for the printcolumn field in CR resource | ### What would you like to be added?
While implementing a CR controller with kubebuilder, I wanted a way to display columns using a fraction format (i.e. X/Y).
```go
// +genclient
...
//+kubebuilder:printcolumn:name="Ready",type="string",JSONPath="{.status.readyReplicas}/{.spec.replicas}",description="Ratio of ready replicas to desired replicas."
//+kubebuilder:printcolumn:name="UP-TO-DATE",type="string",JSONPath=".status.updatedReplicas",description="Number of groups that have been updated (ready or not)."
//+kubebuilder:printcolumn:name="Age",JSONPath=".metadata.creationTimestamp",type=date,description="Age is the time LeaderWorkerSet was created."
// LeaderWorkerSet is the Schema for the leaderworkersets API
type LeaderWorkerSet struct {
...
}
```
`//+kubebuilder:printcolumn:name="Ready",type="string",JSONPath="{.status.readyReplicas}/{.spec.replicas}",description="Ratio of ready replicas to desired replicas."`
Using the fraction-style marker above, I found that an error is reported. The error is as follows:
```bash
root@VM-0-8-ubuntu:/home/ubuntu/lws# make uninstall
make: go: Permission denied
test -s /home/ubuntu/lws/bin/controller-gen && /home/ubuntu/lws/bin/controller-gen --version | grep -q v0.16.2 || \
GOBIN=/home/ubuntu/lws/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.2
/home/ubuntu/lws/bin/controller-gen \
rbac:roleName=manager-role output:rbac:artifacts:config=config/rbac \
crd:generateEmbeddedObjectMeta=true output:crd:artifacts:config=config/crd/bases \
webhook output:webhook:artifacts:config=config/webhook \
paths="./..."
/home/ubuntu/lws/bin/kustomize build config/crd | kubectl delete --ignore-not-found=false -f -
customresourcedefinition.apiextensions.k8s.io "leaderworkersets.leaderworkerset.x-k8s.io" deleted
root@VM-0-8-ubuntu:/home/ubuntu/lws# make install
make: go: Permission denied
test -s /home/ubuntu/lws/bin/controller-gen && /home/ubuntu/lws/bin/controller-gen --version | grep -q v0.16.2 || \
GOBIN=/home/ubuntu/lws/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.2
/home/ubuntu/lws/bin/controller-gen \
rbac:roleName=manager-role output:rbac:artifacts:config=config/rbac \
crd:generateEmbeddedObjectMeta=true output:crd:artifacts:config=config/crd/bases \
webhook output:webhook:artifacts:config=config/webhook \
paths="./..."
/home/ubuntu/lws/bin/kustomize build config/crd | kubectl create -f -
The CustomResourceDefinition "leaderworkersets.leaderworkerset.x-k8s.io" is invalid: spec.additionalPrinterColumns[0].JSONPath: Invalid value: "{.status.readyReplicas}/{.spec.replicas}": must be a simple json path starting with .
make: *** [Makefile:187: install] Error 1
```
When I asked for help from the kubebuilder project, they suggested submitting an issue to controller-tools: https://github.com/kubernetes-sigs/kubebuilder/discussions/4134#discussioncomment-10575562
When I submitted an issue to controller-tools, they said this was not functionality that project could provide: https://github.com/kubernetes-sigs/controller-tools/issues/1051#issuecomment-2337525823
I don't understand how the printed columns are generated, so I am submitting this issue here for discussion (if this is not functionality that kubernetes/kubernetes can provide, I will close this issue).
I don't know if my usage is wrong or if JSONPath itself only provides simple field parsing.
In addition, is there a way we could support a MultiJSONPath-like approach, allowing multiple JSONPaths to be combined? This would make the display more flexible, rather than being limited to a single JSONPath only.
My expected output is `3/3`, like this:
```bash
root@VM-0-8-ubuntu:/home/ubuntu/lws/config/samples# kubectl get lws
NAME READY UP-TO-DATE AGE
leaderworkerset-sample 3/3 3 6s
```
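For now, the expected `READY` value can only be produced client-side; a minimal sketch (assuming the status/spec field names from the marker above):

```python
def ready_column(obj):
    """Build the 'X/Y' READY string from a CR object's spec/status."""
    ready = obj.get("status", {}).get("readyReplicas", 0)
    desired = obj.get("spec", {}).get("replicas", 0)
    return f"{ready}/{desired}"

lws = {"spec": {"replicas": 3}, "status": {"readyReplicas": 3}}
print(ready_column(lws))  # 3/3
```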
### Why is this needed?
- Make the display of CR resource objects in kubectl commands more flexible | sig/api-machinery,kind/feature,sig/cli,triage/accepted | low | Critical |
2,514,243,229 | flutter | [google_maps_flutter] `setAndGetScrollPosition` failing on iOS | I'm seeing persistent (though seemingly slightly flaky in how they manifest) failures in `setAndGetScrollPosition` in `webview_flutter` and `webview_flutter_wkwebview` in post-submit, closing the tree. The failures show up in [this commit](https://github.com/flutter/packages/pull/7599), but since presubmit passed while postsubmit failed four times in a row, it seems like maybe it's OOB rather than from the roll? Or it's flake and presubmit was just lucky.
I'm going to disable these tests for now to re-open the tree. /cc @bparrishMines | team,platform-ios,p: maps,package,team-ecosystem,P2,c: disabled test,c: flake,triaged-ecosystem | low | Critical |
2,514,252,941 | next.js | `<details>` element behaves incorrectly in Next.js 14/15 | ### Link to the code that reproduces this issue
https://github.com/mikedidomizio/details-element-in-Next-14
### To Reproduce
First, sorry that the GitHub link is for Next 14; the version doesn't matter here.
The `<details>` HTML element is an accordion style HTML tag that can show and hide information.
The `onToggle` event is expected to fire automatically on render if `open` is set to `true`.
[Documentation](https://developer.mozilla.org/en-US/docs/Web/API/HTMLDetailsElement/toggle_event)
> In the example above the event listener will be called once without any user interaction because the open attribute is set.
Below are CodeSandbox examples, the way to see if it automatically fire is to open the developer tools console and see that a console.log is either done or not.
In [HTML](https://codesandbox.io/p/sandbox/details-element-in-html-54nckm?file=%2Findex.html) it works that way โ
In [React 19](https://codesandbox.io/p/devbox/details-element-in-react-19-wzd6ln) it works that way โ
In [React 18](https://codesandbox.io/p/sandbox/details-element-in-react-18-jkx5pf?file=%2Fsrc%2FApp.tsx%3A6%2C16) it works that way โ
In [Next.js 15](https://codesandbox.io/p/devbox/details-element-in-next-15-react-19-cyshxf?file=%2Fapp%2Fpage.tsx%3A8%2C16) it doesn't seem to work that way ๐ค
In [Next.js 14](https://codesandbox.io/p/devbox/details-element-in-next-14-h9pqcc?file=%2Fapp%2Fpage.tsx%3A13%2C1) it doesn't seem to work that way ๐ค
So HTML/React will automatically trigger the `onToggle` event on render if `open` is set to `true`, but not Next.js.
Is this a bug or am I doing something wrong with the Next example?
(Originally posted on the [Discord](https://discord.com/channels/752553802359505017/1281336661887680653/1281336661887680653)/[Forum](https://nextjs-forum.com/post/1281336661887680653))
https://github.com/user-attachments/assets/f70b5f76-e31a-4072-8de8-35a6ace844f6
### Current vs. Expected behavior
Current behaviour:
- Unless my examples are incorrect, Next.js seems to behave differently from the others.
Expected behaviour:
- Consistent with the others: the `onToggle` event fires immediately on mount if `open` is `true`. I'm a bit conflicted; it should probably behave the same even though I don't like the idea of it being triggered automatically.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 18.18.2
npm: 9.8.1
Yarn: 1.22.19
pnpm: 9.1.1
Relevant Packages:
next: 14.2.5 // There is a newer version (14.2.8) available, upgrade recommended!
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.3
Next.js Config:
output: export
โ There is a newer version (14.2.8) available, upgrade recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
I also tested a production build with static export; same thing, no auto-fire. | bug | low | Critical |
2,514,278,947 | TypeScript | function members of primitives/builtins are not read-only | ### ๐ Search Terms
builtin number string member readonly
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about "Common Bugs That Aren't Bugs"
### โฏ Playground Link
https://www.typescriptlang.org/play/?ts=5.5.4&ssl=6&ssc=1&pln=1&pc=1#code/DYUwLgBAHhC8ECYDcAoKA6AbgQ2AVxAHkAzOCACgQEotcCTUVRIBPMgIgAsRhgB7dqhboAxp2wAnAIKR4wsZJmogA
### ๐ป Code
```ts
let x = 2;
x.valueOf = (2).valueOf;
let y = "hello";
y.charAt = y.charAt;
```
### ๐ Actual behavior
Code compiles just fine, even with `--strict` enabled.
Executing JavaScript fails:
```
can't assign to property "valueOf" on 2: not an object
```
### ๐ Expected behavior
Should be rejected by tsc as read-only assignment.
### Additional information about the issue
Seems similar in nature to https://github.com/microsoft/TypeScript/issues/49113
Notably, assigning to `array.length` is allowed (because it's legal in JS) but assigning to `string.length` is not (because it is illegal in JS). Since assigning to builtin functions is also illegal, this should be flagged by tsc. | Suggestion,Awaiting More Feedback | low | Critical |
2,514,286,542 | neovim | Windows: log reports os_set_cloexec errors | ### Problem
Error message in `stdpath('state')/log`:
```text
ERR 2024-09-06T14:02:44.657 nvim.4691.0 os_set_cloexec:493: Failed to get flags on descriptor 3: Bad file descriptor
ERR 2024-09-08T17:28:53.165 nvim.7615.0 os_set_cloexec:493: Failed to get flags on descriptor 3: Bad file descriptor
ERR 2024-09-08T17:34:26.677 nvim.7727.0 os_set_cloexec:493: Failed to get flags on descriptor 3: Bad file descriptor
ERR 2024-09-08T20:50:49.236 nvim.15229.0 os_set_cloexec:493: Failed to get flags on descriptor 3: Bad file descriptor
ERR 2024-09-08T21:30:31.562 nvim.15321.0 os_set_cloexec:493: Failed to get flags on descriptor 3: Bad file descriptor
ERR 2024-09-09T00:42:07.227 nvim.15536.0 os_set_cloexec:493: Failed to get flags on descriptor 3: Bad file descriptor
```
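For context, `os_set_cloexec` follows the usual fetch-then-set descriptor-flag pattern; a Python sketch of the same calls (not Neovim code) shows how `EBADF` ("Bad file descriptor") arises when the descriptor is already closed:

```python
import errno
import fcntl
import os

def set_cloexec(fd):
    """Same pattern as os_set_cloexec: read FD flags, then OR in FD_CLOEXEC."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)  # the call that fails with EBADF
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

r, w = os.pipe()
set_cloexec(r)            # fine on a live descriptor
os.close(r)
os.close(w)
try:
    set_cloexec(r)        # descriptor is now closed
except OSError as e:
    print(e.errno == errno.EBADF)  # True
```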
### Steps to reproduce
not applicable
### Expected behavior
does not report error
### Neovim version (nvim -v)
0.10.1
### Vim (not Nvim) behaves the same?
not applicable
### Operating system/version
WSL2 Ubuntu 22.04
### Terminal name/version
Windows Terminal 1.22.2362.0
### $TERM environment variable
xterm-256color
### Installation
appimage | bug,platform:windows,complexity:low,system | low | Critical |
2,514,297,554 | vscode | Support for Inline Diff view in addition to Side by Side | This feature is requrest in [jupyterlab,](https://github.com/jupyterlab/jupyterlab-git/issues/1076) and I find it useful in jupyter for vscode as well, so I paste it here.
It would be great to be able to switch to an inline diff view in addition to the default side by side diff view.
Here is an example of this feature implemented in VS Code. Thanks!

| feature-request,notebook-diff | low | Minor |
2,514,313,839 | TypeScript | TypeScript 5.7 Iteration Plan | This document outlines our focused tasks for TypeScript 5.7. It minimally indicates intent to investigate tasks or contribute to an implementation. Nothing is set in stone, but we will strive to complete these tasks in a reasonable timeframe.
Date | Event
---------------|-------------------------
2024-09-09 | TypeScript 5.6 Release
2024-09-27 | Create 5.7 Beta (5.7.0) Build for Testing
2024-10-01 | **TypeScript 5.7 Beta Release**
2024-11-08 | Create 5.7 RC (5.7.1) Build for Testing
2024-11-12 | **TypeScript 5.7 RC Release**
2024-11-18 | Create 5.7 Final (5.7.2) Build for Testing
2024-11-21 | **TypeScript 5.7 Final Release** ๐
# Compiler and Language
* [Control Flow Analysis for Lambdas Passed to `immediate` Parameters](https://github.com/microsoft/TypeScript/pull/58729)
* [Enforce Readonly Checks on Object Members](https://github.com/microsoft/TypeScript/pull/59326)
* [Checks for Never-Initialized Variables](https://github.com/microsoft/TypeScript/pull/55887)
* [Disallow Parameter Property References from Class Fields](https://github.com/microsoft/TypeScript/pull/59623)
* [Investigate Relating Values to Conditional Return Types](https://github.com/microsoft/TypeScript/issues/33912)
* [Investigate Relative Import Extension Rewrites](https://github.com/microsoft/TypeScript/pull/59767)
* [Investigate `/** @typeArguments/specialize */`](https://github.com/microsoft/TypeScript/pull/59666)
* [Investigate Support for Sourcemap v4](https://github.com/microsoft/TypeScript/issues/46695)
* [`lib.d.ts` Updates](https://github.com/microsoft/TypeScript/issues/59704)
# Editor and Language Service
* [Investigate Expandable Quick Info/Hover Verbosity](https://github.com/microsoft/TypeScript/issues/59029)
* [Consult Root Files Before Opening `composite` Projects](https://github.com/microsoft/TypeScript/pull/59688)
* [Ancestor Configuration File Searching](https://github.com/microsoft/TypeScript/issues/56959)
* [Completions for `package.json` Subpath `imports`](https://github.com/microsoft/TypeScript/pull/57718)
* [Improved Rename for Shorthand Properties/Destructuring](https://github.com/microsoft/TypeScript/issues/58447)
* [Support "Prepare Paste Edits" Command](https://github.com/microsoft/TypeScript/issues/59881)
* [Ship Import-on-Paste in Stable VS Code](https://github.com/microsoft/vscode/issues/30066)
* [Improved Detection of `node:` for Auto-Import Paths](https://github.com/microsoft/TypeScript/pull/59702)
* [Investigate File Drop Support in Editor](https://github.com/microsoft/TypeScript/issues/50170)
* [Investigate Improved Move to File Naming](https://github.com/microsoft/TypeScript/issues/46514)
* [Investigate LSP Support](https://github.com/microsoft/TypeScript/issues/39459)
# Performance
* [Path Mapping Optimizations](https://github.com/microsoft/TypeScript/pull/59048)
* [Investigate Enabling V8 Compile Caching in Node.js](https://github.com/microsoft/TypeScript/pull/59720)
* [Investigate and Experiment with Full Monomorphization](https://github.com/microsoft/TypeScript/pull/58928)
# Website and Docs
* [Simplify and Refactor Website for Faster Builds](https://github.com/microsoft/TypeScript-Website/issues/2730)
* Handbook Review
* Experiment with Example-Driven Learning Paths
# Infrastructure
* [Experiment with Reduced Repros on Weekly New Error Runs](https://github.com/microsoft/typescript-error-deltas/issues/164)
* [Consider `--isolatedDeclarations`](https://github.com/microsoft/TypeScript/pull/59635)
* [Use `ProjectService` when Running TS ESLint](https://github.com/microsoft/TypeScript/pull/59645)
| Planning | high | Critical |
2,514,317,514 | ui | [bug]: Mixed indentations in the same file | ### Describe the bug
When you follow the instructions and install the library, the `tailwind.config.{js,ts}` file you get at the end has both space and tab indentation in the same file, even on the same line, which causes linters/checkers to go crazy.
<img width="493" alt="image" src="https://github.com/user-attachments/assets/0389b1a2-f7fd-4a5a-82f1-cefdf8d3bdcf">
### Affected component/components
tailwind.config.{js,ts}
### How to reproduce
```bash
$ npx shadcn@latest init
npm warn exec The following package was not found and will be installed: shadcn@2.0.5
โ The path /Users/gokaygurcan/projects/test is does not contain a package.json file. Would you like to start a new Next.js project? โฆ yes
โ What is your project named? โฆ my-app
- Creating a new Next.js project. This may take a few minutes.
โ Creating a new Next.js project.
โ Which style would you like to use? โบ Default
โ Which color would you like to use as the base color? โบ Neutral
โ Would you like to use CSS variables for theming? โฆ no / yes
- Writing components.json.
โ Writing components.json.
- Checking registry.
โ Checking registry.
- Updating tailwind.config.ts
โ Updating tailwind.config.ts
- Updating app/globals.css
โ Updating app/globals.css
- Installing dependencies.
โ Installing dependencies.
- Updating files.
โ Created 1 file:
- lib/utils.ts
Success! Project initialization completed.
You may now add components.
```
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/3r49z4?file=%2Fmy-app%2Ftailwind.config.ts
### Logs
```bash
N/A
```
### System Info
```bash
OS: macOS 14.6.1 23G93 arm64
Shell: zsh 5.9
Terminal: Apple_Terminal
NPM: v10.8.2
Node: v20.17.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,514,322,908 | langchain | Error when extracting images with PyMuPDFLoader and PyPDFLoader | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
1. Use the following code to load a PDF with image extraction enabled with PyMuPDFLoader:
```python
########################################
# PyMuPDFLoader
########################################
from langchain_community.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader("google-2024-environmental-report.pdf", extract_images=True)
pages = loader.load()
for page in pages:
print(page.page_content)
```
2. Download the PDF located at: [Google 2024 Environmental Report](https://www.gstatic.com/gumdrop/sustainability/google-2024-environmental-report.pdf).
3. Additionally, I also tried using PyPDFLoader with the same PDF, and I encountered the same issue.
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/mnt/c/Users/BennisonJ/Yavar/projects/zypher-2.0/backend/apps/rag/main.py", line 811, in store_doc
data = loader.load()
^^^^^^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/langchain_community/document_loaders/pdf.py", line 387, in load
return list(self._lazy_load(**kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/langchain_community/document_loaders/pdf.py", line 384, in _lazy_load
yield from parser.lazy_parse(blob)
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/pdf.py", line 244, in lazy_parse
yield from [
^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/pdf.py", line 247, in <listcomp>
+ self._extract_images_from_page(doc, page),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/pdf.py", line 283, in _extract_images_from_page
return extract_from_images_with_rapidocr(imgs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/pdf.py", line 74, in extract_from_images_with_rapidocr
result, _ = ocr(img)
^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/rapidocr_onnxruntime/rapid_ocr_api.py", line 80, in __call__
dt_boxes, det_elapse = self.text_detector(img)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/rapidocr_onnxruntime/ch_ppocr_v3_det/text_detect.py", line 66, in __call__
data = transform(data, self.preprocess_op)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/rapidocr_onnxruntime/ch_ppocr_v3_det/utils.py", line 220, in transform
data = op(data)
^^^^^^^^
File "/home/bennison/miniconda3/envs/open-webui/lib/python3.11/site-packages/rapidocr_onnxruntime/ch_ppocr_v3_det/utils.py", line 75, in __call__
data['image'] = (img * self.scale - self.mean) / self.std
~~~~~~~~~~~~~~~~~^~~~~~~~~~~
ValueError: operands could not be broadcast together with shapes (896,800) (1,1,3)
```
### Description
I am encountering a `ValueError` when using both `PyMuPDFLoader` and `PyPDFLoader` to extract images from certain PDFs. The error message indicates that operands could not be broadcast together with shapes `(896,800)` and `(1,1,3)`. This occurs specifically when the `extract_images` parameter is set to `True`.
**Expected Behavior**
The code should successfully extract text and images from the PDF without errors.
**Additional Information**
This issue seems to occur with specific PDFs that may have unique formatting or image properties. I would appreciate any guidance on how to resolve this issue or if there are any workarounds available.
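The shapes in the traceback suggest the failing page images are single-channel grayscale (`(H, W)`), while RapidOCR's normalization constants are shaped for 3-channel RGB (`(1, 1, 3)`). A small pure-Python sketch of NumPy's broadcasting rule shows why those shapes clash, and why stacking the grayscale image into three channels would make them compatible:

```python
def broadcastable(shape_a, shape_b):
    """NumPy-style broadcast check: trailing dims must match or be 1."""
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True

print(broadcastable((896, 800), (1, 1, 3)))     # False: 800 vs 3 clash
print(broadcastable((896, 800, 3), (1, 1, 3)))  # True: grayscale stacked to RGB
```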
### System Info
```
langchain==0.1.16
langchain-chroma==0.1.0
langchain-community==0.0.34
langchain-core==0.1.52
langchain-text-splitters==0.0.2
PyMuPDF Version: 1.24.10
PyPDF Version: 4.2.0
Operating System: Ubuntu 22 LTS
``` | ๐ค:bug | low | Critical |
2,514,336,019 | tauri | [bug] Mouse Back/Forward buttons dont work on macOS | ### Describe the bug
This is related to https://github.com/tauri-apps/tauri/issues/4019
Special mouse buttons like back/forward are not working on macOS. They work fine on Windows and Linux
### Reproduction
Start a new app `npm create tauri-app@latest -- --rc` and add this:
```ts
window.addEventListener("mousedown", console.log)
```
### Expected behavior
It should log something on macOS, like it does on Windows and Linux, but it doesn't
### Full `tauri info` output
```text
[โ] Environment
- OS: Mac OS 14.6.1 arm64 (X64)
โ Xcode Command Line Tools: installed
โ rustc: 1.78.0 (9b00956e5 2024-04-29)
โ cargo: 1.78.0 (54d8815d0 2024-03-26)
โ rustup: 1.27.0 (bbb9276d2 2024-03-08)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.4.0
- pnpm: 8.6.12
- yarn: 1.22.19
- npm: 10.8.1
- bun: 1.1.6
[-] Packages
- tauri ๐ฆ: 2.0.0-rc.10
- tauri-build ๐ฆ: 2.0.0-rc.9
- wry ๐ฆ: 0.43.1
- tao ๐ฆ: 0.30.0
- @tauri-apps/api ๎: 2.0.0-rc.4
- @tauri-apps/cli ๎: 2.0.0-rc.12
[-] Plugins
- tauri-plugin-dialog ๐ฆ: 2.0.0-rc.5
- @tauri-apps/plugin-dialog ๎: 2.0.0-rc.1
- tauri-plugin-log ๐ฆ: 2.0.0-rc.2
- @tauri-apps/plugin-log ๎: 2.0.0-rc.1
- tauri-plugin-os ๐ฆ: 2.0.0-rc.1
- @tauri-apps/plugin-os ๎: 2.0.0-rc.1
- tauri-plugin-fs ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-fs ๎: 2.0.0-rc.2
- tauri-plugin-shell ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-shell ๎: 2.0.0-rc.1
- tauri-plugin-updater ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-updater ๎: 2.0.0-rc.2
- tauri-plugin-window-state ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-window-state ๎: not installed!
[-] App
- build-type: bundle
- CSP: default-src 'self' data:; img-src 'self' data: https:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-eval';; connect-src ipc: http://ipc.localhost
- frontendDist: ../dist
- devUrl: http://localhost:5173/
- framework: SolidJS
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,514,377,793 | ui | [bug]: Getting no import alias found for Adonis Inertia Project(mistakenly recognize as Remix project) | ### Describe the bug
I'm encountering an error with the import alias configuration in my `tsconfig.json` file in an https://github.com/adonisjs/inertia-starter-kit project. The message shown is:
`No import alias found in your tsconfig.json file. Visit https://ui.shadcn.com/docs/installation/remix to learn how to set an import alias.`
Here is my current `tsconfig.json` file:
```json
{
"extends": "@adonisjs/tsconfig/tsconfig.app.json",
"compilerOptions": {
"rootDir": "./",
"outDir": "./build",
"baseUrl": "./",
"paths": {
"@/*": [ "./src/*" ] // Should always point to src
// "@/*": [ "./inertia/*" ] --> This doesn't work
}
},
"exclude": [
"./inertia/**/*",
"node_modules",
"build"
]
}
```
It seems like the issue could be related to these [lines](https://github.com/shadcn-ui/ui/blob/f4ca57a79cf2d56f9c55021242a55cf0e1018b72/packages/shadcn/src/utils/get-project-info.ts#L31) of code in `get-project-info.ts`.
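For illustration, a simplified sketch of tsconfig alias detection (hypothetical logic, not the actual shadcn source); note that a real `tsconfig.json` may contain comments, which strict JSON parsing rejects:

```python
import json

def find_alias_prefix(tsconfig_text):
    """Return the first alias prefix from compilerOptions.paths, e.g. '@' for '@/*'."""
    config = json.loads(tsconfig_text)
    paths = config.get("compilerOptions", {}).get("paths", {})
    for key in paths:
        if key.endswith("/*"):
            return key[:-2]
    return None

tsconfig = '{"compilerOptions": {"paths": {"@/*": ["./src/*"]}}}'
print(find_alias_prefix(tsconfig))  # @
```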
### Affected component/components
project initialization
### How to reproduce
1. npm init adonisjs@latest
2. setup tailwind as per docs
3. npx shadcn@latest init
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Ubuntu 22.04 node 22
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,514,389,212 | godot | Godot crash when trying to return project window (Linux, Nvidia driver 560 regression) | ### Tested versions
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
### System information
Godot v4.3.stable - Kubuntu 24.04.1 LTS 24.04 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 - Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz (8 Threads)
### Issue description
After opening any project, I couldn't return to the project list.
### Steps to reproduce
1. Open godot.
2. Open any project.
3. Select Project -> Exit to projects list.
Get error in console:
```
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /lib/x86_64-linux-gnu/libc.so.6(+0x45320) [0x7e4ba2645320] (??:0)
-- END OF BACKTRACE --
================================================================
```
### Minimal reproduction project (MRP)
[bugproject.zip](https://github.com/user-attachments/files/16933423/bugproject.zip)
| bug,platform:linuxbsd,topic:rendering,needs testing,crash | low | Critical |
2,514,410,921 | tensorflow | Wheels have different metadata on different platforms | Hi! Some resolvers in Python, such as poetry and uv, try to create lockfiles from the user's requirements that work on any platform. For example, you could create a universal lockfile on Linux and use it to install the project on Windows.
For this, both poetry and uv read the `METADATA` file of a single wheel on the index (in this case, PyPI) and assume its metadata applies to all other platforms, too. For tensorflow, there is currently different metadata for Windows and for Linux/Mac. For Windows, the `Requires-Dist` entries, excluding the CUDA packages, are:
```
Requires-Dist: tensorflow-macos ==2.15.1 ; platform_system == "Darwin" and platform_machine == "arm64"
Requires-Dist: tensorflow-cpu-aws ==2.15.1 ; platform_system == "Linux" and (platform_machine == "arm64" or platform_machine == "aarch64")
Requires-Dist: tensorflow-intel ==2.15.1 ; platform_system == "Windows"
```
While for linux and mac it is:
```
Requires-Dist: absl-py (>=1.0.0)
Requires-Dist: astunparse (>=1.6.0)
Requires-Dist: flatbuffers (>=23.5.26)
Requires-Dist: gast (!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1)
Requires-Dist: google-pasta (>=0.1.1)
Requires-Dist: h5py (>=2.9.0)
Requires-Dist: libclang (>=13.0.0)
Requires-Dist: ml-dtypes (~=0.3.1)
Requires-Dist: numpy (<2.0.0,>=1.23.5)
Requires-Dist: opt-einsum (>=2.3.2)
Requires-Dist: packaging
Requires-Dist: protobuf (!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3)
Requires-Dist: setuptools
Requires-Dist: six (>=1.12.0)
Requires-Dist: termcolor (>=1.1.0)
Requires-Dist: typing-extensions (>=3.6.6)
Requires-Dist: wrapt (<1.15,>=1.11.0)
Requires-Dist: tensorflow-io-gcs-filesystem (>=0.23.1)
Requires-Dist: grpcio (<2.0,>=1.24.3)
Requires-Dist: tensorboard (<2.16,>=2.15)
Requires-Dist: tensorflow-estimator (<2.16,>=2.15.0)
Requires-Dist: keras (<2.16,>=2.15.0)
```
That means depending on whether we read a Windows wheel or a Unix wheel, we get a different lockfile.
Would it be possible for tensorflow to write the same METADATA for all platforms and gate the platform specific entries with `platform_system` markers?
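To illustrate, a simplified (hypothetical) sketch of what marker-gated, platform-independent metadata enables: one requirement list in every wheel, filtered per platform at install time (real markers would also check `platform_machine`):

```python
# Hypothetical unified Requires-Dist entries, gated by platform_system markers
REQUIRES_DIST = [
    ("tensorflow-intel ==2.15.1", "Windows"),
    ("tensorflow-cpu-aws ==2.15.1", "Linux"),
    ("tensorflow-macos ==2.15.1", "Darwin"),
    ("numpy <2.0.0,>=1.23.5", None),  # unconditional
]

def resolve(platform_system):
    """Select the requirements that apply on one platform from a single METADATA."""
    return [req for req, system in REQUIRES_DIST
            if system is None or system == platform_system]

print(resolve("Windows"))
# ['tensorflow-intel ==2.15.1', 'numpy <2.0.0,>=1.23.5']
```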
For uv, we've considered reading the METADATA files for all wheels, but this has major drawbacks: we have to make 17 network requests to PyPI instead of 1 for each version we try, slowing resolution down. There is also no perfect mapping between environment markers (which usually tell us which dependencies to install on which platform) and wheel tags, so when METADATA can differ between wheels we'd also have to capture this in lockfiles.
I hope I explained well enough why identical METADATA files across all wheels of a version are important for us; I can add more details about how the resolver works if you have more questions.
This is similar to the problem discussed at https://github.com/tensorflow/tensorflow/issues/62346#issuecomment-1798633528. | stat:awaiting tensorflower,type:build/install,subtype:cpu-intel,TF 2.15 | low | Major |
2,514,410,921 | vscode | Expose Walkthrough Media in Accessible view. | > @meganrogge for walkthroughs steps which include markdown, should we consider exposing the markdown content in the accessible view also?
_Originally posted by @bhavyaus in [#226642](https://github.com/microsoft/vscode/issues/226642#issuecomment-2332487868)_
For image and SVG media, expose the alt text in the Accessible View.
For Markdown, expose the Markdown contents in the Accessible View. | feature-request,getting-started | low | Minor |
2,514,425,778 | kubernetes | Standardize a label to exclude Pods from Node drain | ### What would you like to be added?
I would like to have a standardized annotation to exclude Pods from Node drains.
This annotation should be implemented by `kubectl drain`, but other tools like e.g. Cluster API would also be able to implement it.
### Context
Today there is no standard way to exclude Pods from `kubectl drain`.
There is a set of Pods that are already excluded today:
* Pods belonging to an existing DaemonSet (orphaned DaemonSet Pods are evicted as well)
* Mirror Pods, i.e. Pods with the `kubernetes.io/config.mirror` annotation (usually static Pods managed by kubelet, like kube-apiserver)
So basically the best available option to exclude a Pod from drain would be adding the `kubernetes.io/config.mirror` annotation. But this is pretty bad (and also has unknown side effects).
Would it be possible to support an annotation like e.g. `node.kubernetes.io/exclude-from-drain`?
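A sketch of how a drain implementation could honor such an annotation, alongside the existing mirror-pod exclusion (the annotation name below is the proposal, not an existing standard):

```python
PROPOSED = "node.kubernetes.io/exclude-from-drain"  # proposed name, not standardized

def should_evict(pod):
    """Return False for Pods a drain should skip."""
    annotations = pod.get("metadata", {}).get("annotations", {})
    if "kubernetes.io/config.mirror" in annotations:
        return False  # mirror Pods are already excluded today
    if annotations.get(PROPOSED) == "true":
        return False  # proposed opt-out
    return True

print(should_evict({"metadata": {"annotations": {PROPOSED: "true"}}}))  # False
print(should_evict({"metadata": {}}))                                   # True
```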
| sig/node,kind/feature,sig/cli,sig/architecture,needs-triage | low | Major |
2,514,455,588 | vscode | Editor goes berserk and start scrolling on mouse movements | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.93.0
- OS Version: Manjaro Linux
Steps to Reproduce:
1. Use VSCode for many many consecutive hours (I have no idea what triggers the issue)
Sometimes, randomly, the IDE seems to go completely berserk and starts "following" the mouse cursor with scrolling.
Meaning that when I move the mouse cursor up, the editor scrolls up; when I move it down, the editor scrolls down. A dotted, non-blinking caret is visible at all times at the end of the line at the mouse cursor's height; this is in addition to the normal blinking text cursor.
With this, the IDE becomes completely unusable.
Restarting stops this madness.
| bug,editor-scrollbar | low | Critical |
2,514,462,101 | ui | [bug]: new CLI default tailwind config | ### Describe the bug
Here, the `center` value has to be the boolean `true`, but after initialization with the new CLI it is added in string format, as shown below.

### Affected component/components
tailwind.config.ts
### How to reproduce
pnpm dlx shadcn@latest init
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Arc browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,514,468,330 | vscode | Collapsible functions in merge editor views | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I think that it'd be helpful to be able to collapse functions in the merge editor views

In the same way that you can collapse functions in the regular editor window
<img width="586" alt="image" src="https://github.com/user-attachments/assets/2d193993-f9d5-43a5-b83a-264aad040dd6">
| feature-request,merge-editor | low | Minor |
2,514,473,376 | rust | Improve impl Trait 2024 capture rules compiler output | ### Code
```Rust
struct Foo;
impl Foo {
fn get(&self) -> impl Sized {
()
}
fn mutate(&mut self) {}
}
fn bar(_: impl Sized) {}
fn test() {
let mut foo = Foo;
let x = foo.get();
foo.mutate();
bar(x);
}
```
### Current output
```Shell
Compiling example v0.1.0 (/tmp/example)
error[E0502]: cannot borrow `foo` as mutable because it is also borrowed as immutable
--> src/main.rs:19:5
|
18 | let x = foo.get();
| --- immutable borrow occurs here
19 | foo.mutate();
| ^^^^^^^^^^^^ mutable borrow occurs here
20 | bar(x);
| - immutable borrow later used here
For more information about this error, try `rustc --explain E0502`.
error: could not compile `example` (bin "example") due to 1 previous error
```
### Desired output
I can't produce a nice-looking example, but to put it simply, I would like a `note` section that points to the definition of `Foo::get` and states that `impl Sized` implicitly captures the lifetime of `&self`. Another note could point out that, if this is unwanted, a `+ use<>` bound could be added to disable capturing of this lifetime.
### Rationale and extra context
Firstly, if this is already tracked then I'm sorry for posting a duplicate; I don't know how to grep for this kind of issue.
I was playing with the new impl Trait capture rules and wrote an invalid example that fails to compile. In the function `Foo::get`, the returned opaque type `impl Sized` captures the lifetime of `&self`. However, the compiler error is not as helpful as it could be: it points to the error at the call site, but does not show additional context for the declaration site.
I have no idea how easy or hard this reasoning would be to implement, but this would surely be helpful information, especially since this code compiles in Edition 2021.
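For context, here is what the suggested fix would look like applied to the example above — a sketch using the `+ use<>` precise-capturing syntax (stable since Rust 1.82), not actual compiler output:

```rust
struct Foo;

impl Foo {
    // `+ use<>` opts out of capturing the `&self` lifetime, so the
    // returned opaque type no longer borrows the receiver.
    fn get(&self) -> impl Sized + use<> {
        ()
    }
    fn mutate(&mut self) {}
}

fn bar(_: impl Sized) {}

fn main() {
    let mut foo = Foo;
    let x = foo.get();
    foo.mutate(); // now fine: `x` holds no borrow of `foo`
    bar(x);
}
```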
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.83.0-nightly (9c01301c5 2024-09-05)
binary: rustc
commit-hash: 9c01301c52df5d2d7b6fe337707a74e011d68d6f
commit-date: 2024-09-05
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,514,492,810 | pytorch | BUG: `torch.cuda.is_available()` returns `False` in certain torch, CUDA and driver version | ### ๐ Describe the bug
Hi, I'm trying to create a Docker container with the following (**minimal reproducible**) CUDA `12.4.1` Dockerfile (host info: Driver Version: `550.107.02` CUDA Version: `12.4`):
```
FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04
ARG DEBIAN_FRONTEND=noninteractive
# Install common tool & conda
RUN apt-get update && apt-get install -y \
software-properties-common \
&& add-apt-repository ppa:deadsnakes/ppa \
&& apt install -y python3.10 \
&& rm -rf /var/lib/apt/lists/*
RUN apt update && \
apt install wget -y && \
apt install git -y && \
apt install curl -y && \
apt install vim -y && \
apt install bc && \
apt-get install net-tools -y && \
apt install ssh -y && \
wget --quiet https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh -O ~/anaconda.sh && \
/bin/bash ~/anaconda.sh -b -p /opt/conda && \
rm ~/anaconda.sh && \
mkdir -p /opt/conda/envs/finetune && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
# Workspace
WORKDIR /app
# Install conda finetune env
# COPY requirements.txt requirements.txt
RUN . /opt/conda/etc/profile.d/conda.sh && \
conda create --name finetune python=3.10 -y && \
conda activate finetune && \
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
# Cuda path
ENV CUDA_PATH=/usr/local/cuda
ENV LD_LIBRARY_PATH=$CUDA_PATH/lib64:$CUDA_PATH/compat:/usr/lib/x86_64-linux-gnu:$CUDA_PATH/targets/x86_64-linux/lib/stubs/:$LD_LIBRARY_PATH
ENV CUDNN_PATH=/usr/include
# Transformer engine path
ENV NVTE_FRAMEWORK=pytorch
# Copy workspace
COPY . .
# Entrypoint for bash shell
ENTRYPOINT ["/bin/bash"]
```
This just create a basic `nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04` image and install `conda` and `pip`. Then, I run the container with the following command:
```
docker run --runtime=nvidia -it --rm --gpus all --shm-size 64g --network=host --privileged --volume [USER_PATH]/.cache:/root/.cache --env NVIDIA_DISABLE_REQUIRE=1 username/imagename:tag
```
Then, inside the container, I install the latest stable torch (`2.4.1`) by:
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
```
After that, I run the simplest torch cuda test by:
```
python -c "import torch; print(torch.cuda.is_available())"
```
What I got is:
```
/opt/conda/envs/finetune/lib/python3.10/site-packages/torch/cuda/__init__.py:128: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
False
```
This is quite strange: if I simply switch to the base image `nvidia/cuda:12.5.1-cudnn-devel-ubuntu22.04`, I get the correct result and `torch.cuda.is_available()` returns `True`.
Any advice will be sincerely appreciated, thx!
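One thing that may be worth ruling out (an assumption on my part, not a confirmed diagnosis): the Dockerfile above puts the CUDA stubs directory on `LD_LIBRARY_PATH`, and the stub `libcuda.so` there is link-time only — if it shadows the real driver library at runtime, CUDA initialization can fail. A tiny stdlib helper to spot such entries:

```python
import os

def suspicious_entries(ld_library_path: str) -> list[str]:
    """Return LD_LIBRARY_PATH entries that contain a CUDA 'stubs' dir."""
    entries = [p for p in ld_library_path.split(":") if p]
    return [p for p in entries if "stubs" in p]

# Run inside the container to see whether the stubs dir is on the path.
print(suspicious_entries(os.environ.get("LD_LIBRARY_PATH", "")))
```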
### Versions
```
Collecting environment information...
/opt/conda/envs/finetune/lib/python3.10/site-packages/torch/cuda/__init__.py:128: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7302 16-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 5988.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.4.1+cu124
[pip3] torchaudio==2.4.1+cu124
[pip3] torchvision==0.19.1+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.4.1+cu124 pypi_0 pypi
[conda] torchaudio 2.4.1+cu124 pypi_0 pypi
[conda] torchvision 0.19.1+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim | module: binaries,module: cuda,triaged,module: docker | low | Critical |
2,514,503,301 | neovim | [remote ui] termguicolors not checked on first remote TUI attach | ### Problem
Starting a headless nvim instance, then attaching to it with `--remote-ui --server` bypasses the `termguicolors` check in `_defaults.lua`, causing the TUI to display with reduced colors unless `termguicolors` is manually set.
### Steps to reproduce
```
nvim --headless --listen /tmp/nvim.sock -u NORC
```
(switch to another terminal)
```
nvim --remote-ui --server /tmp/nvim.sock
```
### Expected behavior
Ideally, `termguicolors` would be handled per-TUI client, but this seems like a herculean effort for not much payoff.
However, I do think the *first* TUI to attach should maybe try to determine the initial value of `termguicolors`. Though that may run counter to the not that long ago decision to move the `termguicolors` check from Neovim core code to `_defaults.lua`.
This is very much a nitpick, so feel free to close with wontfix, though it might be good to track a list of remote-ui 'caveats' somewhere.
### Neovim version (nvim -v)
NVIM v0.11.0-dev-738+g2aa64df0d
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Arch Linux
### Terminal name/version
wezterm
### $TERM environment variable
xterm-256color
### Installation
Build from repo | bug,ui,channels-rpc,ui-extensibility,remote | low | Minor |
2,514,560,993 | next.js | Relay compiler configuration "exclude" option is not respected | ### Link to the code that reproduces this issue
https://github.com/mjfaga/nextjs-relay-swc-excludes
### To Reproduce
Install dependencies and build:
```shell
yarn install
yarn build
```
### Current vs. Expected behavior
# Current Behavior
When adding a library (in the MVP, `@stigg/react-sdk`) that uses GraphQL under the hood, the relay compiler breaks consumption of that library and the app no longer properly builds.
```
ยฑ yarn build
yarn run v1.22.21
$ relay-compiler --validate && next build
[INFO] Querying files to compile...
[INFO] [default] compiling...
[INFO] [default] compiled documents: 0 reader, 0 normalization, 0 operation text
[INFO] Compilation completed.
[INFO] Done.
โฒ Next.js 14.2.8
Creating an optimized production build ...
Failed to compile.
./node_modules/@stigg/js-client-sdk/dist/index.js
Module not found: Can't resolve '/Users/markfaga/projects/nextjs-relay-swc-excludes/./__generated__/SlimSubscriptionFragmentV2.graphql.ts'
https://nextjs.org/docs/messages/module-not-found
Import trace for requested module:
./node_modules/@stigg/react-sdk/dist/react-sdk.esm.js
./src/app/page.tsx
```
# Expected Behavior
No errors because node_modules is excluded from being targeted during relay GraphQL compilation (see below for additional context).
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:55:06 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6020
Available memory (MB): 98304
Available CPU cores: 12
Binaries:
Node: 22.7.0
npm: 10.8.2
Yarn: 1.22.21
pnpm: N/A
Relevant Packages:
next: 14.2.8 // Latest available version is detected (14.2.8).
eslint-config-next: 14.2.8
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
The current [SWC relay compiler implementation](https://github.com/vercel/next.js/blob/canary/turbopack/crates/turbopack-ecmascript-plugins/src/transform/relay.rs#L20-L24) only supports 3 key options:
* `src`
* `language`
* `artifact_directory`
* `exclude` is another critical option that is not currently supported. In the direct relay implementation it is optional, BUT it ships with a default value of `["**/node_modules/**", "**/__mocks__/**", "**/__generated__/**"]` to ensure that things like packages in node_modules aren't targeted when they also happen to use GraphQL under the hood. In those cases, node_modules source is recompiled, breaking those libraries. | bug,Turbopack | low | Critical |
2,514,577,539 | pytorch | RuntimeError: Modules with uninitialized parameters can't be used with `DistributedDataParallel` when using subclass of tensor. | ### ๐ Describe the bug
The DDP init call fails when using a subclass of `torch.Tensor`; the same code works with a plain `torch.Tensor`.
Command to run the code:
```
python test.py --max-gpus 2 --batch-size 512 --epoch 10
```
High-level overview of the code example:
1) Define a subclass of `torch.Tensor` called `MyTensor`.
2) Define a subclass of `torch.nn.Module` called `MyLinearLayer`, which uses a tensor of type `MyTensor` to store its weight.
3) Instantiate a model called `Net` using `MyLinearLayer`.
4) Distribute the model `Net` across the given number of GPUs:
```ddp_model = DDP(model, device_ids=[rank],find_unused_parameters=True,static_graph=True )```
5) During initialization following error is reported.
```
free_gpus=[1, 2]
<MyTensor([1., 2., 3.])>
Running DDP on rank 1 batch_size=512.
Running DDP on rank 0 batch_size=512.
W0909 10:36:55.065000 23054275613888 torch/multiprocessing/spawn.py:146] Terminating process 3930793 via signal SIGTERM
Traceback (most recent call last):
File "/prj/qct/lasgpu/users/ankushj/builds/osetml/3.py", line 205, in <module>
main()
File "/prj/qct/lasgpu/users/ankushj/builds/osetml/3.py", line 199, in main
mp.spawn(train, args=(world_size, args.batch_size,args.epoch, args.verbose), nprocs=world_size, join=True)
File "/prj/qct/lasgpu/users/ankushj/tools/miniconda3/envs/osetml/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 282, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/prj/qct/lasgpu/users/ankushj/tools/miniconda3/envs/osetml/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 238, in start_processes
while not context.join():
^^^^^^^^^^^^^^
File "/prj/qct/lasgpu/users/ankushj/tools/miniconda3/envs/osetml/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 189, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/prj/qct/lasgpu/users/ankushj/tools/miniconda3/envs/osetml/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 76, in _wrap
fn(i, *args)
File "/prj/qct/lasgpu/users/ankushj/builds/osetml/3.py", line 126, in train
ddp_model = DDP(model, device_ids=[rank],find_unused_parameters=True,static_graph=False )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/prj/qct/lasgpu/users/ankushj/tools/miniconda3/envs/osetml/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 784, in __init__
self._log_and_throw(
File "/prj/qct/lasgpu/users/ankushj/tools/miniconda3/envs/osetml/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1127, in _log_and_throw
raise err_type(err_msg)
RuntimeError: Modules with uninitialized parameters can't be used with `DistributedDataParallel`. Run a dummy forward pass to correctly initialize the modules
```
6) The same code works if we replace `MyTensor` with a native tensor, i.e.
```
#weight = MyTensor.ones((in_features, out_features),**factory_kwargs)
weight = torch.ones((in_features, out_features),**factory_kwargs)
```
```
free_gpus=[1, 2]
<MyTensor([1., 2., 3.])>
Running DDP on rank 0 batch_size=512.
Running DDP on rank 1 batch_size=512.
[rank1]:[W909 10:38:32.530689521 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank0]:[W909 10:38:32.583329909 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
Training on GPU 0 took 2.22 seconds for 10 epochs for batch_size=512.
Training on GPU 1 took 2.27 seconds for 10 epochs for batch_size=512.
```
7) The underlying issue seems to be related to the call to
```Parameter(weight,requires_grad=True)```
When the weight is a plain `torch.Tensor`, the result no longer matches `torch.nn.parameter.UninitializedParameter`; however, for the subclass `MyTensor`, that check still matches even after explicit initialization.
This can be further demonstrated by the following unit test case:
```
import torch
from torch.nn import Parameter
def check(tensor):
print("*"*20)
print(f'{type(tensor)=}\n{tensor=}')
param = Parameter(tensor)
assert not isinstance(param, torch.nn.parameter.UninitializedParameter)
assert type(param) is torch.nn.parameter.Parameter
# Define a custom tensor subclass
class MyTensor(torch.Tensor):
def __init__(self, *args, **kwargs):
super().__init__()
def custom_method(self):
return self * 2
size=(2,2)
tensor = torch.randn(size)
# Convert the tensor to a Parameter
check(tensor)
float_tensor = torch.FloatTensor(size).uniform_(-1, 1)
# Convert the FloatTensor to a Parameter
check(float_tensor)
# Create an instance of the custom tensor subclass
my_tensor = MyTensor(torch.randn(size))
check(my_tensor)
```
Output:
```
********************
type(tensor)=<class 'torch.Tensor'>
tensor=tensor([[-1.0664, -1.0035],
[ 0.5663, 0.8214]])
********************
type(tensor)=<class 'torch.Tensor'>
tensor=tensor([-0.6655, -0.6205])
********************
type(tensor)=<class '__main__.MyTensor'>
tensor=MyTensor([[-1.6115, -0.8384],
[ 0.0495, 0.1868]])
Traceback (most recent call last):
File "/prj/qct/lasgpu/users/ankushj/builds/osetml/a1.py", line 35, in <module>
check(my_tensor)
File "/prj/qct/lasgpu/users/ankushj/builds/osetml/a1.py", line 9, in check
assert not isinstance(param, torch.nn.parameter.UninitializedParameter)
AssertionError
```
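The surprising `isinstance` result above is consistent with a metaclass-level `__instancecheck__` of the kind `torch.nn.parameter` uses, where an `_is_param` flag on the object can satisfy the check regardless of the class hierarchy. A torch-free sketch of that pattern (simplified — the real check also requires the object to be a `torch.Tensor`):

```python
class _ParamMeta(type):
    # Mimics the pattern in torch.nn.parameter._ParameterMeta: an instance
    # check can be satisfied by a flag on the object, not only inheritance.
    def __instancecheck__(cls, obj):
        return super().__instancecheck__(obj) or getattr(obj, "_is_param", False)

class Parameter(metaclass=_ParamMeta):
    pass

class MyTensor:  # stands in for the torch.Tensor subclass
    pass

t = MyTensor()
t._is_param = True  # set on the object, as Parameter.__new__ does for subclass inputs
print(isinstance(t, Parameter))  # → True, despite no inheritance
```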
Example code to reproduce the DDP init issue.
```
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.nn.parameter import Parameter, UninitializedParameter
from torch.utils.data import Dataset, DataLoader, DistributedSampler
from torchvision import datasets, transforms
import argparse
import os
import pynvml
import time
import torch
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
class MyTensor(torch.Tensor):
def __init__(self, *args, **kwargs):
super().__init__()
@staticmethod
def __new__(cls, *args, **kwargs):
return super().__new__(cls, *args, **kwargs)
@classmethod
def zeros(cls, *args, **kwargs):
return cls(torch.zeros(*args, **kwargs))
@classmethod
def ones(cls, *args, **kwargs):
return cls(torch.ones(*args, **kwargs))
@classmethod
def empty(cls, *args, **kwargs):
return cls(torch.empty(*args, **kwargs))
def __repr__(self, *, tensor_contents=None):
return f"<{super().__repr__()}>"
class MyLinearLayer(torch.nn.Module):
'''My linear nn.Module'''
def __init__(self, in_features, out_features,bias: bool = True, device=None, dtype=None):
'''Creates nn.Parameters for model and configures their initial value'''
factory_kwargs = {'device': device, 'dtype': dtype}
super(MyLinearLayer, self).__init__()
self.in_features = in_features
self.out_features = out_features
weight = MyTensor.ones((in_features, out_features),**factory_kwargs)
#weight = torch.ones((in_features, out_features),**factory_kwargs) #Native tensor works
weight = Parameter(weight,requires_grad=True)
if not isinstance(weight, torch.nn.parameter.UninitializedParameter):
print(f'Error : {isinstance(weight, torch.nn.parameter.UninitializedParameter)=} !\n{type(weight)=}\n{weight=}')
self.weight = weight
self.reset_parameters()
def forward(self, input):
'''Forward pass for simple linear model'''
y=torch.matmul(input,self.weight)
return y
def extra_repr(self):
#return f'MyLinearLayer : w={self.w} output_fmt=[{self.output_fmt}]'
buff=''
for n, p in self.named_parameters():
buff+=f'MyLinearLayer : {n=}:{p=} '
return buff
def reset_parameters(self) -> None:
self.weight.data.fill_(0)
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc = MyLinearLayer(2, 1 )
def forward(self, x):
x = self.fc(x)
return x
def get_free_gpus(threshold=0.20, max_gpus=None):
free_gpus = []
device_count = pynvml.nvmlDeviceGetCount()
for i in range(device_count):
handle = pynvml.nvmlDeviceGetHandleByIndex(i)
mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
free_capacity = mem_info.free / mem_info.total
if free_capacity > threshold:
free_gpus.append(i)
if max_gpus and len(free_gpus) >= max_gpus:
break
print(f"{free_gpus=}")
return free_gpus
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12345'
torch.cuda.set_device(rank)
torch.distributed.init_process_group(backend='nccl', rank=rank, world_size=world_size)
def cleanup():
torch.distributed.destroy_process_group()
class SumDataset(Dataset):
def __init__(self, data_size):
self.data = [([v[0].item(), v[1].item()], v[0].item() + v[1].item()) for v in zip(torch.rand((data_size,)), torch.rand((data_size,)))]
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
inputs, target = self.data[idx]
return torch.tensor(inputs, dtype=torch.float32), torch.tensor(target, dtype=torch.float32)
def train(rank, world_size, batch_size, epoch, verbose=False):
print(f"Running DDP on rank {rank} {batch_size=}.")
setup(rank, world_size)
# Create model and move it to the corresponding GPU
model = Net().to(rank)
if verbose:
print(f'********** Model Description for {rank=} after dummy pass ********** ')
for name, param in model.named_parameters():
print(name, param.data)
pass
ddp_model = DDP(model, device_ids=[rank],find_unused_parameters=True,static_graph=False )
# Use a distributed sampler
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
data_size=2**15
train_dataset = SumDataset(data_size)
train_sampler = DistributedSampler(train_dataset, num_replicas=world_size, rank=rank)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, sampler=train_sampler)
# Loss and optimizer
criterion = nn.MSELoss().to(rank)
#optimizer = optim.SGD(ddp_model.parameters(), lr=0.12)
optimizer = optim.Adam(model.parameters(), lr=0.01)
# Training loop
start_time = time.time()
ddp_model.train()
count = 0
while count < epoch:
count_loss = 0.0
for data, target in (train_loader):
data, target = data.to(rank), target.to(rank).view(-1, 1)
optimizer.zero_grad()
output = ddp_model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
count_loss += loss.item()
loss=count_loss / len(train_loader)
if verbose:
print(f"Rank {rank}, epoch {count}/{epoch}, Loss: {loss:.4f} ")
count += 1
end_time = time.time()
cleanup()
print(f"Training on GPU {rank} took {end_time - start_time:.2f} seconds for {epoch} epochs for {batch_size=}.")
def main():
parser = argparse.ArgumentParser(description="Distributed Training Script")
parser.add_argument('--batch-size', type=int, default=32, help='Batch size for training')
parser.add_argument('--epoch', type=int, default=32, help='Epoch size for training')
parser.add_argument('--max-gpus', type=int, default=1, help='Maximum number of GPUs to use')
parser.add_argument('--verbose', action='store_true', help='Enable verbose output')
args = parser.parse_args()
args.batch_size=max(32,args.batch_size)
args.epoch=max(2,args.epoch)
args.max_gpus=max(1,args.max_gpus)
free_gpus = get_free_gpus(threshold=0.5, max_gpus=args.max_gpus)
if not free_gpus:
print("No GPUs with more than 50% free capacity found.")
return
os.environ['CUDA_VISIBLE_DEVICES'] = ','.join(map(str, free_gpus))
world_size = len(free_gpus)
# Example usage
tensor = MyTensor([1, 2, 3])
print(tensor) # tensor([1, 2, 3])
mp.spawn(train, args=(world_size, args.batch_size,args.epoch, args.verbose), nprocs=world_size, join=True)
torch._dynamo.config.traceable_tensor_subclasses=set([MyTensor])
if __name__ == "__main__":
# Initialize NVML
pynvml.nvmlInit()
main()
# Shutdown NVML
pynvml.nvmlShutdown()
```
### Versions
```
$ python collect_env.py
Collecting environment information...
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1499.983
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4500.04
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.15.0
[pip3] onnxruntime==1.18.1
[pip3] onnxsim==0.4.36
[pip3] torch==2.1.1
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.16.1
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] numpy-base 1.26.4 py311hf175353_0
[conda] pytorch 2.4.0 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.1.1 pypi_0 pypi
[conda] torchaudio 2.4.0 py311_cu121 pytorch
[conda] torchtriton 3.0.0 py311 pytorch
[conda] torchvision 0.16.1 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @albanD | oncall: distributed,tensor subclass | low | Critical |
2,514,609,406 | ollama | Talking to Mistral-Nemo via OpenAI tool calling - fails | ### What is the issue?
With this curl command:
```
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"mistral-nemo:12b-instruct-2407-fp16",
"messages": [
{
"role": "user",
"content": "What is the weather like in Boston?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}' | json_pp
```
we should be able to execute an OpenAI API compatible tool use call against `mistral-nemo`.
But I get this result:
```
{
"choices" : [
{
"finish_reason" : "stop",
"index" : 0,
"message" : {
"content" : "Glad to help! In which unit would you like the temperature?",
"role" : "assistant"
}
}
],
"created" : 1725905432,
"id" : "chatcmpl-677",
"model" : "mistral-nemo:12b-instruct-2407-fp16",
"object" : "chat.completion",
"system_fingerprint" : "fp_ollama",
"usage" : {
"completion_tokens" : 15,
"prompt_tokens" : 95,
"total_tokens" : 110
}
}
```
Is there a missing config or something similar?
Thanks.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.9 | bug,needs more info | low | Major |
2,514,634,364 | rust | Invalid suggestion for ambiguous associated type | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
[Playground link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=683af1bd2460e7a7f5aba5473f9490ba)
```rust
trait Parent {
type Assoc;
}
trait Child: Parent {
// send bound is not necessary for the error, but that's why this associated
// type exists at all
type Assoc: Send;
fn get_associated() -> <Self as Child>::Assoc;
}
pub struct Implementor;
pub struct AssociatedNamedFields {
field: i32,
}
impl Parent for Implementor {
type Assoc = AssociatedNamedFields;
}
impl Child for Implementor {
type Assoc = <Self as Parent>::Assoc;
fn get_associated() -> <Self as Child>::Assoc {
// this line produces the issue:
Self::Assoc { field: 1 }
// this is what rustc suggests, which is incorrect:
// <Self as Child>::Assoc { field: 1 }
}
}
```
I expected to see this happen:
`rustc` wouldn't suggest unstable features - either wouldn't suggest anything or suggest a different fix.
Not sure what the best fix in this case would be - maybe renaming one of the associated types? Or ideally not using it at all.
Instead, this happened:
`rustc` (and `rust-analyzer`) suggest the following replacement:
```rust
<Self as Child>::Assoc { field: 1 }
```
which is not supported syntax:
```
error[E0658]: usage of qualified paths in this context is experimental
--> src/main.rs:31:9
|
31 | <Self as Child>::Assoc { field: 1 }
| ^^^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #86935 <https://github.com/rust-lang/rust/issues/86935> for more information
```
Side note - interestingly, this only happens for structs with named fields - for tuple or unit structs, `rustc` just says:
```
error[E0599]: no associated item named `Assoc` found for struct `Implementor` in the current scope
--> src/main.rs:33:15
|
12 | pub struct Implementor;
| ---------------------- associated item `Assoc` not found for this struct
...
33 | Self::Assoc
| ^^^^^ associated item not found in `Implementor`
```
which is a confusing message by itself.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
I do see the same issue on nightly and beta. I also checked the last several versions and all of them had the same issue.
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
โ dev/rust/assoc (main) โ RUST_BACKTRACE=1 cargo build
Compiling assoc v0.1.0 (/home/erhodes/dev/rust/assoc)
error[E0221]: ambiguous associated type `Assoc` in bounds of `Self`
--> src/main.rs:30:28
|
2 | type Assoc;
| ---------- ambiguous `Assoc` from `Parent`
...
7 | type Assoc: Send;
| ---------------- ambiguous `Assoc` from `Child`
...
30 | fn get_associated() -> Self::Assoc {
| ^^^^^^^^^^^ ambiguous associated type `Assoc`
|
help: use fully-qualified syntax to disambiguate
|
30 | fn get_associated() -> <Self as Child>::Assoc {
| ~~~~~~~~~~~~~~~~~
help: use fully-qualified syntax to disambiguate
|
30 | fn get_associated() -> <Self as Parent>::Assoc {
| ~~~~~~~~~~~~~~~~~~
error[E0221]: ambiguous associated type `Assoc` in bounds of `Self`
--> src/main.rs:31:9
|
2 | type Assoc;
| ---------- ambiguous `Assoc` from `Parent`
...
7 | type Assoc: Send;
| ---------------- ambiguous `Assoc` from `Child`
...
31 | Self::Assoc { field: 1 }
| ^^^^^^^^^^^ ambiguous associated type `Assoc`
|
help: use fully-qualified syntax to disambiguate
|
31 | <Self as Child>::Assoc { field: 1 }
| ~~~~~~~~~~~~~~~~~
help: use fully-qualified syntax to disambiguate
|
31 | <Self as Parent>::Assoc { field: 1 }
| ~~~~~~~~~~~~~~~~~~
For more information about this error, try `rustc --explain E0221`.
error: could not compile `assoc` (bin "assoc") due to 2 previous errors
```
</p>
</details>
| A-diagnostics,A-associated-items,T-compiler,C-bug,D-incorrect | low | Critical |
2,514,637,487 | pytorch | add torch.linalg stubs | ### ๐ The feature, motivation and pitch
torch.linalg doesn't seem to have any stubs, just docstrings.
My IDE just marks everything as "Any", which is not very helpful.
Adding the interface for these functions would make my life much better.
### Alternatives
_No response_
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,module: linear algebra | low | Minor |
2,514,659,073 | deno | `npx <bin-name>` doesn't work after `deno install` on Windows | ```jsonc
// package.json
{
"dependencies": {
"typescript": "^5.6.2"
}
}
```
```shellsession
> deno install
> npx tsc --init
Need to install the following packages:
tsc@2.0.4
Ok to proceed? (y)
```
This was done in powershell. | bug,windows,install | low | Minor |
2,514,669,400 | TypeScript | Template Literal Types derived from the type keys cannot be used as Indexed Access Types in generic contexts | ### ๐ Search Terms
template literal, indexed access, keyof, key of
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about indexed access.
### โฏ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.1-rc#code/PTAEBUE8AcFMGdQHcCWAXAFqAdgVwLawBOKAxqANayTwB0AUGjLKAIKgC8oA2gIwC6AbkbNQAIU6he3ISLigAwpIDePKpABcOAgCNi-Lb1ABfYU3kARFaAAMhk8Pog2AGxcB7JLAAmoFNm9YAA8fUABDUlIERHMEBljXDy9fLm5Wbht+ABo2bgByGzzs3Lx8PSJi9IADABJlUvLjKuKxDJb8wvaG-RzW2vrdYibihTac0YKi8e5uiun+2eGcizHQFcn+WWcLFHgwt09Q-0CQ3wio+BjmOjkWHb2D5Mk07ng0EmwAc3a3j+-p37+f5rV7vIHFFazCHcBaDIjDWS3UAAOXcaFY2AAkpdcLAADzgUDBNCwAKIdw6ABWsFIaAAfJJwDC6up3AAzCDDMyibHwXEEolBElk0AU6m0hlcJn9VkcwkAMlAAApAV9QAAfbRlYga0A6FCffxoXU6dzuFywMLYXV4Ny63ABWBs-w+ACUXKRCnc+GgETQvP5hOJpO85KpNPpjOZygAokFSC5cIE8bKIDl4JAyua6VygA
### ๐ป Code
```ts
// Types with numeric keys.
type A = [1];
type B = 1[];
type C = { [key: number]: 1 };
type D = { 0: 1 };
// Allowed indexed access types.
type Allowed = [A[0], A['0'], A[number], A[`${number}`], B[0], B['0'], B[number], B[`${number}`], C[0], C['0'], C[number], C[`${number}`], D[0], D['0']];
// Disallowed indexed access types.
type Disallowed = [A[string], B[string], C[string], D[string], D[number], D[`${number}`]];
type NotAnIssue<T extends object> = T[`${keyof T}`];
type Issue<T extends object> = T[`${keyof T & (string | number | bigint | boolean | null | undefined)}`];
type CompactIssue<T extends object> = T[`${Exclude<keyof T, symbol>}`];
```
### 🙁 Actual behavior
The template type literal generated from `keyof T` cannot be used as an indexed access type for `T`.
### 🙂 Expected behavior
The template type literal generated from `keyof T` or its restriction can be used as an indexed access type for `T`, since it is restricted to be a stringified version of the key type, which should behave exactly the same as the key type in this context.
### Additional information about the issue
All object types with numeric keys can be accessed with the string version of that key. However, the compiler does not acknowledge this fact in the case of generics with proper restrictions on the type.
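At runtime this equivalence always holds, because JavaScript coerces numeric property keys to strings — a quick check (the `as any` cast only sidesteps the compile-time question under discussion):

```typescript
const arr = [1];
const viaNumber = arr[0];
// Property keys are strings at runtime, so "0" names the same element as 0.
const viaString = (arr as any)["0"];
console.log(viaNumber === viaString); // true
```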
Using `` `${keyof T}` `` directly is prohibited because template literal types cannot be constructed from `symbol` types. However, if the type is restricted to the allowed `string | number | bigint | boolean | null | undefined`, TypeScript is still refusing to use `keyof T` for indexed access.
The issue does not happen with definite (non-generic) types and the issue also does not occur if `` `${Exclude<keyof T, symbol>}` `` is replaced with `Exclude<keyof T, symbol>` (which is still a problem for other use cases). | Suggestion,Awaiting More Feedback | low | Minor |
2,514,671,779 | terminal | sixel unexpected background fill | ### Windows Terminal version
1.22.2362.0
### Windows build number
10.0.26040.0
### Other Software
My own sixel renderer that I just implemented. I don't doubt that it could be something I'm doing, but the symptom doesn't match my read on the sixel stream. This is a link to the function that does all of the sixel stream rendering:
https://github.com/JohnMcPMS/winget-cli/blob/f5f2a9497b561d2b724cbf754d89490b71a391c8/src/AppInstallerCLICore/Sixel.cpp#L156
### Steps to reproduce
This file contains a sixel stream that produces the issue: [sixel.txt](https://github.com/user-attachments/files/16935110/sixel.txt)
`cat sixel.txt` reproduces the issue.
### Expected Behavior
Only pixels within the bounds of the sixel would change.
### Actual Behavior
When I don't enable transparency in the sixel (send `0` for `P2` in the control string), some arbitrary number of cell height rows are filled with an arbitrary color. The conditions only seem to change based on the dimensions of the sixel image, not the color palette that it contains. This feels very much like an issue with reading uninitialized or unowned memory.

*Contains a selection behind the color band to indicate that it is on the sixel plane.* | Area-VT,Issue-Bug,Product-Terminal,In-PR | low | Major |
2,514,682,148 | flutter | SubmenuButton submenus close automatically only on first activation [Android] | ### Steps to reproduce
Create a SubmenuButton with a children menu items
```
SubmenuButton(
leadingIcon: Icon(Symbols.design_services),
menuChildren: [
MenuItemButton(
onPressed: () => print('Pressed'),
child: Text('My submenu item'),
),
],
child: Text('My submenu'),
),
```
Tap on the Submenu button, and the `My submenu item` will flash open and close immediately. The submenu appears on tap down, and disappears on tap release. This only happens the first time. Any subsequent activations work fine.
### Expected results
Tapping on the submenu will show the children and not disappear.
### Actual results
Tapping on the submenu shows and hides the submenu on tap release.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Center(
child: MenuAnchor(
builder: (context, controller, child) => TextButton(
onPressed: () {
controller.isOpen ? controller.close() : controller.open();
},
child: Text('Press here'),
),
menuChildren: [
SubmenuButton(menuChildren: [
MenuItemButton(
onPressed: () {},
child: Text('Menu item'),
),
], child: Text('Submenu')),
],
),
),
),
);
}
}
```
</details>
| platform-android,framework,f: material design,has reproducible steps,P1,customer: quake (g3),team-android,found in release: 3.24,found in release: 3.25 | medium | Critical |
2,514,683,971 | vscode | When opening from elevated commandline returns error code 19 | - VS Code Version: 1.93.0 System Setup
- OS Version: Windows 23H2
Steps to Reproduce:
1. Open VS Code using the command `code .` on an elevated cmd.exe
This error shows up on one machine, works fine on another machine. The issue exists even after reinstalling OS:
[Window Title]
Visual Studio Code
[Main Instruction]
The window terminated unexpectedly (reason: 'launch-failed', code: '18')
[Content]
We are sorry for the inconvenience. You can reopen the window to continue where you left off.
[ ] Don't restore editors [Reopen] [Close]
| freeze-slow-crash-leak,windows,electron,workbench-run-as-admin | low | Critical |
2,514,685,357 | go | net/http: TestClientWriteShutdown/h1 failures | ```
#!watchflakes
default <- pkg == "net/http" && test == "TestClientWriteShutdown/h1"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737264146446772337)):
```
=== RUN TestClientWriteShutdown/h1
=== PAUSE TestClientWriteShutdown/h1
=== CONT TestClientWriteShutdown/h1
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,514,685,401 | go | cmd/go: TestScript/telemetry failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/telemetry"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8737264145430408049)):
```
=== RUN TestScript/telemetry
=== PAUSE TestScript/telemetry
=== CONT TestScript/telemetry
script_test.go:135: 2024-09-09T14:45:39Z
script_test.go:137: $WORK=/opt/golang/swarm/.swarming/w/ir/x/t/cmd-go-test-3295567942/tmpdir3648051217/telemetry1615998762
script_test.go:159:
PATH=/opt/golang/swarm/.swarming/w/ir/x/t/cmd-go-test-3295567942/tmpdir3648051217/testbin:/opt/golang/swarm/.swarming/w/ir/x/w/goroot/bin:/opt/golang/swarm/.swarming/w/ir/x/w/goroot/bin:/opt/golang/swarm/.swarming/w/ir/x/w/goroot/bin:/opt/golang/swarm/.swarming/w/ir/cache/tools/bin:/opt/golang/swarm/.swarming/w/ir/bbagent_utility_packages:/opt/golang/swarm/.swarming/w/ir/bbagent_utility_packages/bin:/opt/golang/swarm/.swarming/w/ir/cipd_bin_packages:/opt/golang/swarm/.swarming/w/ir/cipd_bin_packages/bin:/opt/golang/swarm/.swarming/w/ir/cache/cipd_client:/opt/golang/swarm/.swarming/w/ir/cache/cipd_client/bin:/opt/golang/swarm/.swarming/cipd_cache/bin:/usr/sbin:/usr/bin
HOME=/no-home
CCACHE_DISABLE=1
GOARCH=amd64
...
[stderr]
go: GOTELEMETRYDIR cannot be modified
[exit status 1]
> stderr '^go: GOTELEMETRYDIR cannot be modified$'
matched: go: GOTELEMETRYDIR cannot be modified
# Test issue #69269: 'go telemetry off' should not increment counters.
# Establish that previous commands did write telemetry files. (0.000s)
> exists $userconfig/go/telemetry/local
script_test.go:159: FAIL: testdata/script/telemetry.txt:55: exists /opt/golang/swarm/.swarming/w/ir/x/t/cmd-go-test-3295567942/tmpdir3648051217/telemetry1615998762/userconfig/.config/go/telemetry/local: stat /opt/golang/swarm/.swarming/w/ir/x/t/cmd-go-test-3295567942/tmpdir3648051217/telemetry1615998762/userconfig/.config/go/telemetry/local: no such file or directory
--- FAIL: TestScript/telemetry (2.85s)
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,514,688,539 | langchain | Exception thrown when calling the invoke() method on chat object of class ChatSnowflakeCortex | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Issue reproduced using the Snowflake Cortex integration tutorial in the official documentation here: https://python.langchain.com/v0.2/docs/integrations/chat/snowflake/
during generation at the line `chat.invoke(messages)`
This is from the [langchain-community](https://pypi.org/project/langchain-community/) package and not the core langchain.
### Error Message and Stack Trace (if applicable)
```
...
File ~/opt/anaconda3/envs/cortex_base/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:624, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    621 for i, m in enumerate(messages):
    622     try:
    623         results.append(
--> 624             self._generate_with_cache(
    625                 m,
    626                 stop=stop,
    627                 run_manager=run_managers[i] if run_managers else None,
    628                 **kwargs,
    629             )
    630         )
    631     except BaseException as e:
    632         if run_managers:

File ~/opt/anaconda3/envs/cortex_base/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:846, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    844 else:
    845     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 846         result = self._generate(
    847             messages, stop=stop, run_manager=run_manager, **kwargs
    848         )
    849     else:
    850         result = self._generate(messages, stop=stop, **kwargs)

File ~/opt/anaconda3/envs/cortex_base/lib/python3.10/site-packages/langchain_community/chat_models/snowflake.py:220, in ChatSnowflakeCortex._generate(self, messages, stop, run_manager, **kwargs)
    218     l_rows = self._sp_session.sql(sql_stmt).collect()
    219 except Exception as e:
--> 220     raise ChatSnowflakeCortexError(
    221         f"Error while making request to Snowflake Cortex via Snowpark: {e}"
    222     )
    224 response = json.loads(l_rows[0]["LLM_RESPONSE"])
    225 ai_message_content = response["choices"][0]["messages"]

ChatSnowflakeCortexError: Error while making request to Snowflake Cortex via Snowpark: 'NoneType' object has no attribute 'sql'
```
### Description
* Ideally the invoke() method should return a completion as produced by the Snowflake Cortex `Complete` function.
* The issue seems to be that `_sp_session`, which is used at invocation time, is never set to the actual session obtained during validation. Since `_sp_session` defaults to `None`, calling the Snowpark session's `sql()` method raises the `'NoneType' object has no attribute 'sql'` error above.
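A minimal sketch of the suspected failure mode — this is illustrative only, not the actual `ChatSnowflakeCortex` code; the class and attribute names simply mirror the report:

```python
class ChatSketch:
    """Toy model of a chat class whose session attribute defaults to None."""

    _sp_session = None  # suspected bug: validation never replaces this default

    def _generate(self, sql_stmt):
        try:
            # Mirrors `self._sp_session.sql(sql_stmt).collect()` from the trace.
            return self._sp_session.sql(sql_stmt).collect()
        except Exception as e:
            raise RuntimeError(
                f"Error while making request to Snowflake Cortex via Snowpark: {e}"
            )
```

Calling `ChatSketch()._generate(...)` reproduces the same `'NoneType' object has no attribute 'sql'` message, which is consistent with `_sp_session` never being assigned the validated Snowpark session.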
### System Info
langchain==0.2.16
langchain-community==0.2.16
langchain-core==0.2.38
langchain-text-splitters==0.2.4
platform: mac
python==3.10 | 🤖:bug,investigate | low | Critical |
2,514,723,857 | kubernetes | Fix issue where alpha APIs that have `k8s:prerelease-lifecycle-gen:introduced` have an auto generated `APILifecycleRemoved` (should only be for beta/GA APIs) | ### What happened?
**EDIT: currently all the info in this issue talks about APIs generally, but it should say *alpha* APIs: the current `APILifecycleRemoved` policy is being applied to alpha APIs when it is unclear it should be, and that is the root of the issue**
Currently when using the `// +k8s:prerelease-lifecycle-gen:introduced=<version>` tag on API types.go files, the associated generated code (`zz_generated.prerelease-lifecycle.go`) creates an APILifecycleRemoved method for that API automatically set for introduced + 6 minor versions. Specifically, there are a number of APIs that have // +k8s:prerelease-lifecycle-gen:introduced=1.26 and this means that these APIs have APILifecycleRemoved methods targeting v1.32. This means when attempting to bump `DefaultKubeBinaryVersion` from v1.31 -> v1.32 there are integration test failures where the test expects APIs to exist that were removed via the above flow (more details in other sections). Below is an example of such an entry from `master` @ HEAD:
File: ./staging/src/k8s.io/api/authentication/v1alpha1/zz_generated.prerelease-lifecycle.go
```
// APILifecycleRemoved is an autogenerated function, returning the release in which the API is no longer served as int versions of major and minor for comparison.
// It is controlled by "k8s:prerelease-lifecycle-gen:removed" tags in types.go or "k8s:prerelease-lifecycle-gen:deprecated" plus three minor.
func (in *SelfSubjectReview) APILifecycleRemoved() (major, minor int) {
return 1, 32
```
For the integration tests, I have validated it is what [folks mentioned in the PR comments](https://github.com/kubernetes/kubernetes/pull/126977#issuecomment-2332596429) - the usage of `// +k8s:prerelease-lifecycle-gen:introduced=1.26`.
It seems that for every API where this tag is used, an APILifecycleRemoved method is added to all of the related generated code with 1.32 as the version in which to remove that API. This makes the associated tests fail. If you manually change these values to 1.33, the tests all pass.
```
diff --git a/staging/src/k8s.io/api/authentication/v1alpha1/zz_generated.prerelease-lifecycle.go b/staging/src/k8s.io/api/authentication/v1alpha1/zz_generated.prerelease-lifecycle.go
index 62a70a781d1..af598cb4db1 100644
---
// APILifecycleRemoved is an autogenerated function, returning the release in which the API is no longer served as int versions of major and minor for comparison.
// It is controlled by "k8s:prerelease-lifecycle-gen:removed" tags in types.go or "k8s:prerelease-lifecycle-gen:deprecated" plus three minor.
func (in *SelfSubjectReview) APILifecycleRemoved() (major, minor int) {
- return 1, 32
+ return 1, 33
}
```
For further evidence that this is the root cause of the failing integration tests at https://github.com/kubernetes/kubernetes/pull/126977, below is the list of failing integration tests:
- TestGetsSelfAttributes
- TestCTBAttestPlugin
- TestCTBSignerNameFieldSelector
- TestCTBSignerNameChangeForbidden
- TestEncryptAll
- TestEtcdStoragePath
- TestAPIServerMetrics
and here are the k8s types that have APILifecycleDeprecated or APILifecycleRemoved set for 1.32:
APILifecycleDeprecated
- APIGroupDiscovery (from apidiscovery/v2beta1)
- APIGroupDiscoveryList (from apidiscovery/v2beta1)
- VolumeAttributesClass (from storage/v1alpha1)
- VolumeAttributesClassList (from storage/v1alpha1)
APILifecycleRemoved
- SelfSubjectReview (from authentication/v1alpha1)
- FlowSchema (from flowcontrol/v1beta3)
- FlowSchemaList (from flowcontrol/v1beta3)
- PriorityLevelConfiguration (from flowcontrol/v1beta3)
- PriorityLevelConfigurationList (from flowcontrol/v1beta3)
- ClusterTrustBundle (from certificates/v1alpha1)
- ClusterTrustBundleList (from certificates/v1alpha1)
See the gist here for the full code snippets showing the above methods:
https://gist.github.com/aaron-prindle/3e8a5c3cef3b8ff10763be0d4858254d
You can see that the k8s types above align 1:1 with the integration test failures stating that k8s objects don't exist.
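For reference, the version arithmetic the generator appears to apply (per the generated comments above: deprecation at introduced + 3 minors, removal three minors after that) can be sketched as:

```python
def alpha_lifecycle(introduced):
    """Sketch of the prerelease-lifecycle-gen version math for an alpha API.

    `introduced` is a (major, minor) tuple; deprecated = +3 minors and
    removed = +6 minors, which is how introduced=1.26 yields removal in 1.32.
    """
    major, minor = introduced
    return {"deprecated": (major, minor + 3), "removed": (major, minor + 6)}
```

This matches the failing case: `alpha_lifecycle((1, 26))["removed"]` is `(1, 32)`, exactly the version the `DefaultKubeBinaryVersion` bump collides with.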
In speaking offline w/ @jpbetz, it seems a viable short-term solution to unblock PR https://github.com/kubernetes/kubernetes/pull/126977 is to make `APILifecycleRemoved` & `APILifecycleDeprecated` non-blocking so they only warn. This would mean modifying logic in the two places APILifecycleRemoved is currently called:
- staging/src/k8s.io/apiserver/pkg/endpoints/deprecation/deprecation.go
- staging/src/k8s.io/apiserver/pkg/server/deleted_kinds.go
### What did you expect to happen?
I expected to be able to bump DefaultKubeBinaryVersion s/v1.31/v1.32 without any integration test failures.
### How can we reproduce it (as minimally and precisely as possible)?
Modify the code [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-base/version/base.go#L69) to bump DefaultKubeBinaryVersion and then run the kubernetes integration tests. For a quick validation, you can run: test/integration/auth/selfsubjectreview_test.go `TestGetsSelfAttributes` | kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,514,727,207 | pytorch | reduce torch.compile default logs | ### 🐛 Describe the bug
Got these logs by default while running torch.compile: https://gist.github.com/jerryzh168/68f3a0b53908df9ae7e767cb20734eec while running this test: https://gist.github.com/jerryzh168/c2d4ce9c95d25b037a4c636a05f84fb7
### Versions
main
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | module: logging,triaged,oncall: pt2,module: inductor | low | Critical |
2,514,742,747 | neovim | Nread causes E676: No matching autocommands for buftype=acwrite buffer | ### Problem
Using `:Nread scp ..` twice causes the error "E676: No matching autocommands for buftype=acwrite buffer" when writing the file that was open before executing Nread.
Seems that Nread/netrw is changing the other buffer's buftype?
### Steps to reproduce
I only know how to reproduce using scp/ssh. Assuming you already have an SSH server running:
`nvim --clean`
Open any random file. e.g. `:e ~/.gitconfig`
`:Nread scp://127.0.0.1//`
`C-^`
`:Nread scp://127.0.0.1//`
`C-^`
`:w`
### Expected behavior
Not getting an error
### Neovim version (nvim -v)
0.10.1
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
N/A Reproduced on two different distributions
### Terminal name/version
N/A
### $TERM environment variable
N/A
### Installation
Build from repo | bug,bug-vim,netrw | low | Critical |
2,514,748,742 | rust | Using `GetFileInformationByName` in Windows for file stat/metadata operations | Continuing discussion from https://github.com/rust-lang/rust/pull/128256...
Currently, Windows' fs.rs uses the `GetFileInformationByHandle` API, which requires opening and closing a file handle. A new API will be available in future builds of Windows (from documentation it should be around 24H2/26052) called `GetFileInformationByName` which does not require opening a file handle. This reduces 2-3 syscalls in this code path, which can yield a reasonable performance gain.
There are a few design considerations however,
- Not all file-system drivers support this API, for example FAT32
- Backwards compatibility stops at 1709, i.e. this feature would effectively be gated behind `#[cfg(not(target_vendor = "win7"))]`
Currently, this change would provide tier-1 support for https://github.com/rust-lang/rust/issues/121478 since it includes all the fields needed in one call, in addition to removing the extra syscall for handling reparse points.
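As a rough Python analogue of the saving (illustrative only, not the Rust code): `os.stat` queries metadata by name, while the handle-based route pays for an extra open/close pair — mirroring `GetFileInformationByName` vs. `CreateFile` + `GetFileInformationByHandle` + `CloseHandle`:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

# By-name stat: the caller never holds an open handle.
st_by_name = os.stat(path)

# Handle-based stat: open + fstat + close, i.e. the extra syscalls.
fd = os.open(path, os.O_RDONLY)
st_by_handle = os.fstat(fd)
os.close(fd)

print(st_by_name.st_size == st_by_handle.st_size)  # True
os.remove(path)
```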
## Prior Art:
- https://github.com/libuv/libuv/pull/4327
- https://github.com/python/cpython/commit/0f175766e27642108c65bba04bbd54dcf8799b0e
## Links:
- GetFileInformationByName - https://learn.microsoft.com/windows/win32/api/winbase/nf-winbase-getfileinformationbyname
- Blog post - https://blogs.windows.com/windows-insider/2024/02/08/announcing-windows-11-insider-preview-build-26052-canary-and-dev-channels/
cc: @ChrisDenton | I-slow,O-windows,T-libs,C-optimization,A-filesystem | low | Major |
2,514,822,682 | flutter | `flutter test -d` should give a warning/error | ### Use case
I often mistakenly run things like `flutter test -d chrome`[^1], with the expectation that my test platform will be Chrome. Instead of erroring or telling me what I did wrong, the tests will be executed with the default platform without a problem. I'll admit that this has tricked me into thinking that my tests passed on Chrome before...
[^1]: That's the correct flag for `flutter run`, but for `flutter test` it should be `--platform=chrome`.
### Proposal
Instead, if `flutter test` receives an invalid `-d` flag, it should output a suggestion to try the `--platform` flag. I'd also argue that it shouldn't run the tests at all in that scenario. | a: tests,c: new feature,tool,P2,team-tool,triaged-tool | low | Critical |
2,514,879,008 | PowerToys | Failed to load PowerToys.ImageResizerExt.dll | ### Microsoft PowerToys version
Failed to load PowerToys.ImageResizerExt.dll
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General, Always on Top, Command not found
### Steps to reproduce
Failed to load PowerToys.ImageResizerExt.dll
[PowerToysReport_2024-09-10-04-34-12.zip](https://github.com/user-attachments/files/16936381/PowerToysReport_2024-09-10-04-34-12.zip)
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,514,882,758 | go | x/tools/gopls: crash in Hover (telemetry) | ```
#!stacks
"runtime.sigpanic" && ("golang.hover:+170" || "golang.hover:+209")
```
This stack `zUGLQA` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-09-07.json):
```go
if def, ok := pkg.TypesInfo().Defs[ident]; ok && ident.Pos() == def.Pos() {
```
Looks like `Defs[ident]=nil` is an actual map entry.
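A Python analogue of the suspected failure mode (illustrative, not the actual Go code): the key is present in the map but its value is nil, so the membership/comma-ok check succeeds and the later dereference panics:

```python
# Defs maps identifiers to definition objects; a present-but-None entry
# passes the lookup, then blows up on member access (like a nil deref in Go).
defs = {"ident": None}

d = defs.get("ident", "missing")
print("ident" in defs, d is None)  # True True: entry exists, value is "nil"
try:
    d.Pos()  # AttributeError here mirrors the sigpanic at hover.go:+170
except AttributeError as e:
    print("deref failed:", e)
```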
`crash/crash`
[`runtime.gopanic:+69`](https://cs.opensource.google/go/x/go/+/go1.23.0:../../../../Users/adonovan/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.0.darwin-arm64/src/runtime/panic.go;l=804)
`runtime.panicmem:=262`
[`runtime.sigpanic:+19`](https://cs.opensource.google/go/x/go/+/go1.23.0:../../../../Users/adonovan/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.0.darwin-arm64/src/runtime/signal_unix.go;l=900)
[`golang.org/x/tools/gopls/internal/golang.hover:+170`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/golang/hover.go;l=304)
[`golang.org/x/tools/gopls/internal/golang.Hover:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/golang/hover.go;l=109)
[`golang.org/x/tools/gopls/internal/server.(*server).Hover:+30`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/server/hover.go;l=51)
[`golang.org/x/tools/gopls/internal/protocol.serverDispatch:+335`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/protocol/tsserver.go;l=503)
[`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.ServerHandler.func3:+5`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/protocol/protocol.go;l=160)
[`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.handshaker.func4:+52`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/lsprpc/lsprpc.go;l=509)
[`golang.org/x/tools/gopls/internal/protocol.Handlers.MustReplyHandler.func1:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:../../../../Users/adonovan/go/pkg/mod/golang.org/x/tools@v0.22.1-0.20240628205440-9c895dd76b34/internal/jsonrpc2/handler.go;l=35)
[`golang.org/x/tools/gopls/internal/protocol.Handlers.AsyncHandler.func2.2:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:../../../../Users/adonovan/go/pkg/mod/golang.org/x/tools@v0.22.1-0.20240628205440-9c895dd76b34/internal/jsonrpc2/handler.go;l=103)
`runtime.goexit:+0`
```
golang.org/x/tools/gopls@v0.16.1 go1.23.0 darwin/amd64 vscode (1)
```
Issue created by golang.org/x/tools/gopls/internal/telemetry/cmd/stacks.
Dups: ylB3Iw x2v5eg | NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,514,890,750 | PowerToys | Screen ruler no mouse cursor | ### Microsoft PowerToys version
0.84.1
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
Screen ruler
### Steps to reproduce
Activate the tool. Select Bounds, move to the target area, then press and hold while moving to define the area; this works. Press any of the other 3 options and move off the tool, and there is no mouse cursor to move to the area to be measured.
### โ๏ธ Expected Behavior
The mouse cursor should remain visible to allow locating the point to be measured.
### โ Actual Behavior
The mouse cursor disappears when off the tool UI; moving the mouse back over the tool makes the cursor reappear.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,514,923,561 | godot | Android editor: Game crash after changing device language | ### Tested versions
Godot Engine v4.3.stable.official [77dcf97d8]
### System information
Xiaomi Redmi A2+, Android 13, compatible rendering
### Issue description
After changing the device language, opening the game will hang for a few seconds then exit.
This issue occurs the same way on the Android x86 emulator.
### Steps to reproduce
1. Open the game
2. Leave the game run in background and go to device system settings
3. Change device display language
4. Re-enter the game (the game will hang for a few seconds then exit)
### Minimal reproduction project (MRP)
[demo.zip](https://github.com/user-attachments/files/16936623/demo.zip)
| bug,platform:android,topic:editor,crash,regression | low | Critical |
2,514,965,741 | flutter | Garbage collector test for `menu_anchor` sometimes flakes | https://github.com/flutter/flutter/pull/154843 flakes while running `Mac framework_tests_impeller`, even though its not possible that any of the code in this PR affected this test suite.
The [specific failure](https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8737244374199615553/+/u/run_test.dart_for_framework_tests_shard_and_subshard_impeller/stdout) is as follows:
```txt
06:10 +7937 ~21: /Volumes/Work/s/w/ir/x/w/flutter/packages/flutter/test/material/menu_anchor_test.dart: Garbage collector destroys child _MenuAnchorState after parent is closed
โโโก EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
The following TestFailure was thrown running a test:
Expected: null
Actual: _SubmenuButtonState:<_SubmenuButtonState#31f73(lifecycle state: defunct, not mounted)>
When the exception was thrown, this was the stack:
#4 main.<anonymous closure> (file:///Volumes/Work/s/w/ir/x/w/flutter/packages/flutter/test/material/menu_anchor_test.dart:4508:7)
<asynchronous suspension>
#5 testWidgets.<anonymous closure>.<anonymous closure> (package:flutter_test/src/widget_tester.dart:189:15)
<asynchronous suspension>
#6 TestWidgetsFlutterBinding._runTestBody (package:flutter_test/src/binding.dart:1032:5)
<asynchronous suspension>
<asynchronous suspension>
(elided one frame from package:stack_trace)
This was caught by the test expectation on the following line:
file:///Volumes/Work/s/w/ir/x/w/flutter/packages/flutter/test/material/menu_anchor_test.dart line 4508
The test description was:
Garbage collector destroys child _MenuAnchorState after parent is closed
```
This test was somewhat recently added in https://github.com/flutter/flutter/pull/149586.
The following diff is somewhat suspicious:
```diff
+ // Garbage collect. 1 should be enough, but 3 prevents flaky tests.
+ await tester.runAsync<void>(() async {
+ await forceGC(fullGcCycles: 3);
+ });
```
I suspect if we don't know why a single `forceGC` doesn't work, we can't guarantee `3` will work either. | P2,c: flake,team-design,triaged-design | low | Critical |
2,515,008,358 | pytorch | torch.nn.InstanceNorm2d and torch.nn.InstanceNorm3d returns nan with tensors of float16 dtype on cpu | ### ๐ Describe the bug
When I run this code on a tensor of type float16 of shape (1, 3, 256, 256) through torch.nn.InstanceNorm2d, I get a tensor with NaN values on CPU, while the GPU produces a non-NaN tensor. I am providing the tensor in a pickle file which can be downloaded from here: [link](https://drive.google.com/file/d/1-Qm6x7APyxGWSu92svOY3l_mBirlgsdp/view?usp=drive_link)
Minimal repro:
```python
import torch
import pickle
with open('input_instancenorm2d.pickle', 'rb') as f:
input_dict = pickle.load(f)
a = torch.tensor(input_dict['input'], dtype=torch.float16)
print(a.shape)
# torch.Size([1, 3, 256, 256])
print(a.dtype)
# torch.float16
num_features = 3
eps = -0.5
m = torch.nn.InstanceNorm2d(num_features=num_features, eps=eps)
output_cpu = m(a)
output_gpu = m.cuda()(a.cuda())
print(output_cpu[0][0][0][0])
# tensor(nan, dtype=torch.float16)
print(output_gpu[0][0][0][0])
# tensor(-1.0039, device='cuda:0', dtype=torch.float16)
```
It can also be reproduced on [colab](https://colab.research.google.com/drive/1X0AowednmpH79htWyiti3V8gG80pvpwq?usp=sharing). Just need to upload the pickle file to the files tab after connecting to a runtime.
Note: I found the tensor with fuzzing.
UPDATE: Also reproduced with `InstanceNorm3d` and a positive eps value. Repro (requires downloading [instancenorm3d.safetensors](https://drive.google.com/file/d/1w8cHlHnUuxQZd_4_rak18dLH6IQoCf6H/view?usp=sharing)):
```python
import torch
import numpy as np
from safetensors import safe_open
with safe_open("instancenorm3d.safetensors", framework="pt", device='cpu') as f:
input_tensor = f.get_tensor('input')
num_features = 16
eps = 0.2541456040680159
m = torch.nn.InstanceNorm3d(num_features=num_features, eps=eps)
out_cpu = m(input_tensor)
out_gpu = m.cuda()(input_tensor.cuda())
print(f"Nan elements in out_cpu: {torch.count_nonzero(torch.isnan(out_cpu)).item()}") # 5308416
print(f"Nan elements in out_gpu: {torch.count_nonzero(torch.isnan(out_gpu)).item()}") # 0
np.testing.assert_allclose(out_cpu.numpy(), out_gpu.cpu().numpy(), atol=100, equal_nan=True)
# AssertionError:
# Not equal to tolerance rtol=1e-07, atol=100
# x and y nan location mismatch:
# x: array([[[[nan, nan, nan, ..., nan, nan, nan],
# [nan, nan, nan, ..., nan, nan, nan],
# [nan, nan, nan, ..., nan, nan, nan],...
# y: array([[[[ 9.9121e-02, 1.9688e+00, 2.3315e-01, ..., 8.0615e-01,
# 1.1758e+00, 1.6028e-01],
# [-4.2139e-01, 1.1787e+00, 4.1565e-02, ..., 8.2568e-01,...
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 5333.317
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: nn,module: cpu,triaged | low | Critical |
2,515,043,008 | tauri | [bug][v2][android] Failed to create a symbolic link | ### Describe the bug
I try to create an Android app with Nuxt 3.13.1.
I'm on Windows 11, I followed all the Windows and Nuxt 3 prerequisites but when I launch the android dev command and I choose the "Medium_Phone_API_35" emulator I got this error :
```SHELL
Error Failed to create a symbolic link from
"E:\\Projects\\brain\\src-tauri\\target\\aarch64-linux-android\\debug\\libapp_lib.so"
to file
"E:\\Projects\\brain\\src-tauri\\gen/android\\app/src/main/jniLibs/arm64-v8a\\libapp_lib.so"
(file clobbering enabled): IO error: Incorrect function. (os error 1)
```
### Reproduction
1. Follow the v2 Windows prerequisites
2. Create a fresh new Nuxt 3 app using `npx nuxi@latest init app`
3. Instal Tauri CLI with `npm install -D @tauri-apps/cli@next`
4. Init Tauri `npx tauri init`
5. Follow the example configuration from https://v2.tauri.app/start/frontend/nuxt/
6. Try to start the android dev server with `npx tauri android dev`
7. Choose the `Medium_Phone_API_35` emulator
### Expected behavior
The Nuxt start page launched on the emulator, I guess
### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
โ WebView2: 128.0.2739.67
โ MSVC: Visual Studio Build Tools 2022
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.12.2
- pnpm: 8.8.0
- yarn: 1.22.19
- npm: 10.7.0
[-] Packages
- tauri ๐ฆ: 2.0.0-rc.10
- tauri-build ๐ฆ: 2.0.0-rc.9
- wry ๐ฆ: 0.43.1
- tao ๐ฆ: 0.30.0
- @tauri-apps/api ๎: not installed!
- @tauri-apps/cli ๎: 2.0.0-rc.12
[-] Plugins
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:3000/
- framework: Vue.js (Nuxt)
- bundler: Webpack
```
### Stack trace
```text
Compiled plugins/server.mjs in 8622.28ms
Compiled plugins/client.mjs in 8675.9ms
Compiled types/plugins.d.ts in 9793.97ms
Vite client built in 575ms
Vite server built in 2030ms
warning: hard linking files in the incremental compilation cache failed. copying files instead. consider moving the cache directory to a file system which supports hard linking in session dir
\\?\E:\Projects\brain\src-tauri\target\x86_64-linux-android\debug\incremental\app_lib-2bc6ctgt3lhcg\s-gzqpludq51-062b9uj-working
warning: `app` (Lib) generated 1 warning
Finished `dev' profile [unoptimized + debuginfo] target(s) in 58.55s
Info symlinking lib "E:\\Projects\\brain\\src-tauri\\target\\x86_64-linux-android\\debug\\libapp_lib.so" in jniLibs dir "E:\\Projects\\brain\\src-tauri\\gen/android\\app/src/main/jniLibs/x86_64"
Nuxt Nitro server built in 27645 ms
Vite client warmed up in 1ms
Error Failed to create a symbolic link from "E:\\Projects\\brain\\src-tauri\\target\\x86_64-linux-android\\debug\\libapp_lib.so" to file
"E:\\Projects\\brain\\src-tauri\\gen/android\\app/src/main/jniLibs/x86_64\\libapp_lib.so" (file clobbering enabled): IO error: Incorrect function. (os error 1)
```
### Additional context
_No response_ | type: bug,scope: cli.rs,status: upstream,platform: Windows,platform: Android | low | Critical |
2,515,048,013 | tensorflow | How to pack TFRT into wheel? And use it in saved_model_cli. | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.16.1
### Current behavior?
I have compiled the whole of TF successfully with the commands below, and there is a tfrt directory in ./bazel-bin/tensorflow/core.
But after I run build_pip_package, the tfrt package doesn't show up under /tensorflow/core/ in the wheel.
I want to use TFRT in serving and elsewhere.
### Standalone code to reproduce the issue
```shell
bazel build --config=release_cpu_linux --config=tf_public_cache --build_event_json_file=/tf/pkg/bep.json tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tf/pkg --cpu
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:build/install,subtype:bazel,2.17 | low | Critical |
2,515,052,491 | vscode | [Accessibility] Make Inline Edit accessible to screen reader users | Type: <b>Bug</b>
CC @meganrogge
Screen reader users cannot use "Inline Edit."
1. Activate screen reader
1. In any source editor, focus in a symbol
1. Run "Trigger Inline Edit, Control+Alt+="
## Current Behavior
Nothing is announced
## Expected Behavior
some actionable event needs to be triggered for screen readers.
VS Code version: Code - Insiders 1.94.0-insider (dc9412125d4e0a480593962ae2687e74e64af728, 2024-09-09T17:10:54.809Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1145G7 @ 2.60GHz (8 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.71GB (4.21GB free)|
|Process Argv|--crash-reporter-id b05b88e5-8894-4031-ae34-fa034ebddea9|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (125)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-openapi|42C|4.28.1
zotenote|A-W|1.0.1
android-dev-ext|ade|1.4.0
aiprm-lang|AIP|0.0.2
Bookmarks|ale|13.5.0
openscad|Ant|1.2.2
spellright|ban|3.0.136
mermaid-markdown-syntax-highlighting|bpr|1.6.0
external-pdf|cha|1.2.0
doxdocgen|csc|1.4.0
vscode-markdownlint|Dav|0.53.0
vscode-eslint|dba|3.0.10
vscode-quick-select|dba|0.2.9
vscode-deno|den|3.39.0
gitlens|eam|14.6.1
EditorConfig|Edi|0.16.4
prettier-vscode|esb|10.1.0
figma-vscode-extension|fig|0.3.5
vscode-firefox-debug|fir|2.9.10
shell-format|fox|7.2.5
vscode-google-translate|fun|1.4.13
codespaces|Git|1.17.2
copilot|Git|1.229.1095
copilot-chat|Git|0.21.2024090602
remotehub|Git|0.64.0
vscode-github-actions|git|0.26.2
vscode-pull-request-github|Git|0.97.2024090514
cloudcode|goo|2.17.0
overleaf-workshop|iam|0.13.2
cslpreview|igo|0.2.2
path-autocomplete|ion|1.25.0
latex-workshop|Jam|10.3.0
lilypond-syntax|jea|0.1.1
scheme|jea|0.2.0
better-cpp-syntax|jef|1.17.2
commitlint|jos|2.6.0
language-julia|jul|1.122.1
google-search|kam|0.0.1
vscode-lua-format|Koi|1.3.8
lilypond-formatter|lhl|0.2.3
lilypond-pdf-preview|lhl|0.2.8
lilypond-snippets|lhl|0.1.1
vslilypond|lhl|1.7.3
language-matlab|Mat|1.2.5
git-graph|mhu|1.30.0
azure-dev|ms-|0.8.3
vscode-azureappservice|ms-|0.25.3
vscode-azurecontainerapps|ms-|0.6.1
vscode-azurefunctions|ms-|1.15.3
vscode-azureresourcegroups|ms-|0.8.3
vscode-azurestaticwebapps|ms-|0.12.2
vscode-azurestorage|ms-|0.16.1
vscode-azurevirtualmachines|ms-|0.6.5
vscode-cosmosdb|ms-|0.22.0
vscode-docker|ms-|1.29.2
vscode-edge-devtools|ms-|2.1.5
black-formatter|ms-|2024.3.12071014
debugpy|ms-|2024.11.2024082901
flake8|ms-|2023.13.12291011
isort|ms-|2023.13.12321012
python|ms-|2024.15.2024090406
vscode-pylance|ms-|2024.9.1
jupyter|ms-|2024.9.2024090801
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.8
vscode-jupyter-slideshow|ms-|0.1.5
remote-containers|ms-|0.385.0
remote-ssh|ms-|0.115.2024090921
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.81.8
vscode-remote-extensionpack|ms-|0.25.0
azure-account|ms-|0.12.0
azure-repos|ms-|0.40.0
cmake-tools|ms-|1.19.51
cpptools|ms-|1.22.2
cpptools-extension-pack|ms-|1.3.0
js-debug-nightly|ms-|2024.9.517
remote-explorer|ms-|0.5.2024011009
remote-repositories|ms-|0.42.0
remote-server|ms-|1.6.2024011109
vscode-github-issue-notebooks|ms-|0.0.130
vscode-node-azure-pack|ms-|1.2.0
vscode-selfhost-test-provider|ms-|0.3.25
vscode-serial-monitor|ms-|0.12.0
vscode-speech|ms-|0.10.0
vscode-speech-language-pack-en-ca|ms-|0.4.0
vscode-speech-language-pack-en-gb|ms-|0.4.0
vscode-speech-language-pack-ko-kr|ms-|0.4.0
vsliveshare|ms-|1.0.5936
windows-ai-studio|ms-|0.5.2024090301
autodocstring|njp|0.6.1
pandocciter|not|0.10.4
typst-lsp|nva|0.13.0
publisher|pos|1.1.6
shiny|Pos|1.1.0
shinyuieditor|pos|0.5.0
quarto|qua|1.114.0
r-debugger|RDe|0.5.5
java|red|1.34.0
vscode-xml|red|0.27.1
vscode-yaml|red|1.14.0
r|REd|2.8.4
multi-command|ryu|1.6.0
AudioQ|Seh|0.0.2
vscode-deepl|soe|1.1.1
abc-music|sof|0.4.0
lua|sum|3.10.5
latex-utilities|tec|0.4.14
cmake|twx|0.0.17
vscode-terminal-here|Tyr|0.2.4
windows-terminal|Tyr|0.7.0
errorlens|use|3.16.0
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.2.30
vscode-conventional-commits|viv|1.26.0
vscode-arduino|vsc|0.7.1
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.0
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
markdown-all-in-one|yzh|3.6.2
grammarly|znc|0.25.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
01bff139:31013167
a69g1124:31018687
dvdeprecation:31040973
dwnewjupyter:31046869
impr_priority:31057980
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31119334
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-c:31125598
cf971741:31111988
jh802675:31132134
e80f6927:31120813
autoexpandse:31133494
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | feature-request,accessibility | low | Critical |
2,515,057,199 | deno | Memory leak with ky library | Tested with 2 versions :
```
deno 2.0.0-rc.1 (release candidate, release, x86_64-unknown-linux-gnu)
v8 12.9.202.5-rusty
typescript 5.5.2
---
deno 1.46.3 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.5-rusty
typescript 5.5.2
```
I am using the `ky` library to request a URL with a bad status (ky throws on HTTP errors by default).
In the first test, it throws the HTTPError as I expected.
But when I want to handle the error myself, the test reports a memory leak (same URL, same ky function call).
Is this issue related to Deno, or to ky?
Here is a minimal reproduction code :
```ts
import ky from "https://esm.sh/ky@1.7.2";
Deno.test("throws HTTPError : no memory leak", async () => {
// this test throws the expected error and does not leak memory
// HTTPError: Request failed with status code 404 Not Found: GET https://github.com/not_found
await ky.get("https://github.com/not_found").text();
});
Deno.test("memory leak when catching error", async () => {
// this test report a memory leak when catching the error
// the memory leak is fixed if we rethrow the error
// Leaks detected:
// - A fetch response body was created during the test, but not consumed during the test. Consume or close the response body `ReadableStream`, e.g `await resp.text()` or `await resp.body.cancel()`.
try {
await ky.get("https://github.com/not_found").text();
} catch (error) {
console.error("I handle the error here");
// fix the leak : rethrow the error
// throw error;
}
});
```
| needs investigation,node compat | low | Critical |
2,515,058,278 | svelte | "Unexpected token" parse error | ### Describe the bug
Svelte 5 fails to parse code that Svelte 4 handled:
```
$: jobDetails = <Partial<Record<JobName, JobDetails>>>{
[JobName.ThumbnailGeneration]: {
icon: mdiFileJpgBox,
title: $getJobName(JobName.ThumbnailGeneration),
subtitle: $t('admin.thumbnail_generation_job_description'),
},
[JobName.MetadataExtraction]: {
icon: mdiTable,
title: $getJobName(JobName.MetadataExtraction),
subtitle: $t('admin.metadata_extraction_job_description'),
},
[JobName.Library]: {
icon: mdiLibraryShelves,
title: $getJobName(JobName.Library),
subtitle: $t('admin.library_tasks_description'),
allText: $t('all').toUpperCase(),
missingText: $t('refresh').toUpperCase(),
},
[JobName.Sidecar]: {
title: $getJobName(JobName.Sidecar),
icon: mdiFileXmlBox,
subtitle: $t('admin.sidecar_job_description'),
allText: $t('sync').toUpperCase(),
missingText: $t('discover').toUpperCase(),
disabled: !$featureFlags.sidecar,
},
[JobName.SmartSearch]: {
icon: mdiImageSearch,
title: $getJobName(JobName.SmartSearch),
subtitle: $t('admin.smart_search_job_description'),
disabled: !$featureFlags.smartSearch,
},
[JobName.DuplicateDetection]: {
icon: mdiContentDuplicate,
title: $getJobName(JobName.DuplicateDetection),
subtitle: $t('admin.duplicate_detection_job_description'),
disabled: !$featureFlags.duplicateDetection,
},
[JobName.FaceDetection]: {
icon: mdiFaceRecognition,
title: $getJobName(JobName.FaceDetection),
subtitle: $t('admin.face_detection_description'),
handleCommand: handleConfirmCommand,
disabled: !$featureFlags.facialRecognition,
},
[JobName.FacialRecognition]: {
icon: mdiTagFaces,
title: $getJobName(JobName.FacialRecognition),
subtitle: $t('admin.facial_recognition_job_description'),
handleCommand: handleConfirmCommand,
disabled: !$featureFlags.facialRecognition,
},
[JobName.VideoConversion]: {
icon: mdiVideo,
title: $getJobName(JobName.VideoConversion),
subtitle: $t('admin.video_conversion_job_description'),
},
[JobName.StorageTemplateMigration]: {
icon: mdiFolderMove,
title: $getJobName(JobName.StorageTemplateMigration),
allowForceCommand: false,
description: StorageMigrationDescription,
},
[JobName.Migration]: {
icon: mdiFolderMove,
title: $getJobName(JobName.Migration),
subtitle: $t('admin.migration_job_description'),
allowForceCommand: false,
},
};
```
### Reproduction
https://github.com/immich-app/immich/blob/8cf33690b8ddd8e36bdf5d968c3d5700bfcc2949/web/src/lib/components/admin-page/jobs/jobs-panel.svelte#L60
### Logs
_No response_
### System Info
```shell
5.0.0-next.244
```
### Severity
annoyance | blocked by upstream | low | Critical |
2,515,075,054 | pytorch | Out of range value in target tensor succeed silently for CrossEntropyLoss in torch.compile | ### ๐ Describe the bug
When the target tensor contains an out-of-range value (e.g. -1) that is also not equal to the ignore_index (which by default is -100):
- it causes a CUDA error in eager mode
- it succeeds silently in torch.compile
Repro: https://gist.github.com/shunting314/e85116335f3abc4b055fdb5d8a8a6596
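A hedged minimal sketch of the eager-mode half (CPU shown here; the gist above has the full compiled-vs-eager comparison). `-1` is out of range for 5 classes and is not the default `ignore_index` of `-100`:

```python
import torch

logits = torch.randn(4, 5)
target = torch.tensor([0, 1, -1, 2])  # -1 is out of range and != ignore_index
loss_fn = torch.nn.CrossEntropyLoss()

raised = False
try:
    loss_fn(logits, target)
except (IndexError, RuntimeError) as e:
    raised = True
    print("eager raised:", type(e).__name__)
print(raised)
```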
### Error logs
_No response_
### Minified repro
_No response_
### Versions
.
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Critical |
2,515,081,959 | vscode | Inline chat scrolls out of view when making a response at the top of a file | 1. Place your cursor at the top of a file / "select all"
2. Make a query that returns a response, e.g. buttons (as seen in https://github.com/microsoft/vscode/issues/228038)
3. Inline chat scrolls outside the editor:

| bug,inline-chat | low | Minor |
2,515,111,941 | svelte | Component not defined | ### Describe the bug
I was trying to use TS in my components with Svelte 5 but it doesn't work and it says Button (my component) is not defined. When I stripped my project of TS, it was working.
### Reproduction
https://replit.com/@arashmahipal/Svelte-Bug?v=1
Code for page.svelte (doesn't show when I open the link):

```svelte
<script>
  import Dashboard from "$lib/Dashboard.svelte";
</script>

<Dashboard content={"Hello"} />
```
### Logs
_No response_
### System Info
```shell
System:
OS: macOS 14.2.1
CPU: (8) arm64 Apple M1
Memory: 145.47 MB / 8.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.16.0 - ~/.nvm/versions/node/v20.16.0/bin/node
npm: 10.8.3 - ~/.nvm/versions/node/v20.16.0/bin/npm
Browsers:
Safari: 17.2.1
npmPackages:
svelte: ^5.0.0-next.1 => 5.0.0-next.244
```
### Severity
blocking an upgrade | awaiting submitter | low | Critical |