| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,544,960,498 | tensorflow | Build Failure with ml_dtypes 0.4.0 on Power Architecture | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
master
### Custom code
No
### OS platform and distribution
linux/ppc64le
### Mobile device
_No response_
### Python version
3.9, 3.10, 3.11, 3.12
### Bazel version
6.5.2
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When attempting to build TensorFlow on the Power architecture with ml-dtypes==0.4.0, the build fails due to an incompatibility with numpy 2.0.0rc1. On Power, there is no wheel for ml_dtypes on PyPI, so the TensorFlow build tries to build ml_dtypes from source, which fails because of a build-time dependency on numpy==2.0.0rc1 (https://github.com/jax-ml/ml_dtypes/blob/v0.4.0/pyproject.toml#L52). numpy 2.0.0rc1 is not available for any architecture on PyPI.
We had raised this concern with ml_dtypes, and they have fixed it in the ml_dtypes 0.4.1 release.
To resolve this issue, we recommend updating the following files:
1. tensorflow/tools/pip_package/setup.py
Update the ml_dtypes version requirement from 0.4.0 to 0.4.1.
Current section: ml_dtypes >= 0.4.0, < 0.5.0
2. ci/official/requirements_updater/requirements.in
Update the ml_dtypes version requirement from 0.4.0 to 0.4.1.
Current section: ml_dtypes >= 0.4.0, < 0.5.0
After these changes, they should be reflected in the requirements_lock.txt files for the different Python versions.
### Standalone code to reproduce the issue
```shell
Architecture: ppc64le (Power)
Python Version: 3.x
TensorFlow Version: master
ml_dtypes Version: 0.4.0
pip Version: 24.2
Steps to reproduce:
bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu
The error related to numpy==2.0.0rc1 will occur during the installation.
```
### Relevant log output
```shell
ERROR: /tensorflow/WORKSPACE:53:13: fetching whl_library rule //external:pypi_ml_dtypes: Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/rules_python/python/private/pypi/whl_library.bzl", line 294, column 35, in _whl_library_impl
repo_utils.execute_checked(
File "/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/rules_python/python/private/repo_utils.bzl", line 182, column 29, in _execute_checked
return _execute_internal(fail_on_error = True, *args, **kwargs)
File "/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/rules_python/python/private/repo_utils.bzl", line 123, column 13, in _execute_internal
fail((
Error in fail: repo.execute: whl_library.ResolveRequirement(pypi_ml_dtypes, ml-dtypes==0.4.0): end: failure:
command: /root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/python_ppc64le-unknown-linux-gnu/bin/python3 -m python.private.pypi.whl_installer.wheel_installer --requirement ml-dtypes==0.4.0 --isolated --extra_pip_args "{\"arg\":[]}" --pip_data_exclude "{\"arg\":[]}" --environment "{\"arg\":{}}"
return code: 1
working dir: <default: /root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi_ml_dtypes>
timeout: 600
environment:
PYTHONPATH="/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/rules_python:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__build:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__click:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__colorama:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__importlib_metadata:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__installer:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__more_itertools:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__packaging:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__pep517:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__pip:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__pip_tools:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__pyproject_hooks:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__setuptools:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__tomli:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__wheel:/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/pypi__zipp"
CPPFLAGS=""
===== stdout start =====
Collecting ml-dtypes==0.4.0 (from -r /tmp/tmpae6v73x5 (line 1))
Using cached ml_dtypes-0.4.0.tar.gz (692 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'error'
===== stdout end =====
===== stderr start =====
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [3 lines of output]
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11
ERROR: Could not find a version that satisfies the requirement numpy==2.0.0rc1 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.25.0, 1.25.1, 1.25.2, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 2.0.0, 2.0.1, 2.0.2, 2.1.0rc1, 2.1.0, 2.1.1)
ERROR: No matching distribution found for numpy==2.0.0rc1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
```
| stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux | medium | Critical |
2,544,961,035 | flutter | GoRouter(Builder) FormatException on illegal route parameter | ### Steps to reproduce
- go_router: ^14.2.7
- go_router_builder: ^2.7.1
- Create a route with a typed field (e.g. int/double)
- run code generation for go_router_builder
- start web app
- navigate to the route but enter an unexpected type (e.g. String)
- FormatException is thrown and not handled (onException and errorBuilder/errorPageBuilder are not triggered)
### Expected results
onException, errorBuilder or errorPageBuilder are triggered
### Actual results
Crash of the whole application
### Code sample
<details open><summary>Code sample</summary>
Repo: https://github.com/YukiAttano/go_router_format_exception
```dart
// Route
class SomeRoute extends GoRouteData {
final int? id;
const SomeRoute(this.id);
@override
Widget build(BuildContext context, GoRouterState state) {
return Text(id.toString());
}
}
```
```dart
// Entry Route for Generation
@TypedShellRoute<MainRoute>(
routes: <TypedRoute<RouteData>>[
TypedGoRoute<SomeRoute>(path: "/some"),
],
)
class MainRoute extends ShellRouteData {
const MainRoute();
@override
Widget builder(BuildContext context, GoRouterState state, Widget navigator) {
return navigator;
}
}
```
```dart
// router
GoRouter(
routes: $appRoutes,
initialLocation: "/some",
redirect: (BuildContext context, GoRouterState state) {
if (state.fullPath == "/") return "/some";
}
)
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="512" alt="image" src="https://github.com/user-attachments/assets/8712d01b-9107-4c37-acc1-7d4efae33927">
https://github.com/user-attachments/assets/6405fbb7-4171-4ff7-b103-f69e943c7555
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[✓] Flutter (Channel stable, 3.24.2, on macOS 14.1 23B2073 darwin-arm64, locale de-DE)
• Flutter version 3.24.2 on channel stable at /Users/user/Library/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (3 weeks ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/user/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] Connected device (4 available)
• iPhone Small King (mobile) • 00008110-0002643136F2201E • ios • iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 14.1 23B2073 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.1 23B2073 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.58
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,platform-web,package,has reproducible steps,P2,p: go_router,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.26 | low | Critical |
2,545,004,720 | angular | Initializer APIs (input, output, queries) not integrating with `TestBed.override` in a jit & aot mix | Investigation result of: https://github.com/angular/angular/pull/57668
`TestBed.overrideComponent` seems to not work well with signal inputs, queries, output, or other non-decorator APIs.
That is because the `setClassMetadata` calls (generated for JIT) are used to re-compile the component with the overrides — **but** there is no decorator for the e.g. `input()` calls, hence this metadata is lost.
Options I could see:
- Smartly merging directly with the original component metadata in TestBed.
* Will be hard to detect output as it's non-distinguishable from decorator outputs.
* I guess, we could fully re-use inputs, queries etc. metadata instead of relying on prop decorators..?
- Adding synthetic prop decorators to `setClassMetadata`.
* Would be great to re-use the JIT transforms.. but those need an import manager; so needs a bit of refactoring!
**Note**: I do think this may not be a problem in Angular CLI applications because full tests are compiled with JIT transforms in place! Here it fails because we mix AOT targets with JIT targets; without any JIT transforms | area: compiler,core: queries,core: inputs / outputs,P3,compiler: jit,bug | low | Minor |
2,545,018,031 | ollama | Tesla P40 24G with Quadro M6000 24G cannot work together | ### What is the issue?
With the P40 and M6000 together, only the P40 works; the M6000's memory is not used by ollama, even after modifying ollama.service for multiple GPUs.
I tried the P40 with a 1080 Ti: it works fine with the default ollama.service. The P40 with an RTX 2060 also works fine with the default ollama.service.
Can anyone tell me why, and is there a chance to make them work together? Thanks.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.11 | bug,nvidia | low | Major |
2,545,023,949 | PowerToys | Better German Hyphenation | ### Description of the new feature / enhancement
Some in-word line breaks in the German UI are not what you would typically use in the German language.

German is notorious for having long words (Just look at "Donaudampfschifffahrtselektrizitätenhauptbetriebswerkbauunterbeamtengesellschaft") and to make these easier to read there are some guidelines on how to break words apart in case you cannot fit the word in one line.
In the picture provided above you can see that "Tastenkombinationsübersicht" was split after "ü" and the indicating "-" is not present.
Typically you would try to split between syllables, preferably even between entire sub-words.
In this case splitting it like this:
```
Tastenkombinations
-übersicht
```
Would be preferred.
The same can be said for "Registrierungsvorschau"
Here it would be split like this:
```
Registrierungs-
vorschau
```
There are ready-to-use hyphenation libraries which can achieve this result and would make words with line breaks easier to read for your German users.
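As a rough sketch of what such a library does: the usual approach is to insert soft hyphens (U+00AD) at valid break points, which the renderer only displays when the word actually wraps. The break position below is hard-coded; a real hyphenation library would supply it from a dictionary:

```python
SOFT_HYPHEN = "\u00ad"  # rendered only when the word is broken across lines

def insert_break_points(word: str, positions: list[int]) -> str:
    """Insert soft hyphens at the given indices (assumed to come from a
    hyphenation dictionary; hard-coded in this sketch)."""
    parts, prev = [], 0
    for pos in positions:
        parts.append(word[prev:pos])
        prev = pos
    parts.append(word[prev:])
    return SOFT_HYPHEN.join(parts)

# "Registrierungs|vorschau" - break between the two sub-words
hyphenated = insert_break_points("Registrierungsvorschau", [14])
print(hyphenated.replace(SOFT_HYPHEN, "-"))  # Registrierungs-vorschau
```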
### Scenario when this would be used?
This specific example can be seen every time Quick-Access is opened when the language is set to German.
There may also be more texts within PowerToys where a hyphenation library could be beneficial for improved readability for your German users.
### Supporting information
Here is a website that can be used to see possible places where German words can be split
https://www.silbentrennung24.de/ | Needs-Triage | low | Minor |
2,545,023,994 | ui | [bug]: Installation of shad cn ui packages | ### Describe the bug
I want to install one of the packages. I also use a VPN and a filter-breaker (proxy), but I get this error:
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
request to https://ui.shadcn.com/r/index.json failed, reason: connect ETIMEDOUT 5.9.210.65:443
### Affected component/components
ص
### How to reproduce
ص
### Codesandbox/StackBlitz link
ص
### Logs
```bash
ص
```
### System Info
```bash
د
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,545,065,637 | tauri | [bug] The Volume Mixer shows "Microsoft Edge WebView2" | ### Describe the bug
The Windows Volume Mixer shows audio playing from "Microsoft Edge WebView2" instead of the app.

### Reproduction
Play audio/video in the app.
### Expected behavior
It should show the app's name & logo.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 X64
✔ WebView2: 128.0.2739.79
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.5.1
- npm: 9.8.0
[-] Packages
- tauri-cli [RUST]: 1.6.2
[-] App
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: upstream,platform: Windows | low | Critical |
2,545,091,032 | ant-design | [Feature] ConfigProvider's padding, margin and borderRadius properties should also support string values | ### What problem does this feature solve?
Currently, some properties in Ant Design's ConfigProvider (such as padding, margin and borderRadius) only accept number values. This prevents developers from setting different values per direction (e.g. padding: "10px 20px 30px 40px").
To allow more flexible control over component styling, these properties should also accept string values, so that all four directions can be set freely, just like in CSS.
### What does the proposed API look like?
In the ThemeConfig configuration object, properties that currently accept only number values should accept both number and string.
```tsx
import React from "react";
import { ConfigProvider, Button } from "antd";
const themeConfig = {
components: {
Button: {
// number supported
padding: 10,
margin: 10,
// string supported
padding: "10px 20px 30px 40px",
margin: "5px 10px 15px 20px",
},
},
};
const App = () => (
<ConfigProvider theme={themeConfig}>
<Button>Custom Button</Button>
</ConfigProvider>
);
export default App;
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Minor |
2,545,142,759 | vscode | Semantic tokens types do not override "standard token type" and "font style" | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: `1.93.1`
- OS Version: Windows 11 Pro
Context: I'm working on a project to automatically generate TextMate grammars (for use in VS Code) from general context-free grammars (where productions can be annotated with scope names). As TextMate grammars are less expressive than CFGs, the generated TextMate grammars sometimes give rise to tokenization mistakes (relative to the original CFG). To alleviate this, there is an additional LSP server with a semantic highlighter (accurate CFG parser) to correct tokenization mistakes. The issue is that not all corrections provided by the semantic highlighter seem to take effect. I've tried to make a minimal example to reproduce the issue, using just the standard semantic highlighter for TypeScript and a small synthetic TextMate grammar for TS (in a separate extension).
Steps to Reproduce:
1. Create a new extension with the following files:
- `package.json`
```json
{
...
"contributes": {
"languages": [
{
"id": "typescript",
"aliases": ["TypeScript"],
"extensions": [".ts"]
}
],
"grammars": [
{
"language": "typescript",
"scopeName": "source.typescript",
"path": "./syntaxes/typescript.tmLanguage.json"
}
]
}
}
```
- `syntaxes/typescript.tmLanguage.json`
```json
{
"scopeName": "source.typescript",
"patterns": [
{
"match": "foo",
"name": "comment"
}
]
}
```
2. Run the extension (with all other extensions disabled)
3. Create the following file `test.ts`:
```
const foo = 5;
```
4. Set theme `Solarized Light` (or any other theme that italicizes comments)
5. Open the scope inspector:

**Expected:**
- "standard token type" is `Other` (I'd expect "semantic token type" to take precedence over the "textmate scope")
- "font style" is absent (as the strikethrough in the screenshot suggests), so `foo` is not italicized
**Actual:**
- "standard token type" is `Comment`
- "font style" is present and `italic` (if this is the intended behavior, then the strikethrough could be improved to better reflect this), so `foo` is italicized
| bug,grammar,semantic-tokens | low | Critical |
2,545,205,706 | rust | Slow code generated for _mm256_mulhi_epi16 | I suspect this is an issue in upstream LLVM. The sse2 version and the unsigned version (`_mm256_mulhi_epu16`) show the same problem. If a wider register is available (xmm -> ymm -> zmm) that will be used instead of splitting the values between 2 different ones.
### Code
https://godbolt.org/z/9Eqb45Keq
I tried this code:
```rust
pub unsafe fn bad(a: __m256i) -> __m256i {
let a = _mm256_and_si256(a, _mm256_set1_epi16(0x7FFF));
_mm256_mulhi_epi16(a, _mm256_set1_epi16(1000))
}
```
I expected to see this happen: more or less the same codegen as with a -1000 in multiplier
Instead, this happened: it looks like the vector is widened to i32 for no good reason.
### Version it worked on
It most recently worked on: Rust 1.74
### Version with regression
I checked on godbolt with 1.75-1.81 and whatever beta and nightly are today.
| A-LLVM,I-slow,P-medium,regression-untriaged,llvm-fixed-upstream,C-optimization | low | Major |
2,545,226,507 | rust | Bug using | more with Rust programs | https://github.com/01mf02/jaq/issues/183#issuecomment-2370359796
Hi, using Rust programs ([jaq](https://github.com/01mf02/jaq) in this case) with `| more` on Windows 7 gives wrong output:
`curl -s https://theunitedstates.io/congress-legislators/legislators-current.json | jaq.exe --color=never .[] | more`
shows:
````
?┼?=>?┼┼????=>?????
┼????????:?┼┼????????┼???????????┼???????>???????┼??????>????┼???>?┼┼┼??????
┼┼??????
┼??┼┼????????┼?????=>?????????┼????????>???┼┼????????????????┼┼?????>???┼????>????┼????????????┼┼???
?????=>???????
??┼??=>?┼┼????????=?┼┼???>?????┼????????>????????┼?????>?┼┼?????>???????┼┼????>??┼?
????>?┼┼?┼┼┼??=>????┼┼????>???????┼┼┼???????????┼┼???=>?~?┼┼┼?????>??┼┼┼??????????┼┼?
┼?
┼┼????????┼┼┼????????????┼┼??=>???????┼┼┼???????
┼┼?????????┼┼????>??????┼??┼┼?┼┼┼??=>????┼┼????>???????┼┼┼???????????┼┼???=>?~?┼┼┼?????>??┼┼┼???????
???┼┼??┼??┼┼????????┼┼┼???????????
┼┼??=>???????┼┼┼????????┼┼?????????┼┼????>??????┼??┼┼?┼┼┼??=>???
┼┼????>???????┼┼┼???????????┼┼???=>?~?┼┼┼?????>??┼┼┼??????????┼┼?
┼?
┼┼????????┼┼┼????????????┼┼??=>???????┼┼┼???????
┼┼?????????┼┼????>??????┼┼┼??????????????????????┼??┼┼?┼┼┼??=>????┼┼????>???????┼┼┼???????????┼┼???=
>?~?┼┼┼?????>??┼┼┼??????????
┼┼???>??????????????????┼┼??┼??┼┼????????┼┼┼????????????┼┼??=>???????┼┼┼????????┼┼????>?
┼┼????>??????┼┼┼??????????????????┼┼?????>????????????????????????????┼┼???=>????????┼┼┼???????????
┼┼????????????????????????????
┼┼???????????????????????┼??┼┼?┼┼┼??=>???
┼┼????>???????┼┼┼???????????┼┼???=>?~?┼┼┼??????????
┼┼????>??┼┼???>????????????????┼┼┼?????????????????????????????????┼┼┼????????????
┼┼???>????????┼┼┼???????>????????????????????┼┼┼???=>????4?????????????┼┼????????????
-- Más --
````
More details on https://github.com/01mf02/jaq/issues/183
Cheers. | C-bug,O-windows-7 | low | Critical |
2,545,257,411 | ollama | qwen2.5coder /api/generate odd behavior when `suffix` is present but empty string. | ### What is the issue?
Please check the difference between
```
echo -e $(curl http://localhost:11434/api/generate -d '{
"model": "qwen2.5-coder:1.5b",
"prompt": "def fib(", "suffix": " "
}' | jq -s 'map(.response) | join("")')
```
and
```
echo -e $(curl http://localhost:11434/api/generate -d '{
"model": "qwen2.5-coder:1.5b",
"prompt": "def fib(", "suffix": ""
}' | jq -s 'map(.response) | join("")')
```
In the second case the model does not act as FIM as I would expect (and as the template suggests). In the first case it does.
Maybe this is intended, but I would not have expected it.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.11 | bug | low | Minor |
2,545,333,239 | neovim | snippet: parsing error if placeholder or choice node contains special text | ### Problem
Parsing of a snippet which contains "tabstop-like" text inside a placeholder fails with an error when it should not.
### Steps to reproduce
1. `:lua vim.snippet.expand('a(${1:x$2})')`
2. Observe the error: `E5108: Error executing lua .../usr/share/nvim/runtime/lua/vim/lsp/_snippet_grammar.lua:177: snippet parsing failed`
### Expected behavior
Expand into `a(x$2)`, i.e. parse `x$2` as regular text (as it says in [LSP spec grammar](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#snippet_syntax)).
**Edit**: it is a bit more complicated than that and might depend on LSP specification version. See [this comment](https://github.com/neovim/neovim/issues/30495#issuecomment-2379408776).
### Nvim version (nvim -v)
NVIM v0.11.0-dev-771+g67d6b6f27
### Vim (not Nvim) behaves the same?
No (doesn't have this functionality)
### Operating system/version
EndeavourOS Linux x86_64 (6.10.10-arch1-1)
### Terminal name/version
ghostty
### $TERM environment variable
xterm-ghostty
### Installation
appimage | bug,snippet | low | Critical |
2,545,348,447 | ollama | OpenAI client expects embeddings to be base64 encoded string, not json array of floats | ### What is the issue?
v1/embeddings support was added recently, but the C# OpenAI client expects the API to return a base64 string containing the embeddings as 4-byte floating-point values.
The deserializer eventually calls [this function](https://github.com/openai/openai-dotnet/blob/main/src/Custom/Embeddings/Embedding.cs#L107)
```csharp
private static ReadOnlyMemory<float> ConvertToVectorOfFloats(BinaryData binaryData)
{
ReadOnlySpan<byte> base64 = binaryData.ToMemory().Span;
// Remove quotes around base64 string.
if (base64.Length < 2 || base64[0] != (byte)'"' || base64[base64.Length - 1] != (byte)'"')
{
ThrowInvalidData();
}
base64 = base64.Slice(1, base64.Length - 2);
// Decode base64 string to bytes.
byte[] bytes = ArrayPool<byte>.Shared.Rent(Base64.GetMaxDecodedFromUtf8Length(base64.Length));
OperationStatus status = Base64.DecodeFromUtf8(base64, bytes.AsSpan(), out int bytesConsumed, out int bytesWritten);
if (status != OperationStatus.Done || bytesWritten % sizeof(float) != 0)
{
ThrowInvalidData();
}
```
While I find this... goofy to say the least, this API is not currently compatible with OpenAI's embedding endpoint because of this difference.
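For reference, the wire format the client decodes is just the raw little-endian float32 bytes, base64-encoded. A minimal sketch of encoding and decoding that format (a round-trip demonstration, not ollama's actual implementation):

```python
import base64
import struct

def encode_embedding(vec: list[float]) -> str:
    # Pack as little-endian float32, then base64 - the shape the C# client's
    # ConvertToVectorOfFloats above expects.
    return base64.b64encode(struct.pack(f"<{len(vec)}f", *vec)).decode("ascii")

def decode_embedding(b64: str) -> list[float]:
    raw = base64.b64decode(b64)
    assert len(raw) % 4 == 0  # same sanity check as the C# client
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

vec = [0.5, -1.25, 3.0]  # values chosen to be exactly representable in float32
roundtrip = decode_embedding(encode_embedding(vec))
print(roundtrip)  # [0.5, -1.25, 3.0]
```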
Relates to: #5285
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.11 | bug,api | low | Minor |
2,545,369,780 | flutter | AppBar scrolled under state not working when scrolling is inside PageView | ### Steps to reproduce
Run the attached code sample
### Expected results
`AppBar` shows elevation when `ListView` content is scrolled
### Actual results
`AppBar` shows no elevation when the `ListView` content is scrolled. It works only when the `ListView` is taken out of the `PageView`.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter issue',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const HomePageView(),
);
}
}
class HomePageView extends StatefulWidget {
const HomePageView({super.key});
@override
State<HomePageView> createState() => _HomePageViewState();
}
class _HomePageViewState extends State<HomePageView> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Scrolled under')),
body: PageView(
children: [
_buildLongListPage(),
_buildLongListPage(),
_buildLongListPage(),
],
),
);
}
Widget _buildLongListPage() {
return ListView(
children: List.generate(40, (index) {
return Card(
child: Padding(
padding: const EdgeInsets.all(16),
child: Text('Item $index'),
),
);
}),
);
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Versione 10.0.19045.4894], locale it-IT)
• Flutter version 3.24.1 on channel stable at C:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (5 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\Alessandro\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[!] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.4)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35312.102
X Visual Studio is missing necessary components. Please re-run the Visual Studio installer for the "Desktop development with C++" workload, and include these components:
MSVC v142 - VS 2019 C++ x64/x86 build tools
- If there are multiple build tool versions available, install the latest
C++ CMake tools for Windows
Windows 10 SDK
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = C:\Program Files\Android\Android Studio
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] VS Code (version 1.71.2)
• VS Code at C:\Users\Alessandro\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.48.0
[√] Connected device (4 available)
• Pixel Fold (mobile) • 35181FDHS00206 • android-arm64 • Android 14 (API 34)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Versione 10.0.19045.4894]
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.138
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,f: scrolling,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.26 | low | Major |
2,545,388,201 | angular | new function for signal/rxjs interop | ### Which @angular/* package(s) are relevant/related to the feature request?
@angular/core
### Description
I propose a function that runs a computation returning an Observable and converts the result to a signal.
Typical workflow:
- inputs = signals
- when they change, run some computation (typically, http requests)
- convert to a signal
### Proposed solution
```typescript
export function computedFromObservable<T>(computation: () => Observable<T>, options?: CreateComputedOptions<Observable<T>>): Signal<T | undefined> {
const computedSignal = computed(computation, options);
const computedObservable = toObservable(computedSignal);
const observable = computedObservable.pipe(
switchAll()
);
return toSignal(observable);
}
```
which can be used this way:
```typescript
this.productPage = computedFromObservable(() => this.productService.find$(this.productSearch(), this.pagination()));
```
... replacing this more complicated construct:
```typescript
const productParams: Signal<[ProductSearch, Pagination]> = computed(() => [this.preparationSearch(), this.pagination()]);
const productPageObservable = toObservable(productParams).pipe(
switchMap(([productSearch, pagination]) => this.productService.find$(productSearch, pagination))
);
this.productPage = toSignal(productPageObservable);
```
### Alternatives considered
Originally, I had something like this, but it lacked the power of `computed()` (a single signal as input often requires creating an additional computed() signal anyway, it is not as lazy, ...):
```typescript
export function pipeSignalWithDefault<T, U>(signal: Signal<T>, pipe: OperatorFunction<T, U>, initialValue: U): Signal<U> {
const signalAsObservable = toObservable(signal);
const signalPipe = signalAsObservable.pipe(pipe);
return toSignal(signalPipe, {initialValue});
}
``` | area: core,cross-cutting: observables,core: reactivity,cross-cutting: signals,core: rxjs interop | low | Major |
2,545,454,714 | kubernetes | Events can not reference objects that use a different name validation | Seen in https://github.com/kubernetes/kubernetes/issues/127588
```
E0924 12:16:50.003934 1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Event \"fd00:10:233::103.17f82d400041d81f\" is invalid: metadata.name: Invalid value: \"fd00:10:233::103.17f82d400041d81f\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')" event="&Event{ObjectMeta:{fd00:10:233::103.17f82d400041d81f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2024-09-24 12:16:50.002035963 +0000 UTC m=+179.166665408,Series:nil,ReportingController:ipallocator-repair-controller,ReportingInstance:ipallocator-repair-controller-svc-control-plane,Action:IPAddressAllocation,Reason:IPAddressNotAllocated,Regarding:{IPAddress fd00:10:233::103 86ced933-1da5-4fac-b2e3-ab849185e2d5 networking.k8s.io/v1beta1 7070 },Related:nil,Note:IPAddress: fd00:10:233::103 for Service default/v6-service appears to have leaked: cleaning up,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
```
An IPAddress object can contain an IPv6 address such as `fd00:10:233::103`.
The event recorder creates a new Event object named after the referenced object:
> r.recorder.Eventf(ipAddress, nil, v1.EventTypeWarning, "IPAddressNotAllocated", "IPAddressAllocation", "IPAddress %s appears to have been modified, not referencing a Service %v: cleaning up", ipAddress.Name, ipAddress.Spec.ParentRef)
https://github.com/kubernetes/kubernetes/blob/464a994a10b71c45583f3426fd970291f8a5b756/staging/src/k8s.io/client-go/tools/events/event_recorder.go#L67-L75
I can see two options:
1. adapt the controller code to normalize the IPv6 addresses to meet the Event name convention
2. modify `r.recorder.Eventf` to make valid names on the Events objects
The result will be the same with either option; the difference is that option 2 would cover all objects whose names are not valid under the existing Event name validation.
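A minimal sketch of what option 1 could look like (illustrative Python only, not the actual controller code, which is Go; the helper name and the `:` → `-` mapping are assumptions for the example): normalize the IPv6-derived name so that it passes the RFC 1123 subdomain check quoted in the error message above.

```python
import re

# RFC 1123 subdomain regex, as quoted in the apiserver error message above.
RFC1123_SUBDOMAIN = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)

def normalize_for_event_name(name: str) -> str:
    """Map an object name that may contain ':' (e.g. an IPv6 address)
    to a string that satisfies the Event name validation.

    NOTE: this mapping is lossy in general ("a:b" and "a-b" collide);
    a real implementation would need a collision-free encoding.
    """
    candidate = name.lower().replace(":", "-")
    # Names must start and end with an alphanumeric character.
    return candidate.strip("-.")

# The IPv6-derived name from the rejected event becomes valid:
print(normalize_for_event_name("fd00:10:233::103.17f82d400041d81f"))
```

For example, `fd00:10:233::103` maps to `fd00-10-233--103`, which matches the validation regex, while the raw address does not.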
/sig instrumentation
| sig/instrumentation,triage/accepted | low | Major |
2,545,473,480 | neovim | diagnostics: vim.diagnostic.is_shown(), vim.diagnostic.is_hidden() or something similar to test if there are hidden diagnostics | ### Problem
Currently you can't check if there are diagnostics that were hidden via
`vim.diagnostic.hide()`, which makes it impossible to toggle between
diagnostics being hidden/shown.
### Expected behavior
I would expect there to be an API to check this, considering there is such a check for the enabled state of `vim.diagnostic.enabled()` via `vim.lsp.is_enabled()`. | enhancement,diagnostic | low | Minor |
2,545,513,598 | vscode | my code running so slow | Type: <b>Performance Issue</b>
So basically, while writing code and using VS Code everything is really fast, or at least I don't notice anything, but when I want to run my code it takes a lot of time. For example, a basic command such as `print("hello")` takes about 10 seconds, which is very slow considering the task. I would really appreciate any answer that may help fix this issue.
Thank you in advance.
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz (8 x 1896)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.88GB (0.74GB free)|
|Process Argv|--crash-reporter-id dc7a43be-909a-455a-808f-a7a39a389875|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 120 12132 code main
0 176 3744 gpu-process
0 146 7360 extensionHost [1]
0 7 8328 "c:\Users\Abdo Store\.vscode\extensions\ms-python.python-2024.14.1-win32-x64\python-env-tools\bin\pet.exe" server
0 10 16628 C:\WINDOWS\system32\conhost.exe 0x4
0 138 13712 electron-nodejs (bundle.js )
0 81 8648 shared-process
0 45 16984 fileWatcher [1]
0 17 20444 crashpad-handler
0 75 21364 ptyHost
0 7 10420 conpty-agent
0 64 11000 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\Abdo Store\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\browser\media\shellIntegration.ps1\" } catch {}"
0 7 14088 conpty-agent
0 64 17008 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\Abdo Store\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\browser\media\shellIntegration.ps1\" } catch {}"
0 70 21160 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\Abdo Store\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\browser\media\shellIntegration.ps1\" } catch {}"
0 18 2288 "C:\Users\Abdo Store\AppData\Local\Microsoft\WindowsApps\python3.12.exe" "c:/Users/Abdo Store/Untitled-2.py"
0 7 21752 conpty-agent
0 28 21860 utility-network-service
0 207 21972 window [1] (Untitled-2.py - Visual Studio Code)
```
</details>
<details>
<summary>Workspace Info</summary>
```
;
```
</details>
<details><summary>Extensions (3)</summary>
Extension|Author (truncated)|Version
---|---|---
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.9.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
f3je6385:31013174
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
cf971741:31144450
ei213698:31121563
iacca2:31144504
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,545,517,919 | rust | [QNX] `Backtrace::{capture,force_capture}` triggers OOM on ARM64 when called from a thread other than `main` | The following program triggers an OOM condition when executed on a `aarch64-unknown-nto-qnx710` target. The `x86_64-pc-nto-qnx710` target is not affected by the issue. Other QNX targets have not been checked.
``` rust
use std::backtrace::Backtrace;
// does OOM
fn main() {
let handle = std::thread::spawn(|| {
let trace = Backtrace::force_capture();
});
handle.join().unwrap();
}
```
``` console
$ ./gh130784
memory allocation of 3758096384 bytes failed
```
Calling the `Backtrace::force_capture` function from the `main` function / thread does not result in an OOM condition.
``` rust
// no OOM
fn main() {
let trace = Backtrace::force_capture();
println!("{trace}");
}
```
``` console
$ ./gh130784-workaround
(..)
2: std::backtrace::Backtrace::create
at ./rustc/1.83.0/library/std/src/backtrace.rs:331:13
3: error_with_backtrace_outputs_correctly::main
4: std::sys::backtrace::__rust_begin_short_backtrace
5: std::rt::lang_start::{{closure}}
(..)
```
The issue also affects the `Backtrace::capture` API when the `RUST_BACKTRACE` env var is set:
``` rust
use std::backtrace::Backtrace;
// does OOM
fn main() {
std::env::set_var("RUST_BACKTRACE", "1");
let handle = std::thread::spawn(|| {
let trace = Backtrace::capture();
});
handle.join().unwrap();
}
```
It's also worth noting that the built-in backtrace support (`RUST_BACKTRACE=1`) works fine on `aarch64-unknown-nto-qnx710` since the `/tests/ui/backtrace` tests pass. Also, backtraces from panicking threads are printed as expected.
``` rust
// no OOM
fn main() {
std::env::set_var("RUST_BACKTRACE", "1");
let handle = std::thread::spawn(|| {
foo();
});
assert!(handle.join().is_err());
}
#[inline(never)]
fn foo() {
panic!("backtrace works: {:x}", foo as fn() as usize);
}
```
``` console
$ ./rust-backtrace-works
thread '<unnamed>' panicked at <some-path>.rs:28:5:
backtrace works: 20223d58a8
stack backtrace:
0: rust_begin_unwind
at ./rustc/1.83.0/library/std/src/panicking.rs:665:5
1: core::panicking::panic_fmt
at ./rustc/1.83.0/library/core/src/panicking.rs:74:14
2: <crate_name>::foo
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
## Changelog
- (2024-09-24 19:50:38+02:00) note that `Backtrace::capture` is also affected. note that the built-in `RUST_BACKTRACE` functionality is not affected when other threads panics.
- (2024-09-24 19:27:45+02:00) minimized repro example to a single call to `Backtrace::force_capture`. before it was replicating the `std::error::tests::error_with_backtrace_outputs_correctly_with_one_source` function which uses unstable libstd API.
cc @flba-eb @gh-tr @jonathanpallant (QNX 7.1 maintainers)
FYI @nyurik this may be relevant to the `aarch64-unknown-nto-qnx700` target | C-bug,T-libs,O-neutrino | low | Critical |
2,545,523,978 | storybook | Addon Test: Viewport not always respected? | We have an issue in the rsc demo.
To reproduce, check out this PR:
https://github.com/storybookjs/storybook-rsc-demo/pull/15
And run:
`pnpm exec vitest run`
You will get the following error:
```
pnpm exec vitest run
(node:4166) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Re-optimizing dependencies because lockfile has changed
RUN v2.0.5 /Users/kasperpeulen/code/github/rsc-demo
Browser runner started at http://localhost:5173/
4:05:10 PM [vite] ✨ new dependencies optimized: @storybook/experimental-addon-test/internal/test-utils, react/jsx-dev-runtime, next/image, next/dist/compiled/react, sb-original/default-loader, sb-original/image-context
4:05:10 PM [vite] ✨ optimized dependencies changed. reloading
stderr | components/sidebar.stories.tsx > NoteChangedAnimation
Warning: The current testing environment is not configured to support act(...)
✓ components/auth-button.stories.tsx (2)
✓ components/logout-button.stories.tsx (1)
✓ components/note-editor.stories.tsx (2)
✓ components/note-list-skeleton.stories.tsx (1)
✓ components/note-list.stories.tsx (2)
✓ components/note-preview.stories.tsx (1)
✓ components/note-ui.stories.tsx (3)
✓ components/search.stories.tsx (3)
❯ components/sidebar.stories.tsx (5) 15035ms
✓ Default
✓ Empty
✓ NotesExpanded
× NoteChangedAnimation 15002ms
× ToggleSidebarOnMobile
✓ app/note/edit/page.stories.tsx (3) 731ms
✓ app/note/[id]/page.stories.tsx (7) 748ms
✓ app/note/edit/[id]/page.stories.tsx (4) 1076ms
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ Failed Tests 2 ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
FAIL components/sidebar.stories.tsx > NoteChangedAnimation
Error: Test timed out in 15000ms.
If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout".
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[1/2]⎯
FAIL components/sidebar.stories.tsx > ToggleSidebarOnMobile
TestingLibraryElementError: Unable to find an accessible element with the role "menubar"
There are no accessible roles. But there might be some inaccessible roles. If you wish to access them, then set the `hidden` option to `true`. Learn more about this here: https://testing-library.com/docs/dom-testing-library/api-queries#byrole
Ignored nodes: comments, script, style
<div />
❯ play components/sidebar.stories.tsx:69:31
67| },
68| play: async ({ canvas, step, userEvent }) => {
69| const searchInput = canvas.getByRole('menubar')
| ^
70|
71| await step('Sidebar is initially visible', async () => {
❯ ../../../../../node_modules/.vite/deps/@storybook_experimental-addon-test_internal_test-utils.js:51:98
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[2/2]⎯
Test Files 1 failed | 11 passed (12)
Tests 2 failed | 32 passed (34)
Start at 16:05:06
Duration 21.76s (transform 13ms, setup 6.60s, collect 1.71s, tests 18.35s, environment 0ms, prepare 2.52s)
```
It is hard to debug, because the errors go away when running with headless: false.
My suspicion is that it is related to viewports, because the element that is not found is only visible in the mobile viewport. | bug,addon: test | low | Critical |
2,545,537,363 | langchain | AzureMLChatOnlineEndpoint for BLOOM with CustomOpenAIChatContentFormatter HTTPError: HTTP Error 424: Failed Dependency | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os  # missing in the original snippet; required for os.environ below

from langchain_community.chat_models.azureml_endpoint import (
    AzureMLEndpointApiType,
    CustomOpenAIChatContentFormatter,
    AzureMLChatOnlineEndpoint,
    AzureMLBaseEndpoint,
)
from langchain_core.messages import HumanMessage, SystemMessage

llm = AzureMLChatOnlineEndpoint(
    endpoint_url="https://myproject.xxx.inference.ml.azure.com/score",
    endpoint_api_type=AzureMLEndpointApiType.dedicated,
    endpoint_api_key=os.environ["BLOOM_API_KEY"],
    content_formatter=CustomOpenAIChatContentFormatter(),
    model_kwargs={"temperature": 0.5},
)

response = llm.invoke(
    [
        ("system", "You are an AI tasked to convert complex text into simpler, more readable text."),
        ("user", "Hi"),
    ]
)

print(response)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
Cell In[88], line 18
     10 llm = AzureMLChatOnlineEndpoint(
     11     endpoint_url="https://bloom7b.swedencentral.inference.ml.azure.com/score",
     12     endpoint_api_type=AzureMLEndpointApiType.dedicated,
    (...)
     15     model_kwargs={"temperature": 0.5}
     16 )
---> 18 response = llm.invoke(
     19     [
     20         ("system", "You are an AI tasked to convert complex text into simpler, more readable text."),
     21         ("user", "Hi")
     22     ]
     23 )
     25 print(response)

File .venv\Lib\site-packages\langchain_core\language_models\chat_models.py:284, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
--> 284 self.generate_prompt(

File .venv\Lib\site-packages\langchain_core\language_models\chat_models.py:784, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
--> 784 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File .venv\Lib\site-packages\langchain_core\language_models\chat_models.py:641, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
--> 641 raise e

File .venv\Lib\site-packages\langchain_core\language_models\chat_models.py:631, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
--> 631 self._generate_with_cache(

File .venv\Lib\site-packages\langchain_core\language_models\chat_models.py:853, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
--> 853 result = self._generate(

File .venv\Lib\site-packages\langchain_community\chat_models\azureml_endpoint.py:277, in AzureMLChatOnlineEndpoint._generate(self, messages, stop, run_manager, **kwargs)
--> 277 response_payload = self.http_client.call(

File .venv\Lib\site-packages\langchain_community\llms\azureml_endpoint.py:57, in AzureMLEndpointClient.call(self, body, run_manager, **kwargs)
---> 57 response = urllib.request.urlopen(

File ~\.pyenv\pyenv-win\versions\3.11.3\Lib\urllib\request.py:216, in urlopen(url, data, timeout, cafile, capath, cadefault, context)
File ~\.pyenv\pyenv-win\versions\3.11.3\Lib\urllib\request.py:525, in OpenerDirector.open(self, fullurl, data, timeout)
File ~\.pyenv\pyenv-win\versions\3.11.3\Lib\urllib\request.py:634, in HTTPErrorProcessor.http_response(self, request, response)
File ~\.pyenv\pyenv-win\versions\3.11.3\Lib\urllib\request.py:563, in OpenerDirector.error(self, proto, *args)
File ~\.pyenv\pyenv-win\versions\3.11.3\Lib\urllib\request.py:496, in OpenerDirector._call_chain(self, chain, kind, meth_name, *args)
File ~\.pyenv\pyenv-win\versions\3.11.3\Lib\urllib\request.py:643, in HTTPDefaultErrorHandler.http_error_default(self, req, fp, code, msg, hdrs)
--> 643 raise HTTPError(req.full_url, code, msg, hdrs, fp)

HTTPError: HTTP Error 424: Failed Dependency
```
### Description
Trying to connect to BLOOM through an endpoint deployed in Azure AI Studio fails with HTTP Error 424: Failed Dependency.
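For what it's worth, an HTTP 424 response from a deployed endpoint usually carries a body explaining which dependency failed, and `urllib` keeps that body on the `HTTPError` object even though the traceback above discards it. A self-contained sketch of reading it (the local stub server and its JSON message are illustrative stand-ins for the real Azure endpoint, not its actual response):

```python
import http.server
import threading
import urllib.error
import urllib.request

class StubHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the deployed endpoint: always answers 424 with a body."""
    def do_GET(self):
        body = b'{"error": "upstream model container failed to start"}'
        self.send_response(424)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/score")
    code, detail = None, None
except urllib.error.HTTPError as err:
    code = err.code               # 424
    detail = err.read().decode()  # the server's explanation, absent from the traceback
finally:
    server.shutdown()
```

`HTTPError` is itself a file-like response object, so `err.read()` recovers the server-side reason that the bare traceback hides.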
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.5
> langchain: 0.3.0
> langchain_community: 0.3.0
> langsmith: 0.1.125
> langchain_huggingface: 0.1.0
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> huggingface-hub: 0.25.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.47.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> sentence-transformers: 3.1.1
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> tokenizers: 0.19.1
> transformers: 4.44.2 | 🤖:bug | low | Critical |
2,545,538,163 | vscode | Not possible to DnD an image into inline chat | Testing #229263
Linux: DnD an image into inline chat opens the image in the editor instead of adding it as an attachment.
The same happens for quick chat.
cc. @benibenj
| feature-request,editor-drag-and-drop,inline-chat | low | Major |
2,545,560,860 | rust | TypeId exposes placeholders type generics with `-Znext-solver` | So, this isn't necessarily a bug, it's just interesting behavior and I'm not sure if it's intended:
```rust
#![feature(const_type_id, generic_const_exprs)]
const fn type_id<T: 'static>() -> u128 {
unsafe { std::mem::transmute::<_, u128>(std::any::TypeId::of::<T>()) }
}
const fn c<X: 'static>() -> usize {
// A: the evaluated program panicked at '63102938254854923095314763309601158984'
const_panic::concat_panic!(type_id::<X>());
}
pub fn u<X: 'static>() {
let _: [(); c::<X>()];
println!("{}", type_id::<X>()); // B
}
```
This happens without even calling `u`. The values printed at A and B (by commenting out the call to `c` and adding a call to `u` with any arbitrary type) are different. I'm posting this issue because I'm not sure whether there's ICE potential here from the values being different in a const vs. runtime context.
I stumbled across this while looking for a way to make a "commutative type" i.e. where `Com<A, B>` and `Com<B, A>` are the same type. It only works in `-Znext-solver` and I believe that's due to `-Znext-solver` eagerly evaluating stuff using these placeholder types?
```rust
#![feature(const_type_id, core_intrinsics, generic_const_exprs)]
#![allow(warnings)]
struct Foo; struct Bar;
struct Bool<const B: bool>;
trait TrueOrFalse {
type Ite<Then, Else>;
}
impl TrueOrFalse for Bool<true> {
type Ite<Then, Else> = Then;
}
impl TrueOrFalse for Bool<false> {
type Ite<Then, Else> = Else;
}
type Ite<const B: bool, T, E> = <Bool<B> as TrueOrFalse>::Ite<T, E>;
trait Map<T: 'static, U: 'static> where {
type Output;
}
impl<T: 'static, U: 'static> Map<T, U> for (T, U) {
type Output = Ite<{
std::intrinsics::type_id::<T>() > std::intrinsics::type_id::<U>()
}, (U, T), (T, U)>;
}
pub fn u<X: 'static, Y: 'static>() {
let _x: <(X, Y) as Map<X, Y>>::Output = loop {};
let mut _y: <(Y, X) as Map<Y, X>>::Output = loop {};
_y = _x;
}
pub fn f() {
u::<Foo, Bar>();
}
```
If this *does* happen to be fixed in a way which breaks the code above it would be nice to have a (perma-unstable) way to order types.
# Meta
```
rustc 1.79.0-nightly (aed2187d5 2024-04-27)
binary: rustc
commit-hash: aed2187d53b8789e3a37f50ae36f894a2a679077
commit-date: 2024-04-27
host: x86_64-pc-windows-msvc
release: 1.79.0-nightly
LLVM version: 18.1.4
``` | C-bug,requires-nightly,F-generic_const_exprs | low | Critical |
2,545,571,924 | vscode | Should opening "getting started with accessibility features" walkthrough start reading out the first block? | Testing #228766
I found it strange that the first block isn't read to the user when the walkthrough is opened. I also don't get info about the buttons present within the step "block". I'm not very experienced with VoiceOver, so maybe that's expected.
https://github.com/user-attachments/assets/bcef7b0c-76a1-4189-9c82-abdfb11ed869
Version: 1.94.0-insider
Commit: f35c3823e3b7ea4c52f7fee4659bcce39b42ce9e
Date: 2024-09-24T05:04:12.797Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0 | accessibility,under-discussion | low | Minor |
2,545,574,834 | pytorch | Wrong results when sampling from the beta distribution for small alpha=beta | ### 🐛 Describe the bug
There seems to be a numerical problem when sampling from `torch.distributions.Beta` with $\alpha=\beta\to 0$:
```py
import torch
import matplotlib.pyplot as plt
x = 1/1000
pdf = torch.distributions.Beta(torch.tensor([x]), torch.tensor([x]))
samples = pdf.sample((10_000,)).numpy()
plt.hist(samples, bins=10, density=True)
```
results in

This appears wrong: as the two parameters approach zero, the distribution concentrates at 0 and 1, so the density should have essentially no mass around 1/2.
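The histogram can be checked against a reference sampler that stays stable for tiny shape parameters. Below is a minimal sketch (plain Python, illustration only — not PyTorch's actual implementation) of Johnk's Beta sampler carried out in log space, so that `U ** (1/a)` never underflows to a 0/0; with alpha = beta = 1e-3 essentially all samples land at 0 or 1 and almost none near 1/2:

```python
import math
import random

def beta_sample(a: float, b: float, rng: random.Random) -> float:
    """Johnk's Beta sampler in log space, stable for tiny shape parameters."""
    while True:
        log_x = math.log(rng.random()) / a  # log of U ** (1/a)
        log_y = math.log(rng.random()) / b  # log of V ** (1/b)
        m = max(log_x, log_y)
        # log(X + Y), computed stably even when X and Y underflow to 0.0
        log_sum = m + math.log(math.exp(log_x - m) + math.exp(log_y - m))
        if log_sum <= 0.0:                    # accept iff X + Y <= 1
            return math.exp(log_x - log_sum)  # X / (X + Y)

rng = random.Random(0)
samples = [beta_sample(1e-3, 1e-3, rng) for _ in range(10_000)]
middle = sum(0.25 < s < 0.75 for s in samples) / len(samples)
# For alpha = beta = 1e-3 the mass in (0.25, 0.75) is roughly 0.1%,
# so `middle` should be close to 0 — unlike the large spurious bar
# near 1/2 in the histogram above.
```

Comparing a histogram of `samples` against the one produced by `torch.distributions.Beta` makes the discrepancy obvious.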
### Versions
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Fedora Linux 39 (Thirty Nine) (x86_64)
GCC version: (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)
Clang version: 17.0.6 (Fedora 17.0.6-2.fc39)
CMake version: version 3.27.7
Libc version: glibc-2.38
Python version: 3.12.4 (main, Jun 7 2024, 00:00:00) [GCC 13.3.1 20240522 (Red Hat 13.3.1-1)] (64-bit runtime)
Python platform: Linux-6.9.9-100.fc39.x86_64-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 8000
Nvidia driver version: 555.58.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
Stepping: 7
CPU(s) scaling MHz: 26%
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (36 instances)
L1i cache: 1.1 MiB (36 instances)
L2 cache: 36 MiB (36 instances)
L3 cache: 49.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.0
[pip3] torchvision==0.18.0
[pip3] torchviz==0.0.2
[conda] Could not collect
cc @fritzo @neerajprad @alicanb @nikitaved | module: numerical-stability,module: distributions,triaged | low | Critical |
2,545,622,064 | ant-design | Typography component no longer inherits font in v5 | ### Reproduction link
[](https://stackblitz.com/edit/antd-reproduce-5x-fguhno?file=demo.tsx)
### Steps to reproduce
See reproduction link for v5 (https://stackblitz.com/edit/antd-reproduce-5x-fguhno?file=demo.tsx) and
v4 (https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-vc2m69?file=%2Findex.js%3A10%2C46).
In v5 the text in the Typography component has a 14px font size, because that is what is generated for the Typography component. In v4, though, it has 20px, inherited from the parent.
### What is expected?
I would expect a switch in the theme config for the Typography component that would allow returning to the previous behavior.
### What is actually happening?
Font size is not inherited in Typography component.
| Environment | Info |
| --- | --- |
| antd | 5.19.3 |
| React | 18.3.1 |
| System | macOS Sonoma 14.5 |
| Browser | Google Chrome Version 126.0.6478.127 (Official Build) (arm64) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Minor |
2,545,625,517 | material-ui | [material-ui][Select] selecting using the spacebar does not work | ### Steps to reproduce
Link to live example: https://mui.com/material-ui/react-select/
Steps:
Try selecting an option in MUI Select using the spacebar. It only works with Enter.
This is a bad experience for keyboard users, most of whom expect the spacebar to work here. It is especially unpleasant with multi select.
With Autocomplete it works fine.
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: spacebar, select | bug 🐛,accessibility,component: select,package: material-ui | low | Minor |
2,545,641,314 | ant-design | Not possible to specify one axis padding for theme | ### Reproduction link
[](https://stackblitz.com/edit/antd-reproduce-5x-eojitb?file=demo.tsx)
### Steps to reproduce
Inspect the dropdown of the select.
### What is expected?
It should be possible to specify padding in the form '0 20px', so that only one axis is affected.
### What is actually happening?
Since I can only specify a number, it is not possible to style only one axis.
| Environment | Info |
| --- | --- |
| antd | 5.19.3 |
| React | 18.3.1 |
| System | macOS Sonoma 14.5 |
| Browser | Google Chrome Version 126.0.6478.127 (Official Build) (arm64) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Major |
2,545,709,096 | vscode | Image preview transparency shows triangles at certain zoom levels | 
Zoomed:

| bug,help wanted,image-preview | low | Minor |
2,545,714,509 | vscode | Brackets should not be colorized in comments | Testing #229392
Default:

Tree sitter:

Theme is Dark Modern | bug,tree-sitter | low | Minor |
2,545,775,650 | node | Slow performances when running tests with `--experimental-test-coverage` | ### Version
v22.9.0
### Platform
```text
Darwin N4V4PGFGPT 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:16:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8112 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Run tests with the `--experimental-test-coverage` option. Compared to other solutions (e.g. running Node tests and collecting coverage with `nyc`), this is much slower. For instance, on a repo with ~800 tests, collecting coverage with `nyc` takes 77s, while running with `--experimental-test-coverage` takes 148s.
### How often does it reproduce? Is there a required condition?
I can reproduce it every time I run the Node tests with the native test coverage option.
### What is the expected behavior? Why is that the expected behavior?
Performance should be at least equal to (ideally better than) third-party dependencies like `nyc`.
### What do you see instead?
An average time spent per test of 185ms vs 96ms.
### Additional information
I tried running with/without additional options (e.g. `--test-coverage-exclude`, `--test-coverage-include`, `--test-coverage-branches`, `--test-coverage-functions`, `--test-coverage-lines`) but it didn't help. The performance issue seems to be related only to the algorithm that's triggered by the `--experimental-test-coverage` option.
2,545,790,596 | material-ui | [material-ui][Select] Aria-controls references invalid id when not expanded | ### Steps to reproduce
aria-controls references an invalid id when the MUI Select component is not expanded.

Link to live example: (required)
Go to: https://mui.com/material-ui/react-select/
Steps:
1. Download ARC Toolkit: https://chromewebstore.google.com/detail/arc-toolkit/chdkkkccnlfncngelccgbgfmjebmkmce?hl=en and install it
2. Expand Inspect and select ARC Toolkit and Run tests on https://mui.com/material-ui/react-select/
3. Look into the error for "ARIA attribute value is incorrect
Description: The value :R9alal9h9l6kud6: is not allowed on the aria-controls attribute(s)."
It is also observed using Access Assistant.
PFA the ARC snapshot:
### Current behavior
ARIA attribute value is incorrect is observed on the select component upon ADA Testing using automation tools such as ARC.
### Expected behavior
aria-controls should refer to the id of the menu item in the non-expanded state as well.
### Context
_No response_
### Your environment
_No response_
**Search keywords**: select | accessibility,component: select,package: material-ui | low | Critical |
2,545,812,044 | pytorch | [RFC] Integrate NCCL scalable init API | ### 🚀 The feature, motivation and pitch
The scalable init API is available in NCCL 2.23.
It uses multiple roots to speed up communicator bootstrap.
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Minor |
2,545,814,121 | vscode | [tunnels] Remote Tunnels stays active after logging out | Testing #229414
Not sure whether this is expected
- Log into account
- Enable remote tunnels and open it in vscode.dev
- Log out
- Remote tunnels is still active until I manually turn it off


| polish,remote-tunnel | low | Minor |
2,545,853,772 | vscode | disambiguate links in chat with the same name | Testing #229436
similar to https://github.com/microsoft/vscode/issues/229511, but in the chat response as well

| bug,panel-chat | low | Minor |
2,545,874,374 | TypeScript | Function expression or method is not inferable when we have mapped type with a conditional type | ### 🔎 Search Terms
conditional method arrow function
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ
### ⏯ Playground Link
[Playground Link](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAsghmAPAFQHxQLxSgbygKGwCcI4ATAewDsAbEKAbQAUoBLKqAawhAoDMoyALoAuQcyFQIAD2AQqZAM5QARhQo1SHAPxQARGo1a9Y-RWAALCET34AvgRlgKRYFD4BXKgGNgraqoerDRkAMIUALYRcAqIAGJSsvJKUABKEN4uZIiKwETsAOYANFBenFQUAO5UqKgAFCpBIdYAgkQFimJ4hO5e3mJ1YGJxAJSY6ABuFKxkPWBwRHARnbAIiAByFACSVHzW8bX2Yzj49vigkFBx6pi47upihpoxUHYA3PgA9J9XfX4BThIikU-g4lAgygqbkqLk4+EawTCkWiCjqJ2wnh8YkxvlBUEGw3Ux1eRTmCyWK3R2HuFDEBnUzyotmwdiOp2+sAglgoKUoVAA5NDYfCmkiojEyGiejiCVciXc7KTsPNFssuj0MQ99E9jD1WXYRuyfm0iFVej5-hwYUROIoRYjwuLUVScQMhnKKGMMOg8IqyarKRqaXSdTFma8jkA)
### 💻 Code
```ts
type Map<T> = {
readonly [P in keyof T]: T[P] extends boolean ? "boolean": "other"
}
export function buildCommand<F extends Record<string, unknown>>(builderArgs: {
func: (p: F) => void
params: Map<NoInfer<F>>
}) {
}
type Foo = { foo: boolean };
// Function expression does not work
buildCommand({
func: function (p: Foo) { },
params: {
foo: "boolean"
}
})
// Methods don't work
buildCommand({
func(p: Foo) { },
params: {
foo: "boolean"
}
})
// Arrow function works
buildCommand({
func: (p: Foo) => { },
params: {
foo: "boolean"
}
})
```
### 🙁 Actual behavior
For the first two calls we get an error while the third one is successful
### 🙂 Expected behavior
All three calls should be successful
### Additional information about the issue
Removing the constraint from `F` ([Playground Link](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAsghmAPAFQHxQLxSgbygKGwCcI4ATAewDsAbEKAbQAUoBLKqAawhAoDMoyALoAuQcyFQIAD2AQqZAM5QARhQo1SHAPxQARGo1a9Y-RWAALCET34AvgRlgKRYFD4BXKgGNgraqoerDRkAMIUALYRcAqIAGKoABQqQSHWAIJEAOaKYniE7l7eYolgYnEAlJjoAG4UrGQFYHBEcBG5sAiIAHIUAJJUfNbxqKj2VTj49vigkFBx6pi47upihpoxUHYA3PgA9HvzRX4BTiSKiv4clBDKVOZQAO4unPgpwWGR0QqJk9iePjEAN8VygpXK6gmWwANE0Wm0On9sCsKGIDOoNlRbNg7OMpgdYBBLBQlFBKFQAORuZ5EV7vELhKIxMi-ArA8HzSHLOyw7DNVrtPIFf6rfTrYwFXF2Cr4w6ZIgUR6FHwnDg0ziKN6pT5Mn5I4ElMqcihVDDoPA8uECxHClFo8UxbFbcZAA)) or changing it to `F extends Record<keyof F, unknown>` ([Playground Link](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAsghmAPAFQHxQLxSgbygKGwCcI4ATAewDsAbEKAbQAUoBLKqAawhAoDMoyALoAuQcyFQIAD2AQqZAM5QARhQo1SHAPxQARGo1a9Y-RWAALCET34AvgRlgKRYFD4BXKgGNgraqoerDRkAMIUALYRcAqIAGJSsvJKUABKEN4uZIjcvAIJAGRQisBE7ADmADRQXpxUFADuVKioABQqQSHWAIJE5YpieITuXt5irWBicQCUmOgAbhSsZMNgcERwEQOwCIgAchQAklR81vEt9rM4+Pb4oJBQceqYuO7qYoaaMVB2ANz4AHoAY9Rn4Ak4SIpFP4OJQIMp6m4Gi5OPgOsEwpFogpWtdsJ4fGICb4YVAJlN1FcfpVVutNts8dg3hQxAZ1F8qLZsHZLjcgbAIJYKClKFQAORIlFozqYqIxMi44bE8mPSmvOw07BrDZbQbDfHvfSfYzDHl2aZ84G9IiNEY+MEcZFETiKaUY8JynGM4njSaqiizDDoPAa2k6hn65ms40xLk-S5AA)) will remove the error. | Help Wanted,Possible Improvement | low | Critical |
2,545,887,244 | pytorch | [ONNX] Handle autocast HOP | Currently the exporter does not handle higher-order ops (HOPs). Autocasts are expressed as HOPs:
```python
# Inside the ExportedProgram
...
class submod_1(torch.nn.Module):
def forward(self, expand_1: "f32[1, 64, 1]", _to_copy_1: "f32[1, 1, 512]"):
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:209 in forward, code: freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
_to_copy_2: "f32[1, 64, 1]" = torch.ops.aten._to_copy.default(expand_1, dtype = torch.float32); expand_1 = None
_to_copy_3: "f32[1, 1, 512]" = torch.ops.aten._to_copy.default(_to_copy_1, dtype = torch.float32); _to_copy_1 = None
matmul: "f32[1, 64, 512]" = torch.ops.aten.matmul.default(_to_copy_2, _to_copy_3); _to_copy_2 = _to_copy_3 = None
transpose: "f32[1, 512, 64]" = torch.ops.aten.transpose.int(matmul, 1, 2); matmul = None
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:210 in forward, code: emb = torch.cat((freqs, freqs), dim=-1)
cat: "f32[1, 512, 128]" = torch.ops.aten.cat.default([transpose, transpose], -1); transpose = None
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:211 in forward, code: cos = emb.cos()
cos: "f32[1, 512, 128]" = torch.ops.aten.cos.default(cat)
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:212 in forward, code: sin = emb.sin()
sin: "f32[1, 512, 128]" = torch.ops.aten.sin.default(cat); cat = None
return (cos, sin)
# No stacktrace found for following nodes
submod_3 = self.submod_1
wrap_with_autocast = torch.ops.higher_order.wrap_with_autocast('cuda', None, False, None, submod_3, expand_1, _to_copy_1); submod_3 = expand_1 = _to_copy_1 = None
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:211 in forward, code: cos = emb.cos()
cos: "f32[1, 512, 128]" = wrap_with_autocast[0]
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:212 in forward, code: sin = emb.sin()
sin: "f32[1, 512, 128]" = wrap_with_autocast[1]; wrap_with_autocast = None
# File: /home/xadupre/.local/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:215 in forward, code: cos = cos * self.attention_scaling
mul_1: "f32[1, 512, 128]" = torch.ops.aten.mul.Tensor(cos, 1.0); cos = None
...
``` | module: onnx,triaged | low | Minor |
2,545,896,932 | vscode | SCM Graph - Title and Icons of SCM Graph actions | Testing #229364
The view works as expected. Possible improvements:
- Although I don't have a concrete proposal, "Go to Current History Item" feels like an awkward name to me since "Current" and "history" seem to exclude each other. "Go to checked out commit" seems more appropriate.
- When looking for the "Go to Current History Item" in the title bar I intuitively went to the "Fetch from all remotes" action. The bull's eye didn't convey any meaning to me.

| ux,scm,under-discussion | low | Major |
2,545,921,204 | go | x/mobile: gomobile bind with go1.23 no exported names in the package | ### Go version
go version go1.23.0.darwin-amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/Users/2me2/Library/Caches/go-build'
GOENV='/Users/2me2/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/2me2/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/2me2/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/2me2/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.0.darwin-amd64'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/2me2/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.0.darwin-amd64/pkg/tool/darwin_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/2me2/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/2me2/src/play/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/q_/16fwpggs3_z0skpfkj4_cnfm0000gn/T/go-build3768234229=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
go install golang.org/x/mobile/cmd/gomobile@latest
gomobile init
go get golang.org/x/mobile/cmd/gomobile@latest
gomobile bind -target ios -x ./mobile
inside ./mobile is a file mobile.go that has the following:
```
package mobile
func Blah() int {
return 1
}
```
go.mod specifically specifies go1.23.0
### What did you see happen?
```
~/src/play > gomobile bind -target ios -x ./mobile
GOMOBILE=/Users/2me2/go/pkg/gomobile
WORK=/var/folders/q_/16fwpggs3_z0skpfkj4_cnfm0000gn/T/gomobile-work-2833558045
rm -r -f "Mobile.xcframework"
GOOS=ios CGO_ENABLED=1 $GOPATH/bin/gobind -lang=go,objc -outdir=$WORK/ios -tags=ios 2me2test/mobile
GOOS=ios CGO_ENABLED=1 $GOPATH/bin/gobind -lang=go,objc -outdir=$WORK/iossimulator -tags=ios 2me2test/mobile
rm -r -f "$WORK"
gomobile: /Users/2me2/go/bin/gobind -lang=go,objc -outdir=/var/folders/q_/16fwpggs3_z0skpfkj4_cnfm0000gn/T/gomobile-work-2833558045/iossimulator -tags=ios 2me2test/mobile failed: exit status 1
no exported names in the package "2me2test/mobile"
no exported names in the package "2me2test/mobile"
no exported names in the package "2me2test/mobile"
no exported names in the package "2me2test/mobile"
```
### What did you expect to see?
Expected everything to work without the "no exported names in the package" errors. It works just fine if I change the version in go.mod to 1.22.x; I only see this issue with 1.23.x.
2,545,923,044 | flutter | Fix SliverReorderableList Flickering For Async List Updates | ### Steps to reproduce
When using state management tools like Riverpod or Bloc, `ReorderableListView` may flicker on reorder if the list comes from such a provider. This is because the `onReorder` callback may call a provider to update the list, but the list's new state has not yet been applied since the provider is async. Another scenario is `onReorder` calling an async function.
Examples outlined here: https://github.com/rrousselGit/riverpod/discussions/642 and https://github.com/felangel/bloc/issues/4013
**Possible Solutions**
- Add an option to cache the list, as outlined in the solution in the link
- Add an option to delay the final rebuild by a time interval or a number of frames.
- Change `onReorder` to return a `FutureOr` and call the final rebuild once it completes.
| c: new feature,framework,f: material design,c: proposal,P2,team-framework,triaged-framework | low | Major |
2,545,930,654 | vscode | IME editor position is non-deterministic when triggered from outside the viewport | Testing #229383
Not sure if EditContext should handle this better than the traditional textarea, but it seems the position of the IME editor is still non-deterministic if the focus is on a view line that's outside of the viewport:
- Put cursor in a line
- Scroll it outside of the viewport
- Type
- The view line is scrolled into the viewport, but the IME editor seems to be positioned randomly
**Textarea**
<img width="446" alt="image" src="https://github.com/user-attachments/assets/415104c5-76d9-4b40-8781-da8ce3350635">
**EditContext**
<img width="454" alt="image" src="https://github.com/user-attachments/assets/7140b87a-1d07-4008-9fa8-2f32f3677c48">
<img width="465" alt="image" src="https://github.com/user-attachments/assets/c3a48598-9b8b-4be2-a38f-0f31604d5121">
<img width="503" alt="image" src="https://github.com/user-attachments/assets/6c553da9-e790-47f8-b5d7-7f639ea3820a">
| bug,upstream-issue-linked,editor-edit-context | low | Minor |
2,545,992,667 | godot | Row of black pixels appearing and disappearing at bottom of editor window / fonts deforming. | ### Tested versions
Reproducible from 4.0 to 4.4
Also present in 3.6
### System information
Godot v4.4.dev2 - Windows 10.0.22631 / AtlasOS - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4050 Laptop GPU (NVIDIA; 31.0.15.4680) - AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 Threads)
### Issue description
There is a row of black (#000000) pixels appearing and disappearing at the bottom of the editor window. (Either that, or the editor is shrinking vertically by 1 pixel.) This seems to cause the whole editor to scale incorrectly, resulting in deformed fonts in various places. It does this in a project selection window too.
Monitoring the update spinner suggests it pops in and out at the same time the editor updates.
The row won't appear when the editor is consistently updating quickly, for example if Editor Settings > Editor > Update Continuously is set to true or I force the editor to update quickly the usual way, i.e. spamming a letter D in the code editor or moving the mouse rapidly between two containers of the GUI.
After a short time with no updates, the row of pixels appears and the editor stays in its deformed state.
It's **unaffected** by:
- where the mouse is on the screen or whatever the mouse is currently hovering over
- what I'm actually typing
- whether the taskbar is set to hide or not
- window size, maximized or not
- fullscreen toggle
- the asset drawer being at the bottom or not
I would have assumed it's an issue with my new PC, but there are no other apps demonstrating similar behaviour, and I have a lot of 'em.
Some examples of fonts deforming (the first one is a little hard to see; it's the top pixel row of "TileMapDual.gd"):


I tried to get screen capture video of the issue but it doesn't show in video for some reason. No idea what to do with that information either. Something to do with the video capture resolution? My screen is 1920x1080 and it's supposedly capturing at 1920x1080, so...
I'm not a particularly techie person. Any ideas for specs to include or further things to rule out would be appreciated.
### Steps to reproduce
Be Godot'ing. I have no idea.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:gui | low | Minor |
2,545,994,551 | godot | Godot 4.3 Code Editor Freezes When Typing Special Characters on Ubuntu | ### Tested versions
Reproducible in: Godot 4.3.stable
### System information
OS: Ubuntu 22.04 CPU: Intel Core i7-3770 GPU: NVIDIA GTX 1060 3GB (nvidia-driver-535) Rendering backend: Vulkan (Forward+)
### Issue description
When typing special characters (such as "É") in the Godot 4.3 integrated script editor on Ubuntu, the code editor freezes. The rest of the engine remains functional, but the editor becomes unresponsive until it's restarted. This issue persists even when the engine language is set to English.
Expected behavior: The code editor should not freeze when typing special characters.
### Steps to reproduce
Open Godot 4.3 on Ubuntu.
Create new project
Open the script editor.
Type any special character like "É" or "Á".
Observe that the editor freezes, while the rest of the engine remains functional.
### Minimal reproduction project (MRP)
N/A - The issue occurs in a new project without any specific project files. | bug,topic:editor,topic:input | low | Major |
2,546,019,440 | vscode | repeat action for Jupyter walkthrough | Testing #229460
In the Jupyter walkthrough, as pictured below, there are two "actions" you can take from the first submenu. Both the "Jupyter extension" in blue text and the "search jupyter extension" button do the same action, which feels redundant.

| polish,under-discussion,notebook | low | Major |
2,546,031,019 | go | cmd/compile: possible PGO miscompilation in biogo-igor benchmark | The biogo-igor benchmark part of Sweet has been failing regularly at tip in the end-to-end test. It specifically fails when being run under PGO.
Broken out from #56958. | NeedsInvestigation,compiler/runtime | low | Critical |
2,546,050,292 | rust | Unnecessary loop unrolling to handle tail when tail length has a smaller known size | In the following code, the first `while` loop should process 8 bytes at a time and exit early if an invalid byte is found.
The remaining tail is known to be `bytes.len() % 8` bytes long, but auto-vectorization unrolls it to test again for 32 bytes and 8 bytes at a time.
https://rust.godbolt.org/z/z8hnb9PGY
```rust
pub const fn is_ascii(bytes: &[u8]) -> bool {
const N1: usize = 8;
let mut i = 0;
while i + N1 <= bytes.len() {
let chunk_end = i + N1;
let mut count = 0;
while i < chunk_end {
count += (bytes[i] <= 127) as u8;
i += 1;
}
if count != N1 as u8 {
return false;
}
}
// Process the remaining `bytes.len() % N` bytes.
let mut is_ascii = true;
while i < bytes.len() {
is_ascii &= bytes[i] <= 127;
i += 1;
}
is_ascii
}
``` | A-LLVM,I-slow,C-optimization | low | Minor |
2,546,073,207 | kubernetes | ConfigMap subpath mount could have transient "no such file or directory: unknown" error if it's patched before container startup | ### What happened?
If the ConfigMap is patched between pod startup (volume mount) and container startup, there's a chance that the container startup will fail with the error "no such file or directory: unknown". The error is transient and is recovered on container restart. However, if the container can't be restarted or the pod is deleted on container startup failure, it will appear as a final error.
### What did you expect to happen?
Ideally we want to avoid this mount error from happening.
### How can we reproduce it (as minimally and precisely as possible)?
OK, I think I have a reliable repro:
first, save the following to `subpath.yaml`:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pod-{{number}}
spec:
volumes:
- configMap:
name: extra-cfg
name: extra-cfg
containers:
- name: test
image: ubuntu:latest
command: ["bash", "-c"]
args:
- |
echo "test test-pod-{{number}} running"
sleep 25
resources:
requests:
cpu: 10m
volumeMounts:
- name: extra-cfg
mountPath: /etc/extra.ini
subPath: extra.ini
---
apiVersion: v1
data:
extra.ini: |
somedata-{{number}}
kind: ConfigMap
metadata:
name: extra-cfg
```
then, run the following script:
```sh
for i in {1..20}
do
scp subpath.yaml tmp.yaml
sed -i -e "s@{{number}}@$i@g" "tmp.yaml"
k apply -f tmp.yaml
done
```
this will reliably reproduce the issue. However, from what I can see, this issue is transient and the pods will recover on the next container restart.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
```
</details>
### Cloud provider
<details>
reproducible on GKE but not KIND, assuming that's because it's harder to hit this race condition on kind
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,priority/important-longterm,triage/accepted | low | Critical |
2,546,087,466 | pytorch | The doc of `linalg.norm()` should say there is `input` parameter instead of `A` parameter for `linalg.norm()` | ### 📚 The doc issue
[The doc](https://pytorch.org/docs/stable/generated/torch.linalg.norm.html) of `linalg.norm()` says there is an `A` parameter for `linalg.norm()`, as shown below:
> torch.linalg.norm(A, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → [Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)
> A ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – tensor of shape (*, n) or (*, m, n) where * is zero or more batch dimensions
But the `A` parameter doesn't work, as shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([-2., -1., 0., 1., 2., 3.])
# ↓
linalg.norm(A=my_tensor) # Error
```
> TypeError: linalg_norm() missing 1 required positional arguments: "input"
While the `input` parameter works, as shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([-2., -1., 0., 1., 2., 3.])
# ↓↓↓↓↓
linalg.norm(input=my_tensor)
# tensor(4.3589)
```
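As a side note, the returned value 4.3589 is just the Euclidean (2-)norm of the vector, which can be sanity-checked without torch:

```python
import math

values = [-2., -1., 0., 1., 2., 3.]
# The default of linalg.norm on a 1-D tensor is the 2-norm: sqrt(sum(x^2)).
norm = math.sqrt(sum(v * v for v in values))
print(round(norm, 4))  # 4.3589
```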
### Suggest a potential alternative/fix
So, [the doc](https://pytorch.org/docs/stable/generated/torch.linalg.norm.html) of `linalg.norm()` should say there is an `input` parameter for `linalg.norm()`, as shown below:
> torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → [Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)
> input ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – tensor of shape (*, n) or (*, m, n) where * is zero or more batch dimensions
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @albanD @svekars @brycebortree @sekyondaMeta | module: docs,triaged,module: linear algebra,actionable,module: python frontend | low | Critical |
2,546,106,391 | vscode | Attachment pill renders incorrectly in chat history | Testing #229342
I'd expect to see the file icon and no file prefix here

Attaching via the 📎 yields:

| bug,panel-chat | low | Minor |
2,546,114,730 | flutter | RenderParagraph reports inconsistent heights when empty vs non-empty | Given a `RenderParagraph`, which can be achieved with a standard `Text` widget, the reported height of a line of text has a 1px difference depending on whether the text is empty, or non-empty.
This is a problem for text editing situations in which the surrounding UI is based on the intrinsic height of the text.
Consider a chat UI. At the bottom of the screen there's a text editor to write your chat message. The chat message UI makes itself as tall as the message that the user is writing (e.g., expands as the user inserts more lines). This message UI has some hint text that says "Send a message". The user places the caret in the editor, types a character, and the whole editor shrinks by 1px.
While this shift might seem minor, it's very noticeable, and we've got customer complaints tied to this.
Possibly related issue: https://github.com/flutter/flutter/issues/107196
Reproduction code:
```dart
import 'package:flutter/material.dart';
void main() {
print("Running app");
runApp(MyApp());
}
class MyApp extends StatelessWidget {
MyApp({super.key});
final _emptyTextKey = GlobalKey();
final _nonEmptyTextKey = GlobalKey();
void _checkHeights() {
final emptyTextBox =
_emptyTextKey.currentContext!.findRenderObject() as RenderBox;
final nonEmptyTextBox =
_nonEmptyTextKey.currentContext!.findRenderObject() as RenderBox;
print(
"Empty text height: ${emptyTextBox.size.height}, Non-empty text height: ${nonEmptyTextBox.size.height}");
}
@override
Widget build(BuildContext context) {
print("Building the widget tree");
WidgetsBinding.instance.addPostFrameCallback((_) {
_checkHeights();
});
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Column(children: [
Text(key: _emptyTextKey, ""),
Text(key: _nonEmptyTextKey, "F"),
]),
),
);
}
}
```
Reproduction output from Mac desktop (also verified on iOS simulators):
```
Empty text height: 21.0, Non-empty text height: 20.0
``` | framework,a: typography,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Critical |
2,546,131,721 | godot | OS.execute_with_pipe() will not process escaped arguments, different from the way OS.execute() handles them. | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Linux Kubuntu 24.04 KDE Plasma Version: 5.27.11 KDE Frameworks Version: 5.115.0 Qt Version: 5.15.13 Kernel Version: 6.8.0-39-generic (64-bit) Graphics Platform: X11 Processors: 16 × 12th Gen Intel® Core™ i7-1260P Memory: 15.3 GiB of RAM
### Issue description
These two functions should process arguments the same way:
```
# Will process escaped arguments and normal arguments
# E.g.
# args=["-lsa", "\"/home/user1/New\\ Folder/\""]
# bin ="ls"
code = OS.execute(bin, args, value, true, false )
```
```
# Will NOT process escaped arguments, but only normal arguments.
# E.g.
# args=["-lsa", "\"/home/user1/New\\ Folder/\""]
# bin ="ls"
info = OS.execute_with_pipe(bin, args)
```
### Steps to reproduce
See attached code:
[exec_pipe_demo.zip](https://github.com/user-attachments/files/17119289/exec_pipe_demo.zip)
```gdscript
extends Control
var pipe
var stderr
var pid
var thread
var info
var bin
var args
# Show the difference in processing escaped arguments of OS.execute() and
# OS.execute_with_pipe()
func _ready():
# Fails with pipe execute, but works with blocking execute
args=["-lsa", "\"/home/user1/New\\ Folder/\""]
# Works with either execute,
#args=["-lsa", "/home/user1/New Folder/"]
bin = "ls"
if true: # Run blocking execute test
print("OS.execute test:")
var value=[]
var code: int
code = OS.execute(bin, args, value, true, false )
print("OS Code=",code)
for v in value:
print(v)
if true: # Run pipe execute test
print("OS.execute_with_pipe:")
info = OS.execute_with_pipe(bin, args)
pipe = info["stdio"]
stderr=info["stderr"]
pid=info["pid"]
thread = Thread.new()
thread.start(_thread_func)
thread.wait_to_finish()
get_window().close_requested.connect(clean_thread)
pass
func _thread_func():
# read stdin and add to TextEdit.
var line = ""
var pipe_err
while pipe.is_open():
pipe_err=pipe.get_error()
if pipe_err == OK:
line=pipe.get_line()
print(line)
pass
else:
line=stderr.get_line()
if line!="":
print(line)
else:
break
clean_thread()
func clean_thread():
pipe.close()
OS.kill(pid)
```
### Minimal reproduction project (MRP)
[exec_pipe_demo.zip](https://github.com/user-attachments/files/17119289/exec_pipe_demo.zip)
1. Unzip the attached file above including the scene.
2. Add it to an empty project .
3. Create a folder with a blank in the name. E.g. `/home/user1/New Folder`
4. Adjust the 2nd argument in `args=["-lsa", "\"/home/user1/New\\ Folder/\""]` to match the new folder path, keeping the string escaped.
5. Run the scene and check result in Output
You will notice that for the escaped path, `OS.execute()` processes the arguments, while `OS.execute_with_pipe()` fails. This shows the arguments are processed differently between the two: in cases where the arguments are escaped due to spaces or quotes, the code will fail with the pipe version.
Another example is this
```
args=["-c2", "\"yahoo\\.com\""]
bin="ping"
```
The above will fail when using the pipe version but not with the blocking execute. I understand it may not make sense to escape yahoo, but I am pointing out that escaped strings should work the same way with both `OS.execute()` and `OS.execute_with_pipe()`.
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.3.274 - Forward+ - Using Device #0: Intel - Intel(R) Graphics (ADL GT2)
OS.execute test:
OS Code=0
PING yahoo.com (98.137.11.164) 56(84) bytes of data.
64 bytes from media-router-fp73.prod.media.vip.gq1.yahoo.com (98.137.11.164): icmp_seq=1 ttl=50 time=100 ms
64 bytes from media-router-fp73.prod.media.vip.gq1.yahoo.com (98.137.11.164): icmp_seq=2 ttl=50 time=55.9 ms
--- yahoo.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 55.939/78.157/100.375/22.218 ms
OS.execute_with_pipe:
ping: "yahoo\.com": Name or service not known
```
This works with both:
```
args=["-c2", "yahoo.com"]
bin="ping"
```
| bug,topic:core | low | Critical |
2,546,146,685 | go | x/tools/gopls: Hover: panic in lookup{ObjectByName,DocLinkSymbol} | ```
#!stacks
("bug.Reportf" || "runtime.sigpanic") &&
("lookupObjectByName" || "lookupDocLinkSymbol")
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
```go
func lookupObjectByName(pkg *cache.Package, pgf *parsego.File, name string) types.Object {
scope := pkg.Types().Scope()
fileScope := pkg.TypesInfo().Scopes[pgf.File]
pkgName, suffix, _ := strings.Cut(name, ".")
obj, ok := fileScope.Lookup(pkgName).(*types.PkgName) // <--- panic
```
Variant after the function was renamed:
```go
func lookupDocLinkSymbol(pkg *cache.Package, pgf *parsego.File, name string) types.Object {
scope := pkg.Types().Scope()
prefix, suffix, _ := strings.Cut(name, ".")
// Try treating the prefix as a package name,
// allowing for non-renaming and renaming imports.
fileScope := pkg.TypesInfo().Scopes[pgf.File]
if fileScope == nil {
// This is theoretically possible if pgf is a GoFile but not a
// CompiledGoFile. However, we do not know how to produce such a package
// without using an external GoPackagesDriver.
// See if this is the source of golang/go#70635
if slices.Contains(pkg.CompiledGoFiles(), pgf) {
bug.Reportf("missing file scope for compiled file") // <--- reached
} else {
bug.Reportf("missing file scope for non-compiled file")
}
```
This stack `l9BGAQ` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-09-18.json):
- `crash/crash`
- [`runtime.gopanic:+69`](https://cs.opensource.google/go/go/+/go1.23.0:src/runtime/panic.go;l=804)
- `runtime.panicmem:=262`
- [`runtime.sigpanic:+19`](https://cs.opensource.google/go/go/+/go1.23.0:src/runtime/signal_unix.go;l=900)
- [`go/types.(*Scope).Lookup:+10`](https://cs.opensource.google/go/go/+/go1.23.0:src/go/types/scope.go;l=83)
- [`golang.org/x/tools/gopls/internal/golang.lookupObjectByName:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/golang/comment.go;l=148)
- [`golang.org/x/tools/gopls/internal/golang.parseDocLink:+58`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/golang/comment.go;l=129)
- [`golang.org/x/tools/gopls/internal/golang.hover:+43`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/golang/hover.go;l=177)
- [`golang.org/x/tools/gopls/internal/golang.Hover:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/golang/hover.go;l=109)
- [`golang.org/x/tools/gopls/internal/server.(*server).Hover:+30`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/server/hover.go;l=51)
- [`golang.org/x/tools/gopls/internal/protocol.serverDispatch:+335`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/protocol/tsserver.go;l=503)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.ServerHandler.func3:+5`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/protocol/protocol.go;l=160)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.handshaker.func4:+52`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:gopls/internal/lsprpc/lsprpc.go;l=509)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.MustReplyHandler.func1:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:internal/jsonrpc2/handler.go;l=35)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.AsyncHandler.func2.2:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.16.1:internal/jsonrpc2/handler.go;l=103)
- `runtime.goexit:+0`
```
golang.org/x/tools/gopls@v0.16.1 go1.23.0 linux/amd64 vscode (1)
```
Dups: z79pXw p3-DZQ | NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,546,151,107 | vscode | ASCII characters should not be part of the editing undo/redo stack | Testing #229383
* Use Chinese IME on macOS
* Put cursor in the middle of a document
* Type `nihao`, press space to accept
* With `textarea`, undo once, all Chinese characters are removed
* :bug: With EditContext, undo once, Chinese characters are replaced with `nihao`, and you can continue to undo each ASCII character
**Textarea**
https://github.com/user-attachments/assets/05abddb9-a83f-45d1-903e-9998ce79b992
**EditContext**
https://github.com/user-attachments/assets/7405f435-a523-49d0-b0f9-c0f03ee1c6a1
| bug,editor-input-IME,undo-redo | low | Critical |
2,546,167,924 | TypeScript | Predicates break arbitrarily when intersected with some types. (Probably when the predicate is generic). | ### 🔎 Search Terms
predicate intersection
### 🕗 Version & Regression Information
latest version v5.6.2
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.2#code/JYOwLgpgTgZghgYwgAgCpQK5gBYE8AKUEAJsAnJMgN4CwAUMo8gDyoB8AFAPYBcaAlHy7JgAZzTIAZNQC+Abnoz69UJFiIUAUQC2ABzC4AkuGjwk1RcroHdKAMJcQosJgRguUAMrAA5iDg+GETIALwWDEwgEADuyBwAdIlwUD6ifHAguADaALqCyBm4CnTyVsQQCAA2ySgIjs4FUFB8HM5QoD7IAD7IGCDlMKAk-LnF9OVVNch1TmDIukSk5JAAjHzoWHiEJGQUEMUzDYOVaiQroY1Q8cdqHAs7yxAr-HLIAPRvyNEeANbibR1clZxhVqsFDnN7ks9gAmdaYHAERa7SjSHT6IwmdRIA71OY3aAkGEXZJXAlQO7Ix4wl7vT7ELgQUQgADkc2+UB+wLoEzBtTx8ypewAzPDNkiHnspLJcbNkOSSMKSU1rsATtBKZLIMLaR8vr9-i5ATlubyphDBVqIAAWMWI7bQ1HIByzVzuLy+fyBIiyo5q07Ea3Ksn+jVQlE23X0xnMtn6zn0IA
### 💻 Code
```ts
interface TruthyPredicate {
<T>(o: T): o is T & {};
}
interface EmptyInterface {
}
type ConstructorSignature = {
    new (...args: any[]): any;
};
declare const arr: (string | undefined)[];
declare const predicate1: TruthyPredicate;
const filtered1 = arr.filter(predicate1); // works string[]
declare const predicate2: TruthyPredicate & EmptyInterface;
const filtered2 = arr.filter(predicate2); // doesn't work
declare const predicate3: TruthyPredicate & {} /* Empty object type instead of interface */;
const filtered3 = arr.filter(predicate3); // works string[]
declare const predicate4: TruthyPredicate & ConstructorSignature;
const filtered4 = arr.filter(predicate4); // doesn't work
```
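A possible workaround sketch (hypothetical — it assumes the narrowing only breaks at the type level, so re-typing the value as the bare predicate restores it; the names here are made up for illustration):

```typescript
// Widen the intersection back to the bare predicate type before filtering.
// Runtime behavior is unchanged; only the static type is adjusted.
interface TruthyPredicate2 {
  <T>(o: T): o is T & {};
}
interface EmptyInterface2 {}

const combined: TruthyPredicate2 & EmptyInterface2 =
  ((o: unknown) => Boolean(o)) as unknown as TruthyPredicate2;

const values: (string | undefined)[] = ["a", undefined, "b"];

const bare: TruthyPredicate2 = combined; // drop the intersection
const narrowed = values.filter(bare);    // string[], per the working case
console.log(narrowed);
```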
### 🙁 Actual behavior
Predicate stops working
### 🙂 Expected behavior
Predicate works
### Additional information about the issue
I tried other scenarios
- [truthy variation](https://www.typescriptlang.org/play/?ts=5.6.2#code/JYOwLgpgTgZghgYwgAgCpQK5gBYE8AKUEAJsAnJMgN4CwAUMo8gDyoB8AFAPYBcayAH2QYQxCDFAkAlHy7JgAZzQBuegF969UJFiIUAUQC2ABzC4AkuGjwk1dZrpnjKAMJcQCsJgRguUAMrAAOYgcEEYRMgAvHYMTCAQAO7IHAB06XBQQQp8cCC4ANoAujLIebiqdGqV9GIIADaZKAjunmVQUHwcnlCgQYLCouKSxFLFNXR1jZEtHmDIxkSk5JAAjHzoWHiEJGQUEJWzbRL1OiSr0e1QqSc6HIu7KxCrUsrIAPTvyIl+ANZKPT6xQctQgDSayCO8wey32ACYNpgcAQlntKAAyZBGUwWKy6JCHVrzW7QEhwy6Za4kqD3VFPOGvD5fYhcCAKEAAcnmPygvxBkzB02aRIWdP2AGZEVsUY99shMVRqvQochqSRxRSOjdgKdoLTZZBxYzPt8-gCvECivyphCVTC0RAACxS5E7WEY5BuObeXwBYKhcJEQlzVU6s7ER2aqlhvX2p6O43M1nsrmm3n0IA)
- [generic without undefined](https://www.typescriptlang.org/play/?ts=5.6.2#code/JYOwLgpgTgZghgYwgAgMpiqA5gBShAE2ATkmQG8BYAKGTuQB4AVAPgAoB7ALmSYEoeHZMADOyAKIAPDIjDMANMhEZsLANw0AvjRqhIsRCnEBbAA5gAngElw0eEgpad1S6ZQBhDiGVQArgjAOKFRgLBA4LF98ZABeR1p6EAgAd2Q2ADpMuCgsER44EAsAbQBdAWQCiw1qTWqaAggEABtslAQvZQqoKB42H2xkAB9kEF9jACNoPlK66gbm1uR27zBkU3wiEkgARh50TBBcDeJSCGrlzphgJv1CbdiuqHSrm+g2dcITnb41ZAB6P7IZJBADWYn6h1KznqjRa0Quqw+m1OACY9ipDnhPlsUAAyCRmSw2fT2M40BHIF63Agoh7ZJ5Ut5Ir4QFE-f6AggcCAiEAAclWwKgIOhc1hiwpzJxAGZ0QcjtjTsh8eRauSOqtGRtpXTus9rvp3scZeyAUDQeCMVgodRnPM4W0NWtjacACxy7BY5FkfGeFZ+AJBEJhCJRMnUClawiu3UMg1Ml2QV2mznc3kC83CmhAA)
- [non-generic, always works, but won't be able to use the parameter type](https://www.typescriptlang.org/play/?ts=5.6.2#code/JYOwLgpgTgZghgYwgAgMpiqA5gBShAE2ATkmQG8BYAKGTuQAoB7ALmQGcNtkAfZEAK4BbAEbQAlGybJg7DlxBYA3DQC+NGqEixEKAKJCADmACeASXDR4SCmo3VThlAGEmITlAEIwTKKmBYIHBYAvjIALy2tPQgEADujAB0yXBQWOxscCAmANoAupLIWSYq1KqlNAQQCAA2qSgIbpxFUFBsDB7cfIKiEvkV1FW19ciN7mDIhvhEJJAAjGzomIp4hMSkEKVjzTDANdqEcxEtUIm7+9AMU2uzEHPiSsgA9E-Icb4A1nKdivn2ldU6mFthNrjMNgAmRYKXDTdZkABkyAMxnMlh0SC2TQm5wOBAhx1Sp1xlzB8IgEIez1e7ygX3kyywf2o9iGQIa2MmcNuAGZoYzVuDERRyjQQcgSdMeYTWmc9tortyNjyqS83p9vjDmazASNxWTbgAWfnYQXk5BI1zjTzeXz+QLBUKbMWcyWEQ0y4ny0lKyCG1U0jUM7DMoA)
- [this is why it needs to support generics](https://www.typescriptlang.org/play/?ts=5.6.2#code/JYOwLgpgTgZghgYwgAgMpiqA5gBShAE2ATkmQG8BYAKGTuQB4AVAPgAoB7ALmSYEoeHZMADOyAKIAPDIjDMANMhEZsLANw0AvjRqhIsRCnEBbAA5gAngElw0eEgpad1S6ZQBhDiGVQArgjAOKFRgLBA4LF98ZABeR1p6EAgAd2Q2ADpMuCgsER44EAsAbQBdAWQCiw1qTWqaAggEABtslAQvZQqoKB42AHIvCD7kAB9kPrBkjmGxkF9jACNoPlK66gbm1uR27zBkU3wiEkgARh50TBBcQ+JSCGqdzphgJv1CE9iuqHTn1+g2A6EW6nPhqZAAenByCmUAA1mJ+oMZuNJtMViVnPVGi1oo89oCjncAEznFRXPBA44oABkEjMlhs+ns9xoeOQvzeBCJn2y3w5-wJwIgRNBEKhBA4EBEIAm0KCsMx62xWzZgqpAGZSZdrpS7shaeRaqyOnt+Yd1Tzuj8XvoATcNaLIXK4QiBklkRMpn10YqNji2ib9va7gAWLXYCmEsi0zy7PwBIIhMIRKIs6hss2EEOWvk2gXByAhx3iyXS2UwhXUIA) | Bug,Help Wanted | low | Minor |
2,546,180,556 | pytorch | The doc of `linalg.vector_norm()` should not say `ord` parameter accepts the `str` value `fro` or `nuc` | ### 📚 The doc issue
[The doc](https://pytorch.org/docs/stable/generated/torch.linalg.vector_norm.html) of `linalg.vector_norm()` says `ord` parameter accepts the `str` value `fro` or `nuc` as shown below:
> ord ([int](https://docs.python.org/3/library/functions.html#int), [float](https://docs.python.org/3/library/functions.html#float), inf, -inf, 'fro', 'nuc', optional) – order of norm. Default: 2
But `ord` parameter with `fro` or `nuc` doesn't work as shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([[-2., -1., 0.],
[1., 2., 3.]])
linalg.vector_norm(input=my_tensor, ord='fro') # Error
# ↑ ↑ ↑ ↑ ↑
linalg.vector_norm(input=my_tensor, ord='nuc') # Error
# ↑ ↑ ↑ ↑ ↑
```
> TypeError: linalg_vector_norm(): argument 'ord' must be Number, not str
> TypeError: linalg_vector_norm(): argument 'ord' must be Number, not str
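For context, `'fro'` and `'nuc'` are matrix-norm orders (accepted by `torch.linalg.matrix_norm` and `torch.linalg.norm`), which is presumably why `vector_norm` rejects them. The Frobenius value for the 2×3 example matrix can be checked by hand, without torch:

```python
import math

rows = [[-2., -1., 0.],
        [1., 2., 3.]]
# Frobenius norm: sqrt of the sum of squared entries -- the matrix-norm
# meaning of ord='fro', which vector_norm does not implement.
fro = math.sqrt(sum(v * v for row in rows for v in row))
print(round(fro, 4))  # 4.3589
```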
### Suggest a potential alternative/fix
[The doc](https://pytorch.org/docs/stable/generated/torch.linalg.vector_norm.html) of `linalg.vector_norm()` should not say the `ord` parameter accepts the `str` values `'fro'` or `'nuc'`; it should instead read as shown below:
> ord ([int](https://docs.python.org/3/library/functions.html#int), [float](https://docs.python.org/3/library/functions.html#float), inf, -inf, optional) – order of norm. Default: 2 | triaged,module: norms and normalization,topic: docs | low | Critical |
2,546,209,617 | rust | ICE: "error performing operation: fully_perform" on type inference for trait object with HRTB over GAT | Another ICE when using a HRTB in a trait object similar to #130524, but in a slightly different context and with a different error.
### Code
```Rust
trait Transform {
type Output<'a>;
}
trait Propagate<O> {}
trait AddChild<C> {}
pub struct Node<T>(T);
impl<T> AddChild<Box<dyn for<'b> Propagate<T::Output<'b>>>> for Node<T> where
T: Transform
{
}
fn make_graph_root() {
add_children(Node(Dummy));
}
fn add_children<T, C>(_node: T)
where
T: AddChild<C>,
{
}
struct Dummy;
impl Transform for Dummy {
type Output<'a> = ();
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
Also reproduced on nightly via the playground
### Error output
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: error performing operation: fully_perform
--> repro/src/lib.rs:14:5
|
14 | add_children(Node(Dummy));
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/compiler/rustc_trait_selection/src/traits/query/type_op/custom.rs:85:25 - disabled backtrace
--> repro/src/lib.rs:14:5
|
14 | add_children(Node(Dummy));
| ^^^^^^^^^^^^^^^^^^^^^^^^^
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: error performing operation: fully_perform
--> repro/src/lib.rs:14:5
|
14 | add_children(Node(Dummy));
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/compiler/rustc_trait_selection/src/traits/query/type_op/custom.rs:85:25
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::span_delayed_bug::<rustc_span::span_encoding::Span, alloc::string::String>
4: <rustc_borrowck::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle::ty::ParamEnvAnd<rustc_middle::traits::query::type_op::ProvePredicate>>
5: <rustc_borrowck::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
6: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_const_operand
7: <rustc_borrowck::type_check::TypeVerifier as rustc_middle::mir::visit::Visitor>::visit_body
8: rustc_borrowck::type_check::type_check
9: rustc_borrowck::nll::compute_regions
10: rustc_borrowck::do_mir_borrowck
11: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
12: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
13: rustc_query_impl::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
14: rustc_interface::passes::analysis
15: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
16: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
17: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
18: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
19: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
20: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
21: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
22: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
23: std::sys::pal::unix::thread::Thread::new::thread_start
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/pal/unix/thread.rs:108:17
24: <unknown>
25: <unknown>
--> repro/src/lib.rs:14:5
|
14 | add_children(Node(Dummy));
| ^^^^^^^^^^^^^^^^^^^^^^^^^
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
```
</p>
</details>
| A-lifetimes,I-ICE,A-trait-system,T-compiler,C-bug,S-bug-has-test,fixed-by-next-solver,A-trait-objects,A-GATs,A-higher-ranked | low | Critical |
2,546,227,714 | pytorch | [RFC] Offload collectives to NVSwitch when possible | ### 🚀 The feature, motivation and pitch
NVLink SHARP is an engine in NVSwitch that can perform collectives (e.g. all-reduce).
This feature reduces GPU SM consumption by as much as 6x (from 24 SMs down to 4), while boosting performance by up to 2x (its mechanism is like a one-shot all-reduce, hence the 2x theoretical speedup).
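For intuition, "one-shot" here means every rank's data is read once and reduced in a single step, rather than through a multi-step ring exchange — a toy model in plain Python, not NCCL code:

```python
# Toy model of one-shot all-reduce: 4 ranks, 2 elements each. Every rank
# ends up with the element-wise sum after a single pass over all peers,
# which is roughly what NVLink SHARP does in switch hardware.
buffers = [[1, 2], [3, 4], [5, 6], [7, 8]]

reduced = [sum(rank[i] for rank in buffers) for i in range(len(buffers[0]))]
results = [list(reduced) for _ in buffers]  # broadcast the sum to every rank
print(results[0])  # [16, 20]
```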
To leverage this feature, please see this doc:
https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/bufferreg.html#nvlink-sharp-buffer-registration
It requires the input / output buffers be allocated through a NCCL API -- `ncclMemAlloc`. The mem alloc is now enabled by a stack of PRs allowing CUDACachingAllocator to use different mem alloc backends. See:
original RFC: https://github.com/pytorch/pytorch/issues/124807 and
PR impl: https://github.com/pytorch/pytorch/pull/133603.
## Target use
A first target of the feature can be DDP (in cases where we manage the gradient bucket internally).
A second target would be TP (for example, "async-tp" -- though we'd need to know whether "async-tp" performs an all-reduce or not). Otherwise, if "general" TP is in Inductor's hands, we can ask Inductor to allocate specific memory for the result of the matmul.
Cc: @syed-ahmed
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ptrblck @msaroufim | oncall: distributed,module: cuda | low | Major |
2,546,229,799 | PowerToys | Windows File Explorer compatible file name fix and paste | ### Description of the new feature / enhancement
Paste with Windows File Explorer file-naming compatibility (or some form of regex support).
### Scenario when this would be used?
Naming files/folders in Windows File Explorer; pasting the URL of a saved article; pasting "date-time stamp" formats; etc.
### Supporting information
Since PowerToys is a Windows-only app, this fits well. File Explorer doesn't correct invalid characters; instead it deletes them and then just errors out. The existing "paste as markdown" feature is a form of this, but it would be very helpful to have a Windows File Explorer file-naming version.
2,546,244,145 | pytorch | Setting a `complex` tensor to `linalg.vector_norm()` returns a `float` tensor | ### 🐛 Describe the bug
Passing an `int` tensor to [linalg.vector_norm()](https://pytorch.org/docs/stable/generated/torch.linalg.vector_norm.html) produces the error message shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([-2, -1, 0, 1, 2, 3])
linalg.vector_norm(input=my_tensor) # Error
```
> RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long
So I passed a `complex` tensor to `linalg.vector_norm()`, but a `float` tensor is returned instead of a `complex` tensor, even though I set `dtype=torch.complex64` on `linalg.vector_norm()`, as shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j, 3.+0.j])
linalg.vector_norm(input=my_tensor)
# tensor(4.3589)
# ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
linalg.vector_norm(input=my_tensor, dtype=torch.complex64)
# tensor(4.3589)
```
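Worth noting: a vector norm is real-valued by definition (the square root of a sum of squared magnitudes), so a real result for complex input is at least mathematically consistent — a quick check without torch:

```python
import math

values = [complex(-2, 0), complex(-1, 0), complex(0, 0),
          complex(1, 0), complex(2, 0), complex(3, 0)]
# 2-norm of a complex vector: sqrt(sum(|z|^2)) -- always a real number.
norm = math.sqrt(sum(abs(z) ** 2 for z in values))
print(round(norm, 4))  # 4.3589
```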
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @amjames | triaged,module: complex,module: norms and normalization | low | Critical |
2,546,267,591 | TypeScript | Specialized error message when too new a lib is provided | ### 🔍 Search Terms
lib es2024 es2025 esnext
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
If a `lib` value is provided that's an ES* version newer than what the current version of TypeScript supports, it'd be nice to have a specialized error message saying so. Today we just get the general _"`Argument for '--lib' option must be: ...`"_ error with >=59 unique values to read through. It would be nice to explicitly indicate the lib is an ES* that's not yet supported. Maybe...
```plaintext
Error: npm error tsconfig.json(3,13): error TS####: Argument 'ES2027' for '--lib' option is a year not yet supported by TypeScript as of version 5.6.7.
```
### 📃 Motivating Example
In practice, if a single package in a monorepo falls out of date, then this error is likely to occur. Having that specialized error can help speed up debugging & make it clear what's going wrong.
Example in the wild: https://github.com/eslint/js/pull/631 -> https://github.com/eslint/js/actions/runs/11017802958/job/30596589705?pr=631 -> https://github.com/eslint/js/pull/632
### 💻 Use Cases
1. What do you want to use this for? - Purely for debugging incorrect TS setups.
2. What shortcomings exist with current approaches? - The error is accurate, but not precise.
3. What workarounds are you using in the meantime? - Remembering to check the ES year & TypeScript version every time this comes up.
| Suggestion,Awaiting More Feedback | low | Critical |
2,546,319,964 | rust | [adt_const_params] consider avoiding specialization when implementing traits for Foo<const B: Bar> | Not sure whether this is a gap in the RFC or a bug in the compiler; sorry if I filed this in the wrong place.
The title is a little confusing, so here is an example to illustrate:
```rs
#![feature(adt_const_params)]
use std::marker::ConstParamTy;

#[derive(PartialEq, Eq, ConstParamTy)]
enum Number {
Int,
Float,
}
struct PropWrapper<const N: Number> {}
trait Prop {
type Ty;
}
impl Prop for PropWrapper<{ Number::Int }> {
type Ty = usize;
}
impl Prop for PropWrapper<{ Number::Float }> {
type Ty = f32;
}
struct Foo<const N: Number> {
n: <PropWrapper<N> as Prop>::Ty,
}
```
As you can see, I listed all possible implementations of `Prop` for `PropWrapper` types (in this case, two possible types), but the compiler still complains:
```
error[E0277]: the trait bound `PropWrapper<N>: Prop` is not satisfied
--> src/main.rs:29:8
|
29 | n: <PropWrapper<N> as Prop>::Ty,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Prop` is not implemented for `PropWrapper<N>`
|
= help: the following other types implement trait `Prop`:
PropWrapper<Number::Float>
PropWrapper<Number::Int>
```
Currently my workaround is to use specialization:
```rs
impl<const N: Number> Prop for PropWrapper<N> {
default type Ty = ();
}
```
This is unnecessary, and the default case will never be hit.
Would it be reasonable for the compiler to check whether all possible implementations are listed in the `adt_const_params` case?
I use the nightly build rustc 1.77.0-nightly (f688dd684 2024-01-04) | F-adt_const_params,T-types | low | Critical |
2,546,329,053 | neovim | Outputting more than 50 lines from `vim.schedule_wrap` triggered by `VimLeavePre` makes Neovim get stuck | ### Problem
Running `:wq` with the following Lua code:
```lua
-- string with 51 lines
local result = ""
for _ = 0, 50 do
result = result .. "\n"
end
-- output the `result` generated above when VimLeavePre is triggered
-- cannot be reproduced with VimLeave
vim.api.nvim_create_autocmd("VimLeavePre", {
callback = function()
vim.system({ "ls" }, vim.schedule_wrap(function(obj)
vim.notify(result)
-- `vim.print` and `print` also get stuck
end))
end,
})
```
Neovim will get stuck (can't do anything, except `pkill nvim`).
Thanks for all the amazing work in Neovim.
### Steps to reproduce
Add the above Lua code to `~/.config/nvim/init.lua` and run `:q`.
May not be reproduced with `nvim --clean -u minimal.lua`.
### Expected behavior
Neovim should exit after running `:q`.
### Nvim version (nvim -v)
0.11.0-dev-828+g052875b9dc
### Vim (not Nvim) behaves the same?
Vim doesn't support Nvim API.
### Operating system/version
Arch Linux rolling
### Terminal name/version
Alacritty 0.13.2
### $TERM environment variable
alacritty
### Installation
Arch User Repository (AUR) | bug,event-loop,system,messages | low | Minor |
2,546,344,815 | godot | Unable to load GDExtension in Web export due to LinkError | ### Tested versions
4.3.stable.official.77dcf97d8
### System information
Windows 11 - Vulkan (Forward+)
### Issue description
```
Uncaught (in promise) LinkError: WebAssembly.instantiate():
Import #69 "env" "memory": mismatch in shared state of memory, declared = 1, imported = 0
```
### Steps to reproduce
1. Open the MRP
2. Click `Remote Debug` > `Run in Browser` in the top-right corner.
3. Open the browser console, and you will see the error.
The exported project also has the same error.
### Minimal reproduction project (MRP)
[MPR.zip](https://github.com/user-attachments/files/17120643/MPR.zip)
| bug,platform:web,topic:gdextension | low | Critical |
2,546,367,908 | pytorch | [ONNX] Dynamic shapes: support `torch.sym_not` | `torch.sym_not` does not belong to the `torch.ops.[...]` namespace so we need to have a convention for registering the `sym` ops.
This is a part of the sym support in ONNX. We additionally need to support proper dispatching for float/int/bool types.
For now, users can do
```python
from onnxscript import opset18 as op
import torch
def sym_not(x):
return op.Not(x)
onnx_program = torch.onnx.export(..., dynamo=True, custom_translation_table={torch.sym_not: sym_not})
```
cc @xadupre @gramalingam @titaiwangms @shubhambhokare1 | module: onnx,triaged | low | Minor |
2,546,388,115 | rust | Tracking Issue for `const_mut_cursor` | Feature gate: `#![feature(const_mut_cursor)]`
This is a tracking issue for marking the `get_mut` and `set_position` methods in `std::io::Cursor` as const.
### Public API
```rust
// std::io
impl<T> Cursor<T> {
pub const fn get_mut(&mut self) -> &mut T;
pub const fn set_position(&mut self, pos: u64);
}
```
### Steps / History
- [x] Implementation: #130800
- [ ] Final comment period (FCP)
- [ ] Stabilization PR
### Unresolved Questions
- Can we also "constify" `into_inner`?
| T-libs-api,final-comment-period,C-tracking-issue,disposition-merge | low | Minor |
2,546,391,032 | vscode | Focus is lost when it enters an image preview | Testing #229263
1. Tab to an attached image
2. Press space to preview it
3. Tab to try to get out of it
4. 🐛 focus is reset to the start of the window | bug,panel-chat | low | Minor |
2,546,392,449 | vscode | Delayed tooltip on tab focus is weird | Testing #229263
@meganrogge may know more but the behavior of showing the image preview on delay when focus is inside the image attachment is strange, I've not seen that pattern anywhere else. I think it should only be shown on explicit space action | bug,panel-chat | low | Major |
2,546,394,613 | vscode | Attached image hover flashes when tabbing in the attachment | Testing #229263
1. Tab to an image attachment
2. Wait so that the image preview appears
3. Tab again to be on the "close" action of the image
4. 🐛 the hover flashes
Is there a need for the "close" action to be a separate focus element at all? I think the only possible action for an attachment is to remove it, so maybe it doesn't need to be focused and the default `Enter` action could remove it... | bug,panel-chat | low | Minor |
2,546,397,835 | tauri | [feat] [v2] Scope for root filesystem | ### Describe the problem
There isn't a clean way to access files and directories in the root filesystem.
Example:
SD cards are mounted under the `/Volumes` directory in OSX and there is currently no clean way to access the mounted directories.
### Describe the solution you'd like
Something like a scope for `$ROOT` for interacting with system-level files would be useful.
Context: https://v2.tauri.app/plugin/file-system/#scopes
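To illustrate the idea, here is a hypothetical capability entry if a `$ROOT` variable existed; `$ROOT` and the exact shape shown are part of the proposal, not current Tauri API:

```json
{
  "identifier": "fs-root-volumes",
  "permissions": [
    {
      "identifier": "fs:scope",
      "allow": [{ "path": "$ROOT/Volumes/**" }]
    }
  ]
}
```

This would let an app read SD cards mounted under `/Volumes` on macOS without running as root.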
### Alternatives considered
Running the app with sudo permissions works, but it is difficult to package.
### Additional context
_No response_ | type: feature request | low | Minor |
2,546,481,762 | rust | Tracking Issue for `file_buffered` | Feature gate: `#![feature(file_buffered)]`
This is a tracking issue for `File` constructors that return files wrapped with a buffer.
In addition to the light convenience, these are intended to raise visibility that buffering is something you should consider when opening a file, since unbuffered I/O is a common performance footgun for Rust newcomers.
### Public API
```rust
// std::fs
impl File {
pub fn open_buffered<P: AsRef<Path>>(path: P) -> io::Result<io::BufReader<File>>;
pub fn create_buffered<P: AsRef<Path>>(path: P) -> io::Result<io::BufWriter<File>>;
}
```
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] API Change Proposal (ACP): rust-lang/libs-team#446
- [x] Implementation: #130803
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue,A-filesystem | low | Major |
2,546,482,997 | transformers | Saving model with shared tensors fails on cpu but succeeds on gpu | ### System Info
platform: linux: `ubuntu 22.04`
python version: `3.10.12`
transformers version: `4.44.2`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python3
# example.py
import torch
import pytest
from transformers import AutoModelForCausalLM
@pytest.mark.parametrize(
"torch_dtype,tie_word_embeddings,device_map",
[
(torch.float16, True, "cpu" ), # passes
(torch.float16, False, "cpu" ), # passes
(torch.float32, True, "cpu" ), # passes
(torch.float32, False, "cpu" ), # fails
(torch.float32, False, "cuda:0"), # passes
],
)
def test_model_save(torch_dtype, tie_word_embeddings, device_map, tmp_path):
model = AutoModelForCausalLM.from_pretrained(
"Xenova/llama2.c-stories15M",
torch_dtype=torch_dtype,
tie_word_embeddings=tie_word_embeddings,
device_map=device_map,
)
model.save_pretrained(tmp_path, safe_serialization=True)
# test that the model saved correctly
reloaded = AutoModelForCausalLM.from_pretrained(
tmp_path,
torch_dtype="auto",
device_map=device_map
)
model_dict = model.state_dict()
reloaded_dict = reloaded.state_dict()
assert model_dict.keys() == reloaded_dict.keys()
for key in model_dict:
assert torch.equal(model_dict[key], reloaded_dict[key])
assert model_dict[key].device == reloaded_dict[key].device
```
```bash
python3 -m pytest example.py
```
```
RuntimeError:
Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'lm_head.weight', 'model.embed_tokens.weight'}].
A potential way to correctly save your model is to use `save_model`.
More information at https://huggingface.co/docs/safetensors/torch_shared_tensors
```
### Expected behavior
I expect `save_pretrained` to have the same behavior, regardless of model data type, and regardless of device | bug | low | Critical |
2,546,494,634 | flutter | [go_router] Preserve pathParameters when navigating to a route under the same parent route with mutual parameters | ### Use case
I develop applications where the user can use multiple accounts. In order to make it as robust as possible, I am utilizing GoRouter for keeping the state of which account is active.
Needing to fetch the current route's state, picking out the required path parameters, and then passing them to e.g. `push()` is a bit tiresome.
My routes usually look like this:
- /
- /:account
- a
- b
- c
In this example, I'd like to push to `c`, while on `a`, but keep the `account` path parameter.
### Proposal
Make GoRouter's methods automatically carry over mutual path parameters. Maybe make it an optional parameter flag, so it doesn't break other applications. | c: new feature,package,c: proposal,P2,p: go_router,team-go_router,triaged-go_router | low | Minor |
2,546,495,760 | rust | Associated type bounds are not equivalent to de-sugared `where` clauses for supertraits and nested associated types | I am not sure if this is a bug or a documentation issue.
The Rust Reference [states](https://doc.rust-lang.org/reference/trait-bounds.html):
> - In trait declarations as supertraits: `trait Circle: Shape {}` is equivalent to `trait Circle where Self: Shape {}`
> - In trait declarations as bounds on associated types: `trait A { type B: Copy; }` is equivalent to `trait A where Self::B: Copy { type B; }`
This matches the [associated type bounds RFC](https://github.com/rust-lang/rfcs/blob/master/text/2289-associated-type-bounds.md#reference-level-explanation):
> - The surface syntax `T: Trait<AssociatedType: Bounds>` should desugar to a pair of bounds: `T: Trait` and `<T as Trait>::AssociatedType: Bounds`.
> - The new syntax does not introduce any new semantics.
The [stabilization PR](https://github.com/rust-lang/rust/pull/122055#issue-2170532454) says *almost* the same thing, but with a subtle difference:
> - […] `where T: Trait<Assoc: Bound>` is equivalent to `where T: Trait, <T as Trait>::Assoc: Bound`.
> - Supertraits - Similar to above, `trait CopyIterator: Iterator<Item: Copy> {}`. This is **almost** equivalent to breaking up the bound into two (or more) `where` clauses; however, **the bound on the associated item is implied whenever the trait is used**. See [Should associated type bounds on supertraits be implied? #112573](https://github.com/rust-lang/rust/issues/112573)/[Make associated type bounds in supertrait position implied #112629](https://github.com/rust-lang/rust/pull/112629).
> - Associated type item bounds - This allows constraining the nested rigid projections that are associated with a trait's associated types. e.g. `trait Trait { type Assoc: Trait2<Assoc2: Copy>; }`.
(Emphasis mine.)
However, the PR linked in that second bullet, [Make associated type bounds in supertrait position implied](https://github.com/rust-lang/rust/pull/112629), implies that this `where` clause implies the same bounds as the `B<Assoc: C>` syntax:
> `trait A: B<Assoc: C> {}` should be able to imply both `Self: B` and `<Self as B>::Assoc: C`.
For normal associated types, this is definitely the case, but for bounds on associated types of supertraits, and bounds on associated types of associated types, these two forms are demonstrably *not* equivalent:
This compiles:
```rust
// A trait which has bounds on a nested associated type,
// using shorthand associated type bounds syntax.
// Associated type bounds ARE implied elsewhere.
pub trait NestedSugar
{
// Sufficient.
type Output: Iterator<Item: Clone>;
fn make() -> Self::Output;
}
pub fn nested_sugar<T>() -> <T::Output as Iterator>::Item
where
T: NestedSugar,
// `No <T::Output as Iterator>::Item: Clone` required.
{
T::make().next().unwrap().clone()
}
```
And this compiles:
```rust
// Supertrait with bounds on the associated type of the subtrait,
// using shorthand associated type bounds syntax.
// Supertrait associated type bounds ARE implied elsewhere.
pub trait SuperSugar
where
Self: Iterator<Item: PartialEq>,
{
fn super_next_sugar(&self) -> <Self as Iterator>::Item;
}
pub fn take_sugar<I>(iter: I) -> bool
where
I: SuperSugar,
// No `<I as Iterator>::Item: PartialEq` required.
{
let first = iter.super_next_sugar();
let second = iter.super_next_sugar();
first == second
}
```
But this does not compile:
```rust
// A trait which has bounds on a nested associated type,
// using a where clause.
// The associated type bounds are NOT implied elsewhere.
pub trait NestedWhere
where
Self::Output: Iterator,
// Not sufficient.
<Self::Output as Iterator>::Item: Clone,
{
type Output;
// GAT-ish syntax is also not sufficient:
//type Output: Iterator where <Self::Output as Iterator>::Item: Clone;
fn make() -> Self::Output;
}
pub fn nested_where<T>() -> <T::Output as Iterator>::Item
where
T: NestedWhere,
// Required. Does not compile without this line:
//<T::Output as Iterator>::Item: Clone,
// error[E0277]: the trait bound `<<T as NestedWhere>::Output as Iterator>::Item: Clone` is not satisfied
{
T::make().next().unwrap().clone()
}
```
Nor does this:
```rust
// Supertrait with bounds on the associated type of the subtrait,
// using a where clause.
// Associated type bounds are NOT implied elsewhere.
pub trait SuperWhere
where
Self: Iterator,
// Not sufficient.
<Self as Iterator>::Item: PartialEq,
{
fn super_next_where(&self) -> <Self as Iterator>::Item;
}
pub fn take_where<I>(iter: I) -> bool
where
I: SuperWhere,
// Required. Does not compile without this line:
//<I as Iterator>::Item: PartialEq,
// error[E0277]: can't compare `<I as Iterator>::Item` with `<I as Iterator>::Item`
// help: the trait `PartialEq` is not implemented for `<I as Iterator>::Item`
{
let first = iter.super_next_where();
let second = iter.super_next_where();
first == second
}
```
---
## Playground
Playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=faec69dd74703073da0a6183cceb9afd
## See also
- [rust-lang/rust#58231: Trait bounds on associated type causes confusing compilation error](https://github.com/rust-lang/rust/issues/58231)
- [rust-lang/rust#57905: Compiler unable to apply trait bound](https://github.com/rust-lang/rust/issues/57905)
- [rust-lang/rust#20671: where clauses are only elaborated for supertraits, and not other things](https://github.com/rust-lang/rust/issues/20671)
- [rust-lang/rust#103387: `where` on `trait` declaration should be provided when the trait is an input bound, but is instead a requirement](https://github.com/rust-lang/rust/issues/103387)
All of those issues talk about type bounds not being implied in one case or another, but as far as I can tell there is no issue about the *discrepancy* of implied bounds between these two syntaxes. And again I am not sure if this is a bug or a documentation (and possibly diagnostics) issue. | A-trait-system,T-types | low | Critical |
2,546,525,315 | godot | EditorPlugin scene_changed signal not fired when opening a scene when the current scene is empty | ### Tested versions
- Tested in 4.3.stable
- Tested in 4.4.dev (c3e16cda0)
### System information
Godot v4.3.stable (77dcf97d8) - Linux Mint 21.3 (Virginia) - X11 - Vulkan (Forward+) - dedicated AMD Unknown (RADV GFX1102) - AMD Ryzen 5 2600X Six-Core Processor (12 Threads)
### Issue description
When the current edited scene is an empty scene, opening any scene file will replace the empty scene without firing the `EditorPlugin.scene_changed` signal.
### Steps to reproduce
1. Make an `EditorPlugin`.
2. Connect to the `scene_changed` signal.
3. Have an empty scene open (can be the only scene, or just create a new empty scene).
4. Open any scene file (either through the filesystem dock or `EditorInterface.open_scene_from_path`).
5. The `scene_changed` signal will not be fired with the newly opened scene.
### Minimal reproduction project (MRP)
[scene-changed-mrp.zip](https://github.com/user-attachments/files/17121244/scene-changed-mrp.zip)
1. Open project.
2. Open the output panel and observe the `<Object#null>` showing that `EditorPlugin.scene_changed` was called for the empty scene (correct, expected behaviour).
3. Open `test_scene.tscn`.
4. Observe no new messages in the output panel. | discussion,topic:editor,topic:plugin,needs testing | low | Minor |
2,546,528,628 | pytorch | Bool convolutions (and other integral types) | ### 🚀 The feature, motivation and pitch
I'm working on an AI for Dr. Mario and would like to look for certain patterns on the game board. This operation is quite naturally represented as a convolution using the Bool operations (* = and, + = or). In detail:
Suppose you have a game board that's a two dimensional grid with some tokens in each grid space. If you categorize the tokens, you could represent a collection of boards with a one-hot encoding with dimensions {n, c, wb, hb}, where n is the number of boards in the collection, c is the number of categories, wb is the width of the board, and hb its height.
I want to find all the places on the board that have a certain local pattern. A pattern is a rectangle smaller than the board, and a set of allowed categories for each position in the rectangle. You could represent a collection of patterns by a tensor of dimensions {m, c, wp, hp}, where m is the number of patterns in the collection, c is the number of categories, wp is the width of the pattern rectangle, and hp is the height of the pattern rectangle, and where a 1 indicates that the given category is *not* allowed in the given pattern+position. N.B. this one is not one-hot -- there may be any number of categories that are forbidden in a given pattern+position, including none or all of them.
If you use these representations, then calling `conv2d(boards, patterns).logical_not()` with the Bool operations would give you an {n, m, wb-wp+1, hb-hp+1} tensor with a 1 in exactly the locations that match the relevant pattern... if it actually worked.
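Here's a minimal pure-Python sketch (my own, not torch code) of the requested semantics for a single board and a single pattern, dropping the batch and one-hot category dimensions: `*` = and, `+` = or, plus the final `logical_not()`:

```python
def matches(board, forbidden):
    """board: 2D list of category ids (hb rows x wb cols).
    forbidden: 2D list (hp x wp) of sets of category ids NOT allowed
    at each pattern position.
    Returns an (hb-hp+1) x (wb-wp+1) bool grid: True where it matches."""
    hb, wb = len(board), len(board[0])
    hp, wp = len(forbidden), len(forbidden[0])
    out = []
    for y in range(hb - hp + 1):
        row = []
        for x in range(wb - wp + 1):
            # The bool "convolution": OR (+) over pattern positions of
            # (board category AND (*) forbidden mask).
            hit = any(
                board[y + i][x + j] in forbidden[i][j]
                for i in range(hp)
                for j in range(wp)
            )
            row.append(not hit)  # the trailing logical_not()
        out.append(row)
    return out

board = [
    [0, 1, 0],
    [1, 1, 0],
]
forbidden = [[{0}, set()]]  # 1x2 pattern: left cell must not be category 0
print(matches(board, forbidden))  # [[False, True], [True, True]]
```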
### Alternatives
Current workaround plan is to do the convolution with floats and check for equality with zero. It just seems wasteful in both space and time to use floating point operations instead of logical ones.
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged,module: boolean tensor | low | Minor |
2,546,545,780 | flutter | SubmenuButton overlay not always shown on focused menu buttons on iOS | ### Steps to reproduce
1. Create a `MenuBar` containing `SubmenuButton` submenus.
2. Override the `MenuButtonTheme` so that the `overlayColor` is a color that contrasts with the theme surface color when the button is focused, and override the `foregroundColor` to be a color that contrasts with that color also when the button is focused.
### Expected results
Since the foreground and overlay colors contrast with one another, and the overlay is shown on the menu when the menu is focused, this should ensure that the text will always be visible.
### Actual results
In fact, the overlay color is not always shown on the menu when the menu is focused. Specifically, on iOS (and only iOS), when the menu is focused by tapping on it, the overlay is hidden when the touch is removed, even though the menu remains in a focused state.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter MenuAnchor Focus Color Reproduction'),
);
}
}
class MyHomePage extends StatelessWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
Widget build(BuildContext context) {
return Theme(
data: Theme.of(context).copyWith(
menuButtonTheme: MenuButtonThemeData(
style: ButtonStyle(
overlayColor: WidgetStateProperty.resolveWith(
(Set<WidgetState> states) {
return states.contains(WidgetState.focused)
? Theme.of(context).colorScheme.primary
: Theme.of(context).colorScheme.surface;
},
),
foregroundColor: WidgetStateProperty.resolveWith(
(Set<WidgetState> states) {
return states.contains(WidgetState.focused)
? Theme.of(context).colorScheme.onPrimary
: Theme.of(context).colorScheme.onSurface;
},
),
),
),
),
child: Scaffold(
appBar: AppBar(
title: Text(title, style: TextStyle(fontFamily: 'ProductSans')),
bottom: PreferredSize(
preferredSize: Size.fromHeight(48.0),
child: MenuBar(
children: [
SubmenuButton(
menuChildren: [
MenuItemButton(
child: Text('New'),
onPressed: () => print('New'),
),
],
child: Text('File'),
),
SubmenuButton(
menuChildren: [
MenuItemButton(
child: Text('Undo'),
onPressed: () => print('Undo'),
),
],
child: Text('Edit'),
),
],
),
),
),
body: Align(
alignment: Alignment.topCenter,
child: Padding(
padding: const EdgeInsets.symmetric(
horizontal: 96.0,
vertical: 24.0,
),
child: Text(
'On iOS, tap on one of the menus above, and notice that the focus '
'color of the menu background and text are not in sync. It will '
'be correct on any other platform, or if you press tab to focus '
'the menu.',
),
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
N/A
### Flutter Doctor output
N/A | platform-ios,framework,P2,customer: quake (g3),team-design,triaged-design | low | Critical |
2,546,547,800 | TypeScript | Remove Map<any, any> constructor overload | ### ⚙ Compilation target
ES2017
### ⚙ Library
lib.es2015.iterable.d.ts
### Missing / Incorrect Definition
This isn't exactly the kind of issue this form (Library issue) seems to be made for, but it looked like the closest match.
The `Map` constructor has an explicit overload to make `new Map()` produce a `Map<any, any>`, rather than safer option of allowing normal type inference to occur. I think the only change that needs to happen is removing the overload. This was already brought up in https://github.com/microsoft/TypeScript/issues/52552, but that was closed because it did not provide a clear usecase, and the focus was on inconsistency between `Map` and `Set`.
For me, the usecase is about `Map` alone - removing a source of silent and infectious `any`s. `noImplicitAny` is a recommended setting for good reason, but is undermined by the presence of `any`s in library types. In fact, I am creating this issue after fixing a bug in my own code that was hidden by this typing.
A fair counterargument is that it may break existing code. My guess - total speculation - is that the override was added in the past when inference was not as effective, and it is no longer necessary in most cases. Where the empty map is meant to conform to a contextual type, it works fine. Toying around with it myself, I find two main cases where it breaks:
1. `Map<any, any>` was literally the intent. I'd strongly argue these cases should be explicit.
2. The `Map` is constructed without contextual types, but it is meant to conform to them later. As a result, the code in-between those places is unsafe, as in the example below. Though unsafe, this might be a common pattern in practice that the change would break. (Ofc a solution is to explicitly type the `Map` construction.)
```ts
function getWordCounts(text: string): Map<string, number> {
const m = new Map();
for (const w of text.split(' ')) {
m.set(w, (m.get(w) ?? 0) + 1);
}
return m;
}
```
### Sample Code
```TypeScript
// The problem is that this at-at-glance reasonable function is returning `any`.
function numRudeWords(input: string | null) {
// ^? function numRudeWords(input: string | null): any
const wordCounts = input ? getWordCounts(input) : new Map();
// ^? const wordCounts: Map<any, any>
return (wordCounts.get('meanie') ?? 0) + (wordCounts.get('dangnabit') ?? 0);
}
function getWordCounts(text: string): Map<string, number> {
const m = new Map<string, number>();
for (const w of text.split(' ')) {
m.set(w, (m.get(w) ?? 0) + 1);
}
return m;
}
```
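For what it's worth, the explicit-annotation workaround available today looks like this; it produces the same safety that removing the overload would make the default (function names reuse the sample above):

```typescript
function numRudeWords(input: string | null): number {
  // Annotating here (or writing `new Map<string, number>()`) avoids
  // the Map<any, any> overload and keeps the return type as number.
  const wordCounts: Map<string, number> =
    input !== null ? getWordCounts(input) : new Map();
  return (wordCounts.get("meanie") ?? 0) + (wordCounts.get("dangnabit") ?? 0);
}

function getWordCounts(text: string): Map<string, number> {
  const m = new Map<string, number>();
  for (const w of text.split(" ")) {
    m.set(w, (m.get(w) ?? 0) + 1);
  }
  return m;
}

console.log(numRudeWords("you meanie meanie dangnabit")); // 3
```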
### Documentation Link
The MDN doc link is here, though it's not super relevant to this particular question: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/Map | Suggestion,Awaiting More Feedback | low | Critical |
2,546,562,234 | rust | Argument with type `impl FnMut` combined with recursion and closures causes infinite recursion in compiler | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
pub enum ObjectOrVec {
Object(u8),
V(Vec<ObjectOrVec>),
}
pub fn gen(gen_random_num: fn() -> u8) -> ObjectOrVec {
use ObjectOrVec::*;
fn gen_recursively(
gen_random_num: fn() -> u8,
recurses: usize,
mut visitor: impl FnMut(ObjectOrVec),
) {
match gen_random_num() {
1 if recurses > 0 => {
let mut v = Vec::new();
gen_recursively(gen_random_num, recurses - 1, |x| v.push(x));
visitor(V(v));
}
n => visitor(Object(gen_random_num())),
}
}
let mut v = Vec::new();
gen_recursively(gen_random_num, 3, |x| v.push(x));
V(v)
}
```
I expected to see this happen: Code compiles and generates 2 versions of function `gen`.
Instead, this happened: compiler errors by reaching recursion limit. Increasing recursion limit causes stack overflow.
Error message:
```
error: reached the recursion limit while instantiating `gen_recursively::<{closure@src/lib.rs:16:63: 16:66}>`
--> src/lib.rs:16:17
|
16 | gen_recursively(gen_random_num, recurses - 1, |x| v.push(x));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: `gen_recursively` defined here
--> src/lib.rs:8:5
|
8 | / fn gen_recursively(
9 | | gen_random_num: fn() -> u8,
10 | | recurses: usize,
11 | | mut visitor: impl FnMut(ObjectOrVec),
12 | | ) {
| |_____^
```
Link to playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=571ea952b43ae93438a8f4bf9c2b0ec3
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-pc-windows-msvc
release: 1.81.0
LLVM version: 18.1.7
```
| T-compiler,C-bug | low | Critical |
2,546,576,419 | godot | Linux: Blender process not stopping on Godot exit | ### Tested versions
Godot v4.3.stable
### System information
Godot v4.3.stable - KDE neon 6.1 22.04 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 (nvidia; 535.183.01) - AMD Ryzen 5 5500 (12 Threads)
### Issue description
After exiting Godot, my Blender process stays alive.
Below is a sample output of `ps -ef | grep blender` after exiting Godot.
```
user 840681 1998 0 00:31 ? 00:00:01 /snap/blender/5374/blender --background --python-expr import bpy, sys, threading from xmlrpc.server import SimpleXMLRPCServer req = threading.Condition() res = threading.Condition() info = None export_err = None def xmlrpc_server(): server = SimpleXMLRPCServer(('127.0.0.1', 6011)) server.register_function(export_gltf) server.serve_forever() def export_gltf(opts): with req: global info info = ('export_gltf', opts) req.notify() with res: res.wait() if export_err: raise export_err # Important to return a value to prevent the error 'cannot marshal None unless allow_none is enabled'. return 'BLENDER_GODOT_EXPORT_SUCCESSFUL' if bpy.app.version < (3, 0, 0): print('Blender 3.0 or higher is required.', file=sys.stderr) threading.Thread(target=xmlrpc_server).start() while True: with req: while info is None: req.wait() method, opts = info if method == 'export_gltf': try: export_err = None bpy.ops.wm.open_mainfile(filepath=opts['path']) if opts['unpack_all']: bpy.ops.file.unpack_all(method='USE_LOCAL') bpy.ops.export_scene.gltf(**opts['gltf_options']) except Exception as e: export_err = e info = None with res: res.notify()
```
### Steps to reproduce
* Open Godot Project, which uses Blender imports
* Close Godot and see the process list, i.e. `ps -ef | grep blender`
### Minimal reproduction project (MRP)
- | bug,platform:linuxbsd,topic:thirdparty | low | Critical |
2,546,581,279 | deno | deno run panics with BYONM if the main module specifies a dist tag and the package is in `node_modules` | This was originally spotted because I tried to have npm version requirements default to the `latest` tag in `deno_semver`. While testing out the change in deno I ran into this panic. Outside of that setting, it's a little convoluted to encounter.
Repro:
Get `cowsay` into the node_modules dir
```
deno run --node-modules-dir=auto -A npm:cowsay
```
Then try to run it with BYONM, and a dist tag
```
❯ deno run --node-modules-dir=manual -A npm:cowsay@latest
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: macos aarch64
Version: 2.0.0-rc.5
Args: ["deno", "run", "--node-modules-dir=manual", "-A", "npm:cowsay@latest"]
thread 'main' panicked at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/deno_semver-0.5.13/src/lib.rs:284:32:
programming error: cannot use matches with a tag: latest
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
``` | bug | low | Critical |
2,546,587,618 | vscode | SCM Graph - Unsure how to tell when graph is locked to a repository | Testing #229375
I noticed when I selected a specific repository for the SCM graph that the repository name next to the book icon changed, but that the branch icon still had the text "auto" next to it. Is there some indicator for when I have the SCM graph set to a specific repository versus when I have it set to auto? | ux,scm,under-discussion | low | Minor |
2,546,587,775 | pytorch | torch.export support for the latest transformers `DynamicCache` as input | Hugging Face `transformers` is moving to use the `DynamicCache` class as part of the model inputs for the kv cache values. Currently `torch.export` will complain that it is not a tensor. So all models that take `DynamicCache` as input will not be exportable. This affects torch.onnx and other exporters dependent on torch.export alike.
```
transformers==4.44.2
```
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, DynamicCache
# Get position_ids from attention_mask
def get_position_ids(attention_mask: torch.Tensor, use_past_kv: bool):
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
if use_past_kv:
# Shape: (batch_size, 1)
position_ids = position_ids[:, -1].unsqueeze(-1)
# Shape: (batch_size, sequence_length)
return position_ids
# Create empty past_key_values
def get_past_kv_inputs(config: AutoConfig, batch_size: int):
num_heads = config.num_key_value_heads
head_size = (
config.head_dim
if hasattr(config, "head_dim")
else config.hidden_size // config.num_attention_heads
)
past_kv = [
(
torch.rand(batch_size, num_heads, 0, head_size, dtype=torch.float32),
torch.rand(batch_size, num_heads, 0, head_size, dtype=torch.float32),
)
for _ in range(config.num_hidden_layers)
]
return past_kv
def get_merged_model_dynamic_axes(input_names: List[str], output_names: List[str]):
dynamic_axes = {}
for name in input_names + output_names:
if name in {"input_ids", "position_ids"}:
# shape is (batch_size, sequence_length)
dynamic_axes[name] = {0: "batch_size", 1: "sequence_length"}
elif name == "attention_mask":
# shape is (batch_size, past_sequence_length + sequence_length) = (batch_size, total_sequence_length)
# for prompt generation, past_sequence_length = 0
# for token generation, sequence_length = 1
dynamic_axes[name] = {0: "batch_size", 1: "total_sequence_length"}
elif "past" in name:
# shape is (batch_size, num_heads, past_sequence_length, head_size)
dynamic_axes[name] = {0: "batch_size", 2: "past_sequence_length"}
elif name == "logits":
# shape is (batch_size, sequence_length, vocab_size)
dynamic_axes[name] = {0: "batch_size", 1: "sequence_length"}
elif "present" in name:
# shape is (batch_size, num_heads, past_sequence_length + sequence_length, head_size) = (batch_size, num_heads, total_sequence_length, head_size)
# for prompt generation, past_sequence_length = 0
# for token generation, sequence_length = 1
dynamic_axes[name] = {0: "batch_size", 2: "total_sequence_length"}
else:
raise ValueError("Unknown input or output name found")
return dynamic_axes
model_name = "google/gemma-2-2b"
cache_dir = "./cache_dir"
config = AutoConfig.from_pretrained(model_name, cache_dir=cache_dir)
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir=cache_dir)
batch_size, sequence_length = 2, 8
inputs = {
"input_ids": torch.randint(
low=1, high=config.vocab_size, size=(batch_size, sequence_length), dtype=torch.int64
),
"attention_mask": torch.ones((batch_size, sequence_length), dtype=torch.int64),
"position_ids": get_position_ids(
torch.ones((batch_size, sequence_length), dtype=torch.int64), use_past_kv=False
),
"past_key_values": DynamicCache.from_legacy_cache(get_past_kv_inputs(config, batch_size)),
}
program = torch.export.export(
model,
args=tuple(inputs.values()),
strict=False
)
```
Will raise an error
```pytb
Traceback (most recent call last):
File "/workspace/ONNXConverter/optimum_export.py", line 83, in <module>
program = torch.export.export(
^^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/__init__.py", line 366, in export
return _export(
^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py", line 1643, in _non_strict_export
) = make_fake_inputs(
^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_export/non_strict_utils.py", line 193, in make_fake_inputs
fake_args, fake_kwargs = tree_map_with_path(
^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/utils/_pytree.py", line 1608, in tree_map_with_path
return treespec.unflatten(func(*xs) for xs in zip(*all_keypath_leaves))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/utils/_pytree.py", line 803, in unflatten
leaves = list(leaves)
^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/utils/_pytree.py", line 1608, in <genexpr>
return treespec.unflatten(func(*xs) for xs in zip(*all_keypath_leaves))
^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_export/non_strict_utils.py", line 194, in <lambda>
lambda kp, val: fakify(fake_mode, kp, val, t_constraints, sources),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_export/non_strict_utils.py", line 95, in fakify
raise ValueError(f"Unsupported input type {type(t)}")
ValueError: Unsupported input type <class 'transformers.cache_utils.DynamicCache'>
```
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @kunal-vaishnavi @xadupre @gramalingam | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,546,594,046 | transformers | llama `tie_word_embeddings` ignored on cpu and with auto dtype only | ### System Info
platform: linux: `ubuntu 22.04`
python version: `3.10.12`
transformers version: `4.44.2`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python3
import torch
import pytest
from transformers import AutoModelForCausalLM
@pytest.mark.parametrize(
"torch_dtype,tie_word_embeddings,device_map",
[
(torch.float16, False, "cpu" ), # passes
(torch.float32, False, "cpu" ), # fails
(torch.float32, False, "cuda:0"), # passes
(torch.float16, True, "cpu" ), # passes
(torch.float32, True, "cpu" ), # passes
(torch.float32, True, "cuda:0"), # passes
],
)
def test_model_shared(torch_dtype, tie_word_embeddings, device_map, tmp_path):
# load model
model = AutoModelForCausalLM.from_pretrained(
"Xenova/llama2.c-stories15M",
torch_dtype=torch_dtype,
tie_word_embeddings=tie_word_embeddings,
device_map=device_map
)
# modify lm head
with torch.no_grad():
model.lm_head.weight += 1
# check that embed_tokens is not modified
if tie_word_embeddings:
assert torch.equal(model.lm_head.weight, model.model.embed_tokens.weight)
else:
assert not torch.equal(model.lm_head.weight, model.model.embed_tokens.weight)
```
### Expected behavior
I expect the tensors to not be tied when `tie_word_embeddings=False`. Instead, they remain tied. This seems to be the root cause of #33688 | Core: Modeling,bug | low | Minor |
2,546,596,037 | deno | deno add: wildcard version requirement doesn't match pre-release versions | Split out from #25813,
Repro:
```
❯ deno add 'npm:storybook-solidjs-vite@*'
error: npm:storybook-solidjs-vite was not found.
```
Interestingly, most of the other subcommands (e.g. `deno run`) behave as expected
```
❯ deno run 'npm:storybook-solidjs-vite@*'
error: Failed resolving binary export. '/repro/node_modules/.deno/storybook-solidjs-vite@1.0.0-beta.2/node_modules/storybook-solidjs-vite/package.json' did not have a bin property
```
(there is no bin entrypoint, so this error is correct, the important bit is that it resolved to `1.0.0-beta.2`) | bug | low | Critical |
2,546,603,324 | pytorch | support FakeTensor input for torch.compile | ### 🚀 The feature, motivation and pitch
In AutoFSDP, I want to estimate flops and runtimes without running model on real GPUs (so I can explore best module wrapping policy without running large scale jobs). I was trying to use FakeTensor input for torch.compile. I got following error from `backend=inductor`: **fake mode from fake tensor input 0 allocated at...**.
I wonder what the workaround is to make it work. There are maybe two fake modes: one from my user code, and one from the PT2 stack.
If it helps triage, `backend=eager` works
repro
```
import torch
import torch.nn as nn
from torch._subclasses.fake_tensor import FakeTensorMode
with FakeTensorMode():
model = nn.Linear(4, 4, device="cuda")
inp = torch.rand(4, 4, device="cuda")
loss = torch.compile(model)(inp).sum()
loss.backward()
```
error stack
```
File "/data/users/weif/pytorch/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/data/users/weif/pytorch/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/weif/pytorch/torch/_dynamo/symbolic_convert.py", line 2987, in RETURN_VALUE
self._return(inst)
File "/data/users/weif/pytorch/torch/_dynamo/symbolic_convert.py", line 2972, in _return
self.output.compile_subgraph(
File "/data/users/weif/pytorch/torch/_dynamo/output_graph.py", line 1117, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/data/users/weif/pytorch/torch/_dynamo/output_graph.py", line 1360, in compile_and_call_fx_graph
backend_fake_mode = torch._subclasses.FakeTensorMode(
File "/data/users/weif/pytorch/torch/_subclasses/fake_tensor.py", line 1174, in __init__
self._stack_trace = traceback.extract_stack()
fake mode from fake tensor input 0 allocated at:
File "/data/users/weif/pytorch/test_fake_tensor_compile.py", line 5, in <module>
with FakeTensorMode():
File "/data/users/weif/pytorch/torch/_subclasses/fake_tensor.py", line 1174, in __init__
self._stack_trace = traceback.extract_stack()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 @bdhirsh | triaged,oncall: pt2,module: fakeTensor,module: dynamo,module: pt2-dispatcher | low | Critical |
2,546,603,684 | PowerToys | [Peek] Implement pre-loading of the Next/Previous image for ImagePreviewer | ### Description of the new feature / enhancement
In the current Peek implementation, each time the user navigates to a new image, an entirely separate instance of the ImagePreviewer class is created, and then the image is loaded and displayed. This is fine for peeking single images or small files, but when navigating through a folder of large images, the constant loading is distracting. It would be advantageous to pre-load the Next or Previous image, so switching could be instantaneous when/if the user navigated in that direction.
Please note: this is separate from the #34515 issue, but it is likely that both will involve not destroying prior previewer instances between navigation events.
### Scenario when this would be used?
This would greatly speed up the transition between images when a user is navigating through images in a folder, making Peek appear more responsive. The benefit would only increase if those images were larger and required time to load and decode.
### Supporting information
Various dedicated image viewers like ACDsee, IrfanView and others have been doing this for many years, and it is a standard default feature in those applications. | Needs-Triage | low | Minor |
2,546,611,567 | pytorch | DistributedSampler shuffle option doesn't work as expected | ### 🐛 Describe the bug
When doing multi-GPU processing and using DistributedSampler, the shuffle=True setting results in some files in the dataset not being used on any of the GPUs. I found this when using the training script for evaluation and not turning off the shuffle: the number of output files written was lower than expected. I finally traced this to the fact that some of the dataset files were processed twice, and therefore those outputs were overwritten, while some were not processed at all. Setting shuffle=False fixes the problem. This is an acceptable workaround for evaluation, but it could have unintended consequences for training.
Shuffle to me means that you are sampling without replacement. The behavior that I see is sampling without replacement within each GPU, but not across the GPUs. That is, it is possible for different GPUs to end up processing the same file.
An easy way to reproduce is to create a dataset whose size equals the number of GPUs and print out which files each GPU is processing.
Below is a pretty standard prepare-DataLoader function; the only change is to allow different configurations for single vs. multiple GPUs. This code doesn't have the problem for evaluation because the number of epochs for evaluation is always 1.
```python
def prepare_dataloader(args, worldSize, dataset: Dataset, rng):
    if args.n_epochs == 1:
        shuffle_data = False  # shuffling is not exclusive between GPU's which causes issues at evaluation
    else:
        shuffle_data = True
    if worldSize == 1:
        outLoader = DataLoader(
            dataset,
            batch_size=args.batch_size,
            num_workers=args.num_workers,
            pin_memory=True,
            generator=rng,
            shuffle=True,
        )
    else:
        outLoader = DataLoader(
            dataset,
            batch_size=args.batch_size,
            num_workers=args.num_workers,
            pin_memory=True,
            shuffle=False,
            sampler=DistributedSampler(dataset, num_replicas=worldSize, shuffle=shuffle_data)
        )
    return outLoader
```
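To reason about the assignment without any GPUs at all, here is a pure-Python sketch of the sampler's documented partitioning behaviour (padding with repeated indices when `drop_last=False`, which is the default); this is an approximation, not the actual DistributedSampler source:

```python
import math
import random

def rank_indices(n_items, num_replicas, rank, shuffle, seed=0, epoch=0):
    # Sketch of DistributedSampler's documented behaviour (drop_last=False):
    # optionally shuffle, pad so the list divides evenly, then stride by rank.
    indices = list(range(n_items))
    if shuffle:
        random.Random(seed + epoch).shuffle(indices)
    per_rank = math.ceil(n_items / num_replicas)
    total_size = per_rank * num_replicas
    indices += indices[: total_size - n_items]  # pad by repeating indices
    return indices[rank:total_size:num_replicas]

world_size = 4
per_rank = [rank_indices(10, world_size, r, shuffle=True) for r in range(world_size)]
flat = [i for r in per_rank for i in r]
print(len(flat), len(set(flat)))  # 12 draws over only 10 distinct files
```

With 10 files and 4 ranks there are 12 draws, so at least two files are seen twice per epoch; combined with an output path keyed by file name, that matches the overwritten-outputs symptom described above.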
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.19.1
[conda] libopenvino-pytorch-frontend 2024.4.0 hf9b8971_0 conda-forge
[conda] numpy 1.26.4 py311h7125741_0 conda-forge
[conda] pytorch 2.4.1 py3.11_0 pytorch
[conda] torchaudio 2.4.1 py311_cpu pytorch
[conda] torchmetrics 1.4.2 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.19.1 py311_cpu pytorch
cc @andrewkho @gokulavasan @SsnL @VitalyFedyunin @dzhulgakov | module: dataloader,triaged | low | Critical |
2,546,631,381 | ui | [bug]: fails to update tailwind.config.ts properly in new remix app | ### Describe the bug
in a brand new remix app:
```sh
npx create-remix@latest my-app
```
After running the shadcn install command:
```sh
npx shadcn@latest init
```
It borks the `tailwind.config.ts` file. This is what you end up with. The `fontFamily.sans` is not being updated correctly. I think it has something to do with the quotes. Notice there is no beginning `"` in front of "Inter".
```ts
import type { Config } from "tailwindcss";
export default {
darkMode: ["class"],
content: ["./app/**/{**,.client,.server}/**/*.{js,jsx,ts,tsx}"],
theme: {
extend: {
fontFamily: {
sans: [\n 'Inter"',\n "ui-sans-serif",\n "system-ui",\n "sans-serif",\n 'Apple Color Emoji"',\n 'Segoe UI Emoji"',\n 'Segoe UI Symbol"',\n 'Noto Color Emoji"',\n ]
},
borderRadius: {
lg: 'var(--radius)',
md: 'calc(var(--radius) - 2px)',
sm: 'calc(var(--radius) - 4px)'
},
colors: {
background: 'hsl(var(--background))',
foreground: 'hsl(var(--foreground))',
card: {
DEFAULT: 'hsl(var(--card))',
foreground: 'hsl(var(--card-foreground))'
},
popover: {
DEFAULT: 'hsl(var(--popover))',
foreground: 'hsl(var(--popover-foreground))'
},
primary: {
DEFAULT: 'hsl(var(--primary))',
foreground: 'hsl(var(--primary-foreground))'
},
secondary: {
DEFAULT: 'hsl(var(--secondary))',
foreground: 'hsl(var(--secondary-foreground))'
},
muted: {
DEFAULT: 'hsl(var(--muted))',
foreground: 'hsl(var(--muted-foreground))'
},
accent: {
DEFAULT: 'hsl(var(--accent))',
foreground: 'hsl(var(--accent-foreground))'
},
destructive: {
DEFAULT: 'hsl(var(--destructive))',
foreground: 'hsl(var(--destructive-foreground))'
},
border: 'hsl(var(--border))',
input: 'hsl(var(--input))',
ring: 'hsl(var(--ring))',
chart: {
'1': 'hsl(var(--chart-1))',
'2': 'hsl(var(--chart-2))',
'3': 'hsl(var(--chart-3))',
'4': 'hsl(var(--chart-4))',
'5': 'hsl(var(--chart-5))'
}
}
}
},
plugins: [require("tailwindcss-animate")],
} satisfies Config;
```
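For comparison, the presumably intended output (reconstructed by hand, so treat the exact font list as an assumption) would keep each family quoted on both sides:

```ts
fontFamily: {
  sans: [
    "Inter",
    "ui-sans-serif",
    "system-ui",
    "sans-serif",
    "Apple Color Emoji",
    "Segoe UI Emoji",
    "Segoe UI Symbol",
    "Noto Color Emoji",
  ],
},
```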
### Affected component/components
cli
### How to reproduce
1. `npx create-remix@latest my-app`
2. `cd my-app`
3. `npx shadcn@latest init`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Remix 2.12.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,546,636,307 | vscode | Undo stacks are not granular enough for Korean | Testing #229383
Right now, typing a sentence in Korean and pressing undo will **remove the entire line**. I confirmed with a native Korean speaker that the ideal behavior for undo is to remove the last consonant/vowel (each character is made up of 2-4 consonants/vowels). This might be difficult to do depending on how EditContext works, but a less ideal option that would still be much better than deleting the whole line is to delete the last fully formed character.

Note that this is not a regression, but it's a pretty bad existing experience.
| bug,editor-input-IME,undo-redo | low | Minor |
2,546,642,066 | PowerToys | Mouse Without Borders does not connect | ### Microsoft PowerToys version
v0.84.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
It does not connect even after uninstalling
### ✔️ Expected Behavior
I want it to connect
### ❌ Actual Behavior
The laptop side responds normally, and generating a new key works fine there as well
Only the desktop side does not respond to generating a new key, and the connection cannot be verified
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Mouse Without Borders | low | Minor |
2,546,660,178 | pytorch | [ONNX] Single model export for HF models prompt and token phase | ### 🚀 The feature, motivation and pitch
The kv_cache is used only during the token phase, not during the prompt phase. As a result, the exported model currently works only with one of these phases, depending on the example_inputs provided. There is no direct way to export a model that supports both phases simultaneously.
Efforts have been made outside this project to address this issue, such as in [this pull request](https://github.com/huggingface/optimum/pull/1257) on the Hugging Face Optimum repository. It would be ideal to have this functionality supported natively by the exporter, allowing users to avoid extra dependencies and offering greater flexibility in modifying the source model before exporting.
cc @justinchuby, @xadupre, @titaiwangms, @shubhambhokare1
### Alternatives
_No response_
### Additional context
_No response_ | module: onnx,triaged | low | Minor |
2,546,682,997 | godot | VisualShader: Crash when converting float constants to parameters then undoing and redoing | ### Tested versions
4.3stable
master (custom build based on https://github.com/godotengine/godot/commit/c3e16cda00a9fbec4515142f4c59bc5134f1bfb0)
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 Ti with Max-Q Design (NVIDIA; 32.0.15.5612) - AMD Ryzen 7 3750H with Radeon Vega Mobile Gfx (8 Threads)
### Issue description
When converting one or more FloatConstants to FloatParameters in VisualShader, then undoing and redoing, Godot will crash.
Crash seems to relate to the undo/redo code in https://github.com/godotengine/godot/blob/c3e16cda00a9fbec4515142f4c59bc5134f1bfb0/editor/plugins/visual_shader_editor_plugin.cpp#L4403
This log is from a custom build. I was making changes in this `_convert_constants_to_parameters` function, but the crash still happens when I remove my changes, and happens consistently in 4.3 as well. (To be clear, the log is from a build with no changes to the function)
```
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_glyph_count (modules\text_server_adv\text_server_adv.cpp:6554)
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_glyphs (modules\text_server_adv\text_server_adv.cpp:6543)
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_glyph_count (modules\text_server_adv\text_server_adv.cpp:6554)
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_glyphs (modules\text_server_adv\text_server_adv.cpp:6543)
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_glyph_count (modules\text_server_adv\text_server_adv.cpp:6554)
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_glyphs (modules\text_server_adv\text_server_adv.cpp:6543)
ERROR: Parameter "sd" is null.
at: TextServerAdvanced::_shaped_text_get_size (modules\text_server_adv\text_server_adv.cpp:6645)
================================================================
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (0c6c836a95d02191d46af3c684f1508eefbde1fe)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] StyleBox::get_margin (C:\Users\admin\Documents\GitHub\godot\scene\resources\style_box.cpp:81)
[1] LineEdit::set_caret_column (C:\Users\admin\Documents\GitHub\godot\scene\gui\line_edit.cpp:1920)
[2] LineEdit::insert_text_at_caret (C:\Users\admin\Documents\GitHub\godot\scene\gui\line_edit.cpp:1998)
[3] LineEdit::set_text (C:\Users\admin\Documents\GitHub\godot\scene\gui\line_edit.cpp:1747)
[4] VisualShaderGraphPlugin::set_parameter_name (C:\Users\admin\Documents\GitHub\godot\editor\plugins\visual_shader_editor_plugin.cpp:327)
[5] VisualShaderEditor::_update_parameter (C:\Users\admin\Documents\GitHub\godot\editor\plugins\visual_shader_editor_plugin.cpp:4403)
[6] call_with_variant_args_helper<VisualShaderEditor,enum VisualShader::Type,int,Variant const &,int,0,1,2,3> (C:\Users\admin\Documents\GitHub\godot\core\variant\binder_common.h:304)
[7] call_with_variant_args_dv<VisualShaderEditor,enum VisualShader::Type,int,Variant const &,int> (C:\Users\admin\Documents\GitHub\godot\core\variant\binder_common.h:451)
[8] MethodBindT<VisualShaderEditor,enum VisualShader::Type,int,Variant const &,int>::call (C:\Users\admin\Documents\GitHub\godot\core\object\method_bind.h:347)
[9] Object::callp (C:\Users\admin\Documents\GitHub\godot\core\object\object.cpp:813)
[10] Callable::callp (C:\Users\admin\Documents\GitHub\godot\core\variant\callable.cpp:69)
[11] CallableCustomBind::call (C:\Users\admin\Documents\GitHub\godot\core\variant\callable_bind.cpp:152)
[12] Callable::callp (C:\Users\admin\Documents\GitHub\godot\core\variant\callable.cpp:57)
[13] UndoRedo::_process_operation_list (C:\Users\admin\Documents\GitHub\godot\core\object\undo_redo.cpp:365)
[14] UndoRedo::_redo (C:\Users\admin\Documents\GitHub\godot\core\object\undo_redo.cpp:82)
[15] UndoRedo::redo (C:\Users\admin\Documents\GitHub\godot\core\object\undo_redo.cpp:423)
[16] EditorUndoRedoManager::redo_history (C:\Users\admin\Documents\GitHub\godot\editor\editor_undo_redo_manager.cpp:349)
[17] EditorUndoRedoManager::redo (C:\Users\admin\Documents\GitHub\godot\editor\editor_undo_redo_manager.cpp:336)
[18] EditorNode::_menu_option_confirm (C:\Users\admin\Documents\GitHub\godot\editor\editor_node.cpp:2860)
[19] EditorNode::_menu_option (C:\Users\admin\Documents\GitHub\godot\editor\editor_node.cpp:1445)
[20] call_with_variant_args_helper<EditorNode,int,0> (C:\Users\admin\Documents\GitHub\godot\core\variant\binder_common.h:304)
[21] call_with_variant_args<EditorNode,int> (C:\Users\admin\Documents\GitHub\godot\core\variant\binder_common.h:418)
[22] CallableCustomMethodPointer<EditorNode,int>::call (C:\Users\admin\Documents\GitHub\godot\core\object\callable_method_pointer.h:103)
[23] Callable::callp (C:\Users\admin\Documents\GitHub\godot\core\variant\callable.cpp:57)
[24] Object::emit_signalp (C:\Users\admin\Documents\GitHub\godot\core\object\object.cpp:1201)
[25] Node::emit_signalp (C:\Users\admin\Documents\GitHub\godot\scene\main\node.cpp:3975)
[26] Object::emit_signal<int> (C:\Users\admin\Documents\GitHub\godot\core\object\object.h:921)
[27] PopupMenu::activate_item (C:\Users\admin\Documents\GitHub\godot\scene\gui\popup_menu.cpp:2437)
[28] PopupMenu::activate_item_by_event (C:\Users\admin\Documents\GitHub\godot\scene\gui\popup_menu.cpp:2359)
[29] MenuBar::shortcut_input (C:\Users\admin\Documents\GitHub\godot\scene\gui\menu_bar.cpp:167)
[30] Node::_call_shortcut_input (C:\Users\admin\Documents\GitHub\godot\scene\main\node.cpp:3435)
[31] SceneTree::_call_input_pause (C:\Users\admin\Documents\GitHub\godot\scene\main\scene_tree.cpp:1300)
[32] Viewport::_push_unhandled_input_internal (C:\Users\admin\Documents\GitHub\godot\scene\main\viewport.cpp:3220)
[33] Viewport::push_input (C:\Users\admin\Documents\GitHub\godot\scene\main\viewport.cpp:3182)
[34] Window::_window_input (C:\Users\admin\Documents\GitHub\godot\scene\main\window.cpp:1680)
[35] call_with_variant_args_helper<Window,Ref<InputEvent> const &,0> (C:\Users\admin\Documents\GitHub\godot\core\variant\binder_common.h:304)
[36] call_with_variant_args<Window,Ref<InputEvent> const &> (C:\Users\admin\Documents\GitHub\godot\core\variant\binder_common.h:418)
[37] CallableCustomMethodPointer<Window,Ref<InputEvent> const &>::call (C:\Users\admin\Documents\GitHub\godot\core\object\callable_method_pointer.h:103)
[38] Callable::callp (C:\Users\admin\Documents\GitHub\godot\core\variant\callable.cpp:57)
[39] Callable::call<Ref<InputEvent> > (C:\Users\admin\Documents\GitHub\godot\core\variant\variant.h:875)
[40] DisplayServerWindows::_dispatch_input_event (C:\Users\admin\Documents\GitHub\godot\platform\windows\display_server_windows.cpp:3733)
[41] DisplayServerWindows::_dispatch_input_events (C:\Users\admin\Documents\GitHub\godot\platform\windows\display_server_windows.cpp:3703)
[42] Input::_parse_input_event_impl (C:\Users\admin\Documents\GitHub\godot\core\input\input.cpp:803)
[43] Input::flush_buffered_events (C:\Users\admin\Documents\GitHub\godot\core\input\input.cpp:1084)
[44] DisplayServerWindows::process_events (C:\Users\admin\Documents\GitHub\godot\platform\windows\display_server_windows.cpp:3183)
[45] OS_Windows::run (C:\Users\admin\Documents\GitHub\godot\platform\windows\os_windows.cpp:1771)
[46] widechar_main (C:\Users\admin\Documents\GitHub\godot\platform\windows\godot_windows.cpp:180)
[47] _main (C:\Users\admin\Documents\GitHub\godot\platform\windows\godot_windows.cpp:206)
[48] main (C:\Users\admin\Documents\GitHub\godot\platform\windows\godot_windows.cpp:220)
[49] WinMain (C:\Users\admin\Documents\GitHub\godot\platform\windows\godot_windows.cpp:234)
[50] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[51] <couldn't map PC to fn name>
-- END OF BACKTRACE --
```
### Steps to reproduce
Create a new visual shader.
Add a FloatConstant node.
Select the node, right click and "Convert Constant(s) to Parameter(s)".
Undo with `ctrl+z`.
Redo with `ctrl+y`.
Observe crash.
It may be easier to reproduce if you add several FloatConstants and mass convert.
Repeatedly undoing and redoing may also help reproduce.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,crash,topic:shaders | low | Critical |
2,546,707,105 | ant-design | When dynamic rules is set to an empty array, previously reported validation errors cannot be cleared | ### Reproduction link
https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-sr3vgc?workspaceId=f8c8b8be-9c36-4d85-adcf-c7766abab738
### Steps to reproduce
1. Check "nickname requires validation"; the nickname error message is then displayed
2. Uncheck "nickname requires validation"; the nickname error message is not cleared

### What is expected?
When rules is an empty array, the error message should be correctly removed
### What is actually happening?
When rules is an empty array, the already-reported error message cannot be removed
| Environment | Info |
| --- | --- |
| antd | 4.24.16 |
| React | 17.x |
| System | mac os |
| Browser | chrome |
---
Stepping through with breakpoints, I found that this error comes from rc-form-field/src/useForm.ts#904: when the rules are empty, the validation logic is never triggered again, so existing errors are not cleared. The code is as follows:

### Current temporary workaround
```ts
/**
 * Marker identifying an empty rule
 */
const EMPTY_RULE_SYMBOL = Symbol('EMPTY_SYMBOL');
/**
 * Works around the rc-form-field validation bug;
 * see rc-form-field/src/useForm.ts#904 for the relevant code
 * @returns
 */
export const createEmptyRule = (): any => {
  return {
    [EMPTY_RULE_SYMBOL]: true,
  };
};
// when the rules need to be empty
setRules([createEmptyRule()]);
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive | low | Critical |
2,546,757,551 | langchain | Upgraded to v0.3. Encountering Exception: 'RunnableSequence' object has no attribute 'get' when instantiating ReduceDocumentsChain | ### Discussed in https://github.com/langchain-ai/langchain/discussions/26785
<div type='discussions-op-text'>
<sup>Originally posted by **dubbl-d** September 23, 2024</sup>
### Checked other resources
- [X] I added a very descriptive title to this question.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
reduce_prompt = PromptTemplate.from_template(
    template=topic_summary_reduce_prompt,
    partial_variables={
        "format_instructions": transcript_summary_parser.get_format_instructions()
    },
)
combine_documents_chain = create_stuff_documents_chain(
    self.llm, reduce_prompt, document_variable_name="topic_summaries"
)
reduce_documents_chain = ReduceDocumentsChain(
    combine_documents_chain=combine_documents_chain,
    collapse_documents_chain=combine_documents_chain,
    token_max=4000,
    verbose=True,
)
```
### Description
I am trying to get rid of deprecation warnings and one of the things I have to do is replace the StuffDocumentsChain with the preferred `create_stuff_documents_chain(...)` method. In the code above, I am getting an exception in pydantic validation code that is looking for a `get` attribute on the `RunnableSequence` object. I have checked with the 0.3 langchain docs to make sure my dependencies are correct and I don't reference any Pydantic v1 objects anywhere. This exception happens in the `ReduceDocumentsChain` constructor/__init__ method when it calls pydantic to validate attributes. It isn't clear where or why it is looking to validate a `get` attribute.
My dependencies and system info are in the section below. Here is the stack trace that I am getting from LangSmith traces:
```
AttributeError("'RunnableSequence' object has no attribute 'get'")
Traceback (most recent call last):
  File "/Users/dubbled/eng/inference/llm/llm_inferences.py", line 108, in infer_transcript_topic_summary
    reduce_documents_chain = ReduceDocumentsChain(combine_documents_chain=combine_documents_chain,
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dubbled/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 112, in __init__
    super().__init__(*args, **kwargs)
  File "/Users/dubbled/.pyenv/versions/3.11.7/lib/python3.11/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dubbled/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py", line 236, in raise_callback_manager_deprecation
    if values.get("callback_manager") is not None:
       ^^^^^^^^^^
  File "/Users/dubbled/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5704, in __getattr__
    attr = getattr(self.bound, name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dubbled/.pyenv/versions/3.11.7/lib/python3.11/site-packages/pydantic/main.py", line 856, in __getattr__
    raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'RunnableSequence' object has no attribute 'get'
```
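The underlying mismatch seems to be that `create_stuff_documents_chain` returns an LCEL `RunnableSequence`, while the legacy `ReduceDocumentsChain` validator assumes its `values` behave like a dict. A toy reconstruction of the failure mode (the class and function names here are stand-ins, not the real LangChain types):

```python
class RunnableLike:
    """Stand-in for the RunnableSequence returned by create_stuff_documents_chain."""

def raise_callback_manager_deprecation(values):
    # Mirrors the shape of the validator in langchain.chains.base:
    # it assumes `values` supports dict-style .get(...)
    if values.get("callback_manager") is not None:
        raise ValueError("callback_manager is deprecated")

err = None
try:
    raise_callback_manager_deprecation(RunnableLike())
except AttributeError as exc:
    err = exc

print(err)  # 'RunnableLike' object has no attribute 'get'
```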
### System Info
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:32:11 PDT 2023; root:xnu-10002.41.9~7/RELEASE_ARM64_T6030
> Python Version: 3.11.7 (main, Jan 10 2024, 14:29:12) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.3.1
> langchain: 0.3.0
> langsmith: 0.1.122
> langchain_cli: 0.0.21
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.3.0
> langchainhub: 0.1.14
> langserve: 0.0.41
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> aiohttp: 3.9.2
> async-timeout: 4.0.3
> fastapi: 0.109.2
> gitpython: 3.1.43
> httpx: 0.26.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.46.0
> orjson: 3.9.14
> packaging: 23.2
> pydantic: 2.9.2
> PyYAML: 6.0.1
> requests: 2.32.3
> SQLAlchemy: 2.0.27
> sse-starlette: 1.8.2
> tenacity: 8.2.3
> tiktoken: 0.7.0
> tomlkit: 0.12.4
> typer[all]: Installed. No version info available.
> types-requests: 2.31.0.20240125
> typing-extensions: 4.12.2
> uvicorn: 0.23.2 | 🤖:bug,investigate,Ɑ: core | low | Critical |
2,546,759,749 | ollama | llama runner process has terminated: exit status 0xc0000005 | ### What is the issue?
This is a recurrence of https://github.com/ollama/ollama/issues/6011.
**The issue occurs on an embedding call with a model converted using convert_hf_to_gguf.py.**
litellm.llms.ollama.OllamaError: {"error":"llama runner process has terminated: exit status 0xc0000005"}
```
INFO [wmain] system info | n_threads=6 n_threads_batch=6 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="18380" timestamp=1727231008 total_threads=12
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="13505" tid="18380" timestamp=1727231008
llama_model_loader: loaded meta data with 26 key-value pairs and 389 tensors from C:\Users\Administrator\.ollama\models\blobs\sha256-aad91e93e9ec705a527cfa8701698055cf473223437acd029762bb77be6fc92d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Conan_Embedding_V1
llama_model_loader: - kv 3: general.size_label str = 324M
llama_model_loader: - kv 4: general.license str = cc-by-nc-4.0
llama_model_loader: - kv 5: general.tags arr[str,1] = ["mteb"]
llama_model_loader: - kv 6: bert.block_count u32 = 24
llama_model_loader: - kv 7: bert.context_length u32 = 512
llama_model_loader: - kv 8: bert.embedding_length u32 = 1024
llama_model_loader: - kv 9: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 10: bert.attention.head_count u32 = 16
llama_model_loader: - kv 11: bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 12: general.file_type u32 = 1
llama_model_loader: - kv 13: bert.attention.causal bool = false
llama_model_loader: - kv 14: bert.pooling_type u32 = 1
llama_model_loader: - kv 15: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 16: tokenizer.ggml.model str = bert
llama_model_loader: - kv 17: tokenizer.ggml.pre str = Conan-embedding-v1
llama_model_loader: - kv 18: tokenizer.ggml.tokens arr[str,21128] = ["[PAD]", "[unused1]", "[unused2]", "...
llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,21128] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 21: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 23: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 24: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 244 tensors
llama_model_loader: - type f16: 145 tensors
llm_load_vocab: special tokens cache size = 5
llm_load_vocab: token to piece cache size = 0.0769 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 21128
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 512
llm_load_print_meta: n_embd = 1024
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 4096
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 512
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 335M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 324.47 M
llm_load_print_meta: model size = 620.50 MiB (16.04 BPW)
llm_load_print_meta: general.name = Conan_Embedding_V1
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.16 MiB
llm_load_tensors: CPU buffer size = 620.50 MiB
time=2024-09-25T10:23:28.796+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 192.00 MiB
llama_new_context_with_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: CPU compute buffer size = 26.00 MiB
llama_new_context_with_model: graph nodes = 851
llama_new_context_with_model: graph splits = 1
time=2024-09-25T10:23:30.338+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-09-25T10:23:31.963+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
time=2024-09-25T10:23:32.226+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000005"
[GIN] 2024/09/25 - 10:23:32 | 500 | 3.7323168s | 127.0.0.1 | POST "/api/embed"
```
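For reference, a self-contained, stdlib-only sketch of the `/api/embed` request that produces the 500 response in the log above. The request is built but not sent; the model tag `conan-embedding-v1` is an assumption based on the converted GGUF shown in the log, so substitute your own tag.

```python
import json
import urllib.request

def build_embed_request(host, model, texts):
    """Prepare (but do not send) a POST to Ollama's /api/embed endpoint."""
    payload = json.dumps({"model": model, "input": texts}).encode("utf-8")
    return urllib.request.Request(
        url=f"{host}/api/embed",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_embed_request(
    "http://127.0.0.1:11434", "conan-embedding-v1", ["hello world"]
)
print(req.full_url)          # http://127.0.0.1:11434/api/embed
print(json.loads(req.data))  # {'model': 'conan-embedding-v1', 'input': ['hello world']}
```

Sending this request against the converted model is what comes back as `{"error":"llama runner process has terminated: exit status 0xc0000005"}`.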
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.11 0.3.12 | bug,model request | low | Critical |
2,546,765,963 | PowerToys | I recommend adding a configuration item that ignores case when the user enters instructions | ### Description of the new feature / enhancement
I want typed instructions to be matched case-insensitively.
### Scenario when this would be used?
when typing instructions
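To illustrate the requested setting, a small sketch (plain Python, not PowerToys code) of a single `ignore_case` configuration flag that controls how a typed instruction is matched against a known command:

```python
def matches(typed, command, ignore_case=True):
    """Compare a typed instruction against a command, optionally ignoring case."""
    if ignore_case:
        # casefold() is a stricter lower() that also handles non-ASCII letters.
        return typed.casefold() == command.casefold()
    return typed == command

print(matches("Settings", "settings"))                     # True
print(matches("Settings", "settings", ignore_case=False))  # False
```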
### Supporting information
Current situation


| Needs-Triage | low | Minor |
2,546,805,002 | tauri | [feat] When there are multiple webviews, how do I control the hierarchical relationship of each webview? | ### Describe the problem
After creating multiple webviews, I cannot control the stacking (z-order) relationship between them. For example, I need webview A to be displayed above webview B, or webview C to be displayed above webview A.
Note: this scenario mainly involves a menu webview and a main webview; the menu webview needs to always be displayed on top of the other webviews.
### Describe the solution you'd like
It can be controlled through settings, such as webview.level(2). The larger the value, the higher the level.
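To make the proposed behaviour concrete, an illustrative sketch (plain Python, not the Tauri API): each webview carries a numeric level, and the compositor paints them in ascending level order, so higher levels end up on top.

```python
class Webview:
    def __init__(self, name, level=0):
        self.name = name
        self.level = level

    def set_level(self, level):
        # The proposed API: a larger value means a higher position in the stack.
        self.level = level

def paint_order(webviews):
    """Return names back-to-front; ties keep insertion order (stable sort)."""
    return [w.name for w in sorted(webviews, key=lambda w: w.level)]

main, menu, overlay = Webview("main"), Webview("menu"), Webview("overlay")
menu.set_level(2)     # the menu webview should always sit above the others
overlay.set_level(1)
print(paint_order([main, menu, overlay]))  # ['main', 'overlay', 'menu']
```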
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request,scope: unstable flag | low | Minor |
2,546,841,199 | pytorch | automatic_dynamic_shapes for mark_unbacked | ### 🐛 Describe the bug
@jansel and I talked about the exponential specialization problem from 0/1 specialization way back at the very beginning of PT2. Well, I've finally found a case of this actually happening in prod.
Internal xref: https://fb.workplace.com/groups/6829516587176185/posts/1509273649710841/
The example here is that there are a number of tensor inputs whose sizes range from 0 to N. Usually these are well above 0/1, but very occasionally one of the inputs has size 0 or 1. This triggers a recompilation. It happens rarely enough that a recompilation only occurs every few hours, and the compilation is relatively quick (so no NCCL timeout), so you end up with a lot of compiles.
Is there some way to potentially detect this situation and automatically apply mark_unbacked? Not sure...
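As a rough sketch of what such detection could look like (illustrative only, not PT2 internals): track the sizes observed per input across recompiles, and flag an input as a mark_unbacked candidate once it has been seen both at a 0/1 size and well above it, since 0/1 specialization would otherwise force a fresh recompile each time.

```python
from collections import defaultdict

class UnbackedDetector:
    """Hypothetical heuristic: flag inputs that oscillate across the 0/1 boundary."""

    def __init__(self):
        self.seen = defaultdict(set)

    def record(self, name, size):
        # Called once per (re)compile with the observed input size.
        self.seen[name].add(size)

    def should_mark_unbacked(self, name):
        sizes = self.seen[name]
        return any(s <= 1 for s in sizes) and any(s > 1 for s in sizes)

det = UnbackedDetector()
for size in [37, 512, 128]:
    det.record("lengths", size)
print(det.should_mark_unbacked("lengths"))  # False: always well above 0/1
det.record("lengths", 0)                    # the rare empty input arrives
print(det.should_mark_unbacked("lengths"))  # True: candidate for mark_unbacked
```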
### Versions
main
cc @chauhang @penguinwu | triaged,module: dynamic shapes | low | Critical |
2,546,870,264 | vscode | Misalignment of Running script with Output text in Terminal | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1
- OS Version: Windows 10 Pro 22H2
- Screen resolution: 1366 x 768 (sharing as I don't have access to other resolutions or better hardware)
Steps to Reproduce:
1. Open the sidebar and/or panel to see the output.
2. Run a simple program.
3. Check the terminal for misalignment between the echoed Python run command and the program's output.
4. Problem fixes itself if the size of the panel is adjusted by dragging the border.
### Screenshots
Before dragging:

After dragging:

Same in sidebar:

Sidebar after dragging:

Code for the simple python program:
```py
import pandas as pd
import numpy as np
a = {'A':1, 'B':2, 'C':3, 'D':4}
s = pd.Series(a)
print(s)
``` | bug,terminal,confirmation-pending,python | low | Critical |