| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,465,523,545 | angular | devtools injector tree tab doesn't show a provider from root injector | ### Which @angular/* package(s) are the source of the bug?
devtools
### Is this a regression?
Yes
### Description
**A provider within the root injector should be shown, but it's not.** The injectable to be found within devtools tab is called `SignalStore` (created by `@ngrx/signals` package).
REPRO steps:
- run the app
- click the "EMPLOYEES" link (to enable routing and display components)
SETUP
- the SignalStore is injected in only 1 place:
- employee-listing component
- the SignalStore is provided in 4 places:
1. employee-listing component
2. employee-page component (parent)
3. route (app.routes.ts)
4. in root injector
While browsing through the "injector tree" tab I can see all of them EXCEPT the root one:
(1)
https://github.com/ducin/angular-bug-repros/blob/devtools_injector_tree_provider_missing_in_root_injector/src/app/employees/employee-listing.component.ts#L24

(2)
https://github.com/ducin/angular-bug-repros/blob/devtools_injector_tree_provider_missing_in_root_injector/src/app/employees/employee-page.component.ts#L21

(3)
https://github.com/ducin/angular-bug-repros/blob/devtools_injector_tree_provider_missing_in_root_injector/src/app/app.routes.ts#L17

BUT for the root injector (4) it's not there:
https://github.com/ducin/angular-bug-repros/blob/devtools_injector_tree_provider_missing_in_root_injector/src/app/employees/employee-store.ts#L35

### Please provide a link to a minimal reproduction of the bug
https://github.com/ducin/angular-bug-repros/tree/devtools_injector_tree_provider_missing_in_root_injector
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
```text
angular v18.1.3, devtools: 1.0.17
node: v20.13.1
npm: 10.5.2
os: macos sonoma v14.5
```
### Anything else?
There is a slight possibility that this is related to ngrx/signals itself, but, based on the code below, I don't think so...
https://github.com/ngrx/platform/blob/main/modules/signals/src/signal-store.ts#L1344C9-L1350
CC @markostanimirovic
Also, when all the other providers are removed (1: component, 2: parent component, 3: route) and only `providedIn: 'root'` is left, the SignalStore provider is visible

**however, this is NOT expected behavior** | area: devtools | low | Critical |
2,465,610,155 | vscode | Error in "diffEditor.revert" command when invoked though keybinding | Type: <b>Bug</b>
Tried this on stable and insider version of vscode with disabled extensions.
Open some dir with two files, open the first file, run the 'Compare with ...' command so the diff editor opens, put the cursor on any difference, see the right-arrow icon between the split panes, right-click on it and assign a keybinding, go back to the split pane, press the assigned key, and see the error: "Cannot read properties of undefined (reading 'originalUri')"
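For reference, a binding like the following (the key combination is just an example) reproduces the error when pressed inside the diff editor, presumably because the command then runs without the arguments the arrow button normally supplies:

```jsonc
// keybindings.json — example binding; the key combo is arbitrary
{
  "key": "ctrl+alt+r",
  "command": "diffEditor.revert"
}
```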
VS Code version: Code - Insiders 1.93.0-insider (b45f04309feede3182ac4a7b945df1e64663a222, 2024-08-14T05:03:36.595Z)
OS version: Linux x64 6.8.0-40-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 5 4600H with Radeon Graphics (12 x 3977)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|2, 1, 1|
|Memory (System)|15.00GB (2.87GB free)|
|Process Argv|--disable-extensions --user-data-dir /tmp/vscode-ins --crash-reporter-id 9e3e11ab-a688-49f4-a1e5-3de3e12bca29|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu-xorg|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu-xorg|
|XDG_SESSION_TYPE|x11|
</details>
Extensions: disabled
<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonregdiag2:30926734
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
g316j359:31013175
a69g1124:31018687
dvdeprecation:31040973
dwnewjupyter:31046869
legacy_priority:31057981
nativerepl1:31104042
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31105043
wkspc-onlycs-c:31111717
nativeloc1:31111755
wkspc-ranged-t:31111713
cf971741:31111988
jh802675:31111929
```
</details>
<!-- generated by issue reporter -->

| bug,diff-editor | low | Critical |
2,465,653,191 | react | [React 19] No re-render after 'useActionState' action queue finishes | ## Summary
When multiple client-side actions are scheduled via `useActionState`, the "action queue" promises are processed sequentially as expected. However, after the last action promise resolves, the component is not re-rendered. This means the component is stuck in "loading" without access to "data".
### Steps to reproduce
1. Open the demo here: https://codesandbox.io/p/sandbox/use-action-state-stuck-xl72xk?file=%2Fsrc%2FApp.js
2. Click the "Send request" button two times.
3. After 10 seconds (each request is 5 seconds and processed sequentially), the component still shows "Loading..." and not the dummy data (as I would expect).
#### Notes
* When only one "action" is scheduled (button clicked once), the component re-renders when the action is done, as expected.
* The promise "delay" seems to have an effect. When `REQUEST_DELAY` is set lower, e.g. 1000 ms, this "issue" is not present.
* Possibly related: https://github.com/facebook/react/issues/27630
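For context, the sequential queueing can be modeled outside React with plain promises. This is a conceptual sketch only, not React's actual implementation; `makeActionQueue` is an illustrative helper. The expectation is that once the last chained promise settles, the pending flag drops and a re-render with the final state should follow:

```javascript
// Conceptual model of useActionState's action queue (illustrative, NOT React's code):
// each dispatched action awaits the previous one; when the last settles,
// `isPending` becomes false and the final state should be rendered.
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

function makeActionQueue(initialState) {
  let state = initialState;
  let pending = Promise.resolve();
  let inFlight = 0;
  return {
    dispatch(action) {
      inFlight++;
      pending = pending.then(async () => {
        state = await action(state);
        inFlight--; // when this reaches 0, the final re-render is expected
      });
      return pending;
    },
    get state() { return state; },
    get isPending() { return inFlight > 0; },
  };
}

// Two overlapping dispatches, as in the repro (button clicked twice):
async function demo() {
  const q = makeActionQueue(null);
  const action = async (prev) => { await delay(50); return (prev ?? 0) + 1; };
  q.dispatch(action);
  await q.dispatch(action); // resolves only after both actions ran sequentially
  console.log(q.state, q.isPending); // → 2 false
}
demo();
```

In this model the final state is always observable after the last dispatch resolves; the bug report is that React's real queue stops re-rendering at that point.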
Is this behavior intentional?
| Type: Bug,React 19 | medium | Major |
2,465,670,108 | stable-diffusion-webui | [Bug]: Runtime error , torch is not able to use GPU | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I'm sorry, I'm French and a total newbie here.
I'd like to install Stable Diffusion just to create AI images for fun :)
I tried my best to follow this tutorial: https://www.youtube.com/watch?v=onmqbI5XPH8&t=4s
I've been trying for 3 days now to install and uninstall it, and I've run into a lot of different problems (Python location, webui-user.bat...).
Today I followed the video below. It went fine until the point where it failed with the error in the title.

As someone suggested, I tried to add a line to the webui-user.bat file to skip the conda check, and even deleted the venv folder so it would be recreated. I updated the pip files etc. It's really infuriating: every time I'm about to manage to install it, there's always an error somewhere... The installation is clean...
### Steps to reproduce the problem
As the YouTube video shows...
### What should have happened?
The installation should have finished and given me a URL address in the cmd window to open Stable Diffusion...
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
?
### Console logs
```Shell
?
```
### Additional information
I'm sorry, I'm unable to reply to the above questions because I'm totally new to this and didn't find the console log or the sysinfo. | bug-report | low | Critical |
2,465,703,798 | rust | Tracking Issue for box_as_ptr | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(box_as_ptr)]`
This is a tracking issue for `Box::as_ptr` and `Box::as_mut_ptr`.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
```rust
impl<T: ?Sized, A: Allocator> Box<T, A> {
pub fn as_mut_ptr(b: &mut Self) -> *mut T;
pub fn as_ptr(b: &Self) -> *const T;
}
```
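For illustration, the pattern these methods package can be written on stable Rust with a reborrow cast. My understanding from the ACP is that the feature-gated methods exist so the raw pointer can be obtained without the intermediate `&mut` reference (which, under Stacked Borrows, asserts uniqueness over the pointee); this sketch is the manual stable-Rust equivalent, not the std implementation:

```rust
fn main() {
    let mut b: Box<i32> = Box::new(1);

    // Manual equivalent of `Box::as_mut_ptr(&mut b)`: a raw pointer to the
    // heap value. The unstable method avoids the intermediate `&mut` reborrow.
    let p: *mut i32 = &mut *b as *mut i32;

    unsafe { *p = 2 };
    assert_eq!(*b, 2); // the write through the raw pointer is visible via the Box
    println!("{}", *b);
}
```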
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] ACP: https://github.com/rust-lang/libs-team/issues/355
- [ ] Implementation: https://github.com/rust-lang/rust/pull/129091
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| C-tracking-issue,T-libs,A-box | low | Minor |
2,465,713,254 | vscode | `Load More Stack Frames` renders poorly when horizontalScrolling enabled | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: N/A (need *some* debugger extension to test)
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.1
- OS Version: Arch Linux x64 6.6.44-3-lts
Steps to Reproduce:
1. Open the debugger for any language.
2. Pause on a breakpoint. Either with a deep enough stack so that it doesn't load fully and shows `Show N More Stack Frames` message. OR some debugger extensions apparently don't load the stack at all by default, and display `Load More Stack Frames` after the first entry.
3. Make sure the stack frames already shown have very long names.
4. Observe that the `Load More Stack Frames` (or `Show N More Stack Frames`) is centered relative to the longest frame name, making it completely invisible if frame names are long and the panel is narrow. See picture:
Completely invisible on small panel widths:

Only starts appearing on large panel widths:

I suggest that this text should be centered based on the physical panel width (ignoring the scroll). | bug,debug | low | Critical |
2,465,726,278 | vscode | Drag sideways on minimap to pan? | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Drag sideways on minimap code/outline to scroll sideways.
Possibly hold down `Ctrl` key or similar to enable this behaviour to prevent unintentional sideways scrolling.
I think it would be 🌈**amazing**🌈 for wide documents :D ❤️
| feature-request,editor-minimap | low | Minor |
2,465,813,918 | godot | C# Compute Shaders run significantly slower than GDScript compute shaders. (Godot 4.3 RC3) | ### Tested versions
- Reproducible in Godot 4.3 RC3
### System information
EndeavorOS Linux (Arch based). CPU - Intel i7-1165G7 (iGPU). Drivers - vulkan-intel/mesa
### Issue description
The project uses a GPU Poisson disk sampling shader. When running it via the C# version of `rd.Submit()`, each shader dispatch takes upwards of 300-500 ms.
When calling it from a GDScript version, though (even from inside a C# program), each shader runs in < 5 ms.
That's a huge discrepancy, and I'm not entirely sure if this is a bug or just a current limitation of the C# implementation.
### Steps to reproduce
Load up the MRP and, in Main, toggle the "Gd Script Shader" export variable on and off. The time elapsed in ms is posted to the Output terminal. The GDScript version of the shader code runs in < 150 ms total, whereas the C# version takes almost 3.5 s.
Both use the same glsl file.
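A minimal sketch of how such a dispatch can be timed on the GDScript side (illustrative only, not the MRP's code; it assumes `rd` is a local `RenderingDevice` and the compute pipeline and uniform set were already created):

```gdscript
# Illustrative sketch — assumes `rd`, `pipeline`, `uniform_set`, and
# `groups_x` were set up beforehand (not taken from the MRP).
var t0 := Time.get_ticks_msec()
var compute_list := rd.compute_list_begin()
rd.compute_list_bind_compute_pipeline(compute_list, pipeline)
rd.compute_list_bind_uniform_set(compute_list, uniform_set, 0)
rd.compute_list_dispatch(compute_list, groups_x, 1, 1)
rd.compute_list_end()
rd.submit()
rd.sync()
print("dispatch took %d ms" % (Time.get_ticks_msec() - t0))
```

The C# path goes through the same `RenderingDevice` methods (`rd.Submit()`, `rd.Sync()`), which is what makes the timing gap surprising.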
### Minimal reproduction project (MRP)
[Archive.zip](https://github.com/user-attachments/files/16613598/Archive.zip)
| bug,topic:rendering,topic:dotnet,performance | low | Critical |
2,465,850,801 | ui | [bug]: react-hook-form Controller and Select Component Issue: Value Resets to Empty String on Selection (Working in Sandbox, Failing in Local Environment) | ### Describe the bug
I'm working on a React project using **react-hook-form** along with the Controller component to manage a form with Select components. However, I'm running into some issues:
# **Initial Form Submission:**
When I submit the form without interacting with the Select components, everything works as expected, and the default values are submitted correctly.
# **Issue After Selection:**
If I interact with any Select component (e.g., propertyType) and make a selection, the value for that field in the submitted form data becomes an empty string.
# **Placeholder Issue:**
The SelectTrigger does not show the default value or selected option. It only shows the placeholder text ("Select a property type") after making a selection, and even then, it does not display the correct option.
**My Code :**
```jsx
"use client";
import { useForm, Controller } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import * as z from "zod";
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { Form, FormField, FormItem, FormLabel, FormControl, FormMessage } from "@/components/ui/form";
const formSchema = z.object({
location: z.string().min(1, "Location is required"),
propertyType: z.string(),
purpose: z.string(),
priceRange: z.string(),
beds: z.string(),
filters: z.string(),
});
const propertyTypeOptions = [
{ value: "residential", label: "All in Residential" },
// Add more options as needed
];
export default function PropertyFilter() {
const form = useForm({
resolver: zodResolver(formSchema),
defaultValues: {
location: "",
propertyType: propertyTypeOptions[0].value,
purpose: "rent",
priceRange: "any",
beds: "any",
filters: "baths-area",
},
});
function onSubmit(values) {
console.log(values);
}
return (
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="form-container">
<FormField
control={form.control}
name="propertyType"
render={({ field }) => (
<FormItem>
<FormLabel>Property Type</FormLabel>
<FormControl>
<Controller
name="propertyType"
control={form.control}
render={({ field }) => (
<Select
onValueChange={field.onChange}
value={field.value}
>
<SelectTrigger>
<SelectValue placeholder="Select a property type" />
</SelectTrigger>
<SelectContent>
{propertyTypeOptions.map((option) => (
<SelectItem key={option.value} value={option.value}>
{option.label}
</SelectItem>
))}
</SelectContent>
</Select>
)}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<button type="submit">Submit</button>
</form>
</Form>
);
}
```
### Steps Taken
- I tried using the Controller component from react-hook-form to manage the Select components.
- I also tried using defaultValue instead of value for the Select component inside the Controller.
- **Weirdly, when I put the exact same code in an online sandbox (like CodeSandbox)**, it works perfectly. The SelectTrigger shows the correct value, and the form data is submitted correctly.
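One thing worth checking (an untested sketch, based on the shadcn form wrapper's documented pattern, in which `FormField` already renders a `Controller` internally): dropping the inner `Controller` and wiring the `Select` straight to the `field` that `FormField` provides, so the field isn't registered twice:

```jsx
<FormField
  control={form.control}
  name="propertyType"
  render={({ field }) => (
    <FormItem>
      <FormLabel>Property Type</FormLabel>
      <Select onValueChange={field.onChange} value={field.value}>
        <FormControl>
          <SelectTrigger>
            <SelectValue placeholder="Select a property type" />
          </SelectTrigger>
        </FormControl>
        <SelectContent>
          {propertyTypeOptions.map((option) => (
            <SelectItem key={option.value} value={option.value}>
              {option.label}
            </SelectItem>
          ))}
        </SelectContent>
      </Select>
      <FormMessage />
    </FormItem>
  )}
/>
```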
### Affected component/components
Select, Form
### How to reproduce
1 - Set Up A Next.js 14 Project
2 - Install react-hook-form and zod:
`npm install react-hook-form @hookform/resolvers zod`
3 - Create a New Component:
Use the same `PropertyFilter` component code shown in the **My Code** section above (the code is identical).
4 - Test the Component:
Load the component in your app and try submitting the form without interacting with any Select components. Note that the default values are submitted correctly.
Now, interact with the Select component (e.g., propertyType), select an option, and submit the form again. Observe that the value for the selected field is an empty string.
Additionally, note that the placeholder or selected value doesn't display correctly in the SelectTrigger.
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/select-react-hook-form-problem-ygz58v?file=%2Fapp%2Fpage.tsx%3A253%2C54
### Logs
_No response_
### System Info
```bash
Windows 11, Chrome Browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,465,866,607 | next.js | postcss.config.js not work with type module | ### Link to the code that reproduces this issue
https://github.com/alkorlos/nextjs-issue-bug-postcssconfig
### To Reproduce
1. Start the application in development (next dev)
2. Error in console and browser
### Current vs. Expected behavior
I expected `postcss.config.js` to work with `"type": "module"`, but instead I got an error.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Enterprise
Available memory (MB): 16228
Available CPU cores: 4
Binaries:
Node: 20.10.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.115 // Latest available version is detected (15.0.0-canary.115).
eslint-config-next: N/A
react: 19.0.0-rc-187dd6a7-20240806
react-dom: 19.0.0-rc-187dd6a7-20240806
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Instrumentation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
Next.js 14.2.5 also has this issue.
The reproduction is a new project with `"type": "module"` in `package.json` and this `postcss.config.js`:
```js
import postcssPresetEnv from 'postcss-preset-env';
const config = {
plugins: [
postcssPresetEnv({
stage: 2,
features: {
'nesting-rules': false
}
})
]
};
export default config;
```
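A possible workaround (based on the linked `postcss-shape` error page, which says Next.js only accepts PostCSS plugins specified by package name, not as imported plugin instances) is the string/object form of the config:

```js
// postcss.config.js — plugin referenced by name instead of an imported function
const config = {
  plugins: {
    'postcss-preset-env': {
      stage: 2,
      features: {
        'nesting-rules': false,
      },
    },
  },
};
export default config;
```

This still doesn't explain why the imported-instance form works with other tools but not here, which is the point of this report.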
When starting the project, there is an error in the console:
```console
○ Compiling /_not-found ...
Error: An unknown PostCSS plugin was provided ([object Object]).
Read more: https://nextjs.org/docs/messages/postcss-shape
⨯ ./app/layout.module.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[2]!./node_modules/next/dist/build/
webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[3]!./app/layout.module.css
Error: Malformed PostCSS Configuration
at Array.forEach (<anonymous>)
Import trace for requested module:
./app/layout.module.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[2]!./node_modules/next/dist/build/web
pack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[3]!./app/layout.module.css
./app/layout.module.css
⨯ ./app/layout.module.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[2]!./node_modules/next/dist/build/
webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[3]!./app/layout.module.css
Error: Malformed PostCSS Configuration
at Array.forEach (<anonymous>)
Import trace for requested module:
./app/layout.module.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[2]!./node_modules/next/dist/build/web
pack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[3]!./app/layout.module.css
./app/layout.module.css
⨯ ./app/layout.module.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[2]!./node_modules/next/dist/build/
webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[3]!./app/layout.module.css
Error: Malformed PostCSS Configuration
at Array.forEach (<anonymous>)
Import trace for requested module:
./app/layout.module.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[2]!./node_modules/next/dist/build/web
pack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[12].oneOf[5].use[3]!./app/layout.module.css
./app/layout.module.css
GET /_next/static/webpack/8aa696ba82e7cf46.webpack.hot-update.json 500 in 6127ms
⚠ Fast Refresh had to perform a full reload due to a runtime error.
GET / 500 in 1203ms
GET / 500 in 20ms
⨯ ./app/layout.module.css
Error: Malformed PostCSS Configuration
at Array.forEach (<anonymous>)
Import trace for requested module:
./app/layout.module.css
./app/layout.tsx
```
Error in the browser:
```console
Build Error
Failed to compile
./app/layout.module.css
Error: Malformed PostCSS Configuration
at Array.forEach (<anonymous>)
Import trace for requested module:
./app/layout.module.css
./app/layout.tsx
```
This configuration works with other tools.
The error occurs not only with `postcss-preset-env`, but also with other libraries. | bug | low | Critical |
2,465,952,199 | rust | derives: parallel compiler makes builds irreproducible | file:
````rust
//@ check-pass
//@ compile-flags: -Zunpretty=expanded
//@ edition:2021
//
// This test checks the code generated for all[*] the builtin derivable traits
// on a variety of structs and enums. It protects against accidental changes to
// the generated code, and makes deliberate changes to the generated code
// easier to review.
//
// [*] It excludes `Copy` in some cases, because that changes the code
// generated for `Clone`.
//
// [*] It excludes `RustcEncodable` and `RustDecodable`, which are obsolete and
// also require the `rustc_serialize` crate.
#![crate_type = "lib"]
#![allow(dead_code)]
#![allow(deprecated)]
// Empty struct.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
struct Empty;
// A basic struct. Note: because this derives `Copy`, it gets the simple
// `clone` implementation that just does `*self`.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
struct Point {
x: u32,
y: u32,
}
// A basic packed struct. Note: because this derives `Copy`, it gets the simple
// `clone` implementation that just does `*self`.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(packed)]
struct PackedPoint {
x: u32,
y: u32,
}
// A large struct. Note: because this derives `Copy`, it gets the simple
// `clone` implementation that just does `*self`.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
struct Big {
b1: u32, b2: u32, b3: u32, b4: u32, b5: u32, b6: u32, b7: u32, b8: u32,
}
// A struct that doesn't impl `Copy`, which means it gets the non-simple
// `clone` implementation that clones the fields individually.
#[derive(Clone)]
struct NonCopy(u32);
// A packed struct that doesn't impl `Copy`, which means it gets the non-simple
// `clone` implementation that clones the fields individually.
#[derive(Clone)]
#[repr(packed)]
struct PackedNonCopy(u32);
// A struct that impls `Copy` manually, which means it gets the non-simple
// `clone` implementation that clones the fields individually.
#[derive(Clone)]
struct ManualCopy(u32);
impl Copy for ManualCopy {}
// A packed struct that impls `Copy` manually, which means it gets the
// non-simple `clone` implementation that clones the fields individually.
#[derive(Clone)]
#[repr(packed)]
struct PackedManualCopy(u32);
impl Copy for PackedManualCopy {}
// A struct with an unsized field. Some derives are not usable in this case.
#[derive(Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
struct Unsized([u32]);
trait Trait {
type A;
}
// A generic struct involving an associated type.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
struct Generic<T: Trait, U> {
t: T,
ta: T::A,
u: U,
}
// A packed, generic tuple struct involving an associated type. Because it is
// packed, a `T: Copy` bound is added to all impls (and where clauses within
// them) except for `Default`. This is because we must access fields using
// copies (e.g. `&{self.0}`), instead of using direct references (e.g.
// `&self.0`) which may be misaligned in a packed struct.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(packed)]
struct PackedGeneric<T: Trait, U>(T, T::A, U);
// An empty enum.
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum Enum0 {}
// A single-variant enum.
#[derive(Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum Enum1 {
Single { x: u32 }
}
// A C-like, fieldless enum with a single variant.
#[derive(Clone, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum Fieldless1 {
#[default]
A,
}
// A C-like, fieldless enum.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum Fieldless {
#[default]
A,
B,
C,
}
// An enum with multiple fieldless and fielded variants.
#[derive(Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum Mixed {
#[default]
P,
Q,
R(u32),
S { d1: Option<u32>, d2: Option<i32> },
}
// An enum with no fieldless variants. Note that `Default` cannot be derived
// for this enum.
#[derive(Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum Fielded {
X(u32),
Y(bool),
Z(Option<i32>),
}
// A generic enum. Note that `Default` cannot be derived for this enum.
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum EnumGeneric<T, U> {
One(T),
Two(U),
}
// An enum that has a variant which doesn't implement `Copy`.
#[derive(PartialEq)]
enum NonCopyEnum {
// The `dyn NonCopyTrait` implements `PartialEq`, but it doesn't require `Copy`.
// So we cannot generate `PartialEq` with dereference.
NonCopyField(Box<dyn NonCopyTrait>),
}
trait NonCopyTrait {}
impl PartialEq for dyn NonCopyTrait {
fn eq(&self, _other: &Self) -> bool {
true
}
}
// A union. Most builtin traits are not derivable for unions.
#[derive(Clone, Copy)]
pub union Union {
pub b: bool,
pub u: u32,
pub i: i32,
}
````
compiled:
```
rustc file.rs --edition=2021 -Zremap-cwd-prefix=reproducible_dir -Ccodegen-units=1 -Cdebuginfo=1 -Copt-level=3 -o 1 -Zthreads=16
rustc file.rs --edition=2021 -Zremap-cwd-prefix=reproducible_dir -Ccodegen-units=1 -Cdebuginfo=1 -Copt-level=3 -o 2 -Zthreads=16
```
and then diffed: `diff -u <(hexyl 1) <(hexyl 2)`
```
--- /proc/self/fd/11 2024-08-14 16:04:37.970531847 +0200
+++ /proc/self/fd/12 2024-08-14 16:04:37.973865174 +0200
@@ -7,7 +7,7 @@
│00000050│ 20 20 20 20 20 20 20 20 ┊ 20 20 20 20 20 20 20 20 │ ┊ │
│* │ ┊ │ ┊ │
│00000070│ 20 20 20 20 20 20 20 20 ┊ 20 20 20 20 35 34 20 20 │ ┊ 54 │
-│00000080│ 20 20 20 20 20 20 60 0a ┊ 31 2e 64 65 72 69 76 69 │ `_┊1.derivi│
+│00000080│ 20 20 20 20 20 20 60 0a ┊ 32 2e 64 65 72 69 76 69 │ `_┊2.derivi│
│00000090│ 6e 67 5f 61 6c 6c 5f 63 ┊ 6f 64 65 67 65 6e 2e 32 │ng_all_c┊odegen.2│
│000000a0│ 33 62 38 65 30 33 36 33 ┊ 39 61 62 64 30 32 38 2d │3b8e0363┊9abd028-│
│000000b0│ 63 67 75 2e 30 2e 72 63 ┊ 67 75 2e 6f 2f 0a 6c 69 │cgu.0.rc┊gu.o/_li│
@@ -674,26 +674,26 @@
│00002a00│ 01 00 aa 50 17 0e 01 00 ┊ 00 00 00 00 00 00 01 01 │•⋄×P•••⋄┊⋄⋄⋄⋄⋄⋄••│
│00002a10│ 38 10 f3 06 00 00 00 00 ┊ 00 00 02 17 5f 03 00 02 │8•ו⋄⋄⋄⋄┊⋄⋄••_•⋄•│
│00002a20│ 03 00 00 d9 2a 01 00 10 ┊ fa 06 00 00 04 00 02 02 │•⋄⋄×*•⋄•┊ו⋄⋄•⋄••│
-│00002a30│ 01 00 00 28 00 03 0f 26 ┊ 00 00 00 f9 89 01 dc 1e │•⋄⋄(⋄••&┊⋄⋄⋄×וו│
+│00002a30│ 01 00 00 28 00 03 0f 26 ┊ 00 00 00 f9 8a 01 dc 1e │•⋄⋄(⋄••&┊⋄⋄⋄×וו│
│00002a40│ 35 70 02 01 00 01 00 02 ┊ db 95 02 00 30 10 b6 07 │5p••⋄•⋄•┊×ו⋄0•ו│
│00002a50│ 00 00 0f 17 01 00 01 00 ┊ 02 db 95 02 00 30 10 c2 │⋄⋄•••⋄•⋄┊•×ו⋄0•×│
│00002a60│ 07 00 00 00 01 01 d9 2a ┊ 00 0f 59 00 00 00 00 00 │•⋄⋄⋄••×*┊⋄•Y⋄⋄⋄⋄⋄│
│00002a70│ 0f 60 01 00 01 00 00 00 ┊ 00 00 00 00 00 03 03 30 │•`•⋄•⋄⋄⋄┊⋄⋄⋄⋄⋄••0│
│00002a80│ 11 b6 07 00 00 00 04 00 ┊ 02 06 00 01 02 00 01 00 │•ו⋄⋄⋄•⋄┊••⋄••⋄•⋄│
-│00002a90│ ca 4f 29 86 01 e7 cf 01 ┊ 97 01 02 01 04 06 0f 0c │×O)ו×ו┊ו•••••_│
-│00002aa0│ 01 00 06 00 00 00 01 02 ┊ 00 01 00 ca 4f 01 91 86 │•⋄•⋄⋄⋄••┊⋄•⋄×O•××│
+│00002a90│ ca 4f 29 87 01 e7 cf 01 ┊ 97 01 02 01 04 06 0f 0c │×O)ו×ו┊ו•••••_│
+│00002aa0│ 01 00 06 00 00 00 01 02 ┊ 00 01 00 ca 4f 01 91 87 │•⋄•⋄⋄⋄••┊⋄•⋄×O•××│
│00002ab0│ 01 db cf 01 97 01 02 01 ┊ 07 02 4d 88 d6 01 97 01 │•×וו••┊••M×וו│
│00002ac0│ 02 00 02 01 0d 02 e9 55 ┊ 01 01 ed 49 02 00 02 00 │•⋄••_•×U┊••×I•⋄•⋄│
-│00002ad0│ 29 86 01 db cf 01 97 01 ┊ 02 01 06 00 0f 4a 03 00 │)ו×וו┊•••⋄•J•⋄│
-│00002ae0│ 01 01 00 03 61 86 01 e1 ┊ cf 01 97 01 02 00 04 09 │••⋄•aו×┊וו•⋄•_│
-│00002af0│ 86 01 ec cf 01 97 01 02 ┊ 01 05 06 30 11 c2 07 00 │ו×וו•┊•••0•ו⋄│
+│00002ad0│ 29 87 01 db cf 01 97 01 ┊ 02 01 06 00 0f 4a 03 00 │)ו×וו┊•••⋄•J•⋄│
+│00002ae0│ 01 01 00 03 61 87 01 e1 ┊ cf 01 97 01 02 00 04 09 │••⋄•aו×┊וו•⋄•_│
+│00002af0│ 87 01 ec cf 01 97 01 02 ┊ 01 05 06 30 11 c2 07 00 │ו×וו•┊•••0•ו⋄│
│00002b00│ 00 00 05 00 02 06 00 01 ┊ 02 00 01 01 ca 4f 0f 7c │⋄⋄•⋄••⋄•┊•⋄••×O•|│
│00002b10│ 02 04 07 0f 81 02 00 07 ┊ 00 00 00 01 02 00 01 01 │••••ו⋄•┊⋄⋄⋄••⋄••│
│00002b20│ ca 4f 01 0f 75 02 07 02 ┊ 0f 6e 00 02 01 ea 53 02 │×O••u•••┊•n⋄••×S•│
│00002b30│ 01 02 00 0f 63 01 07 00 ┊ 0f a6 00 00 01 02 00 03 │••⋄•c••⋄┊•×⋄⋄••⋄•│
│00002b40│ 0f 5c 00 01 0f 55 02 05 ┊ 07 01 00 11 80 07 00 00 │•\⋄••U••┊••⋄•ו⋄⋄│
│00002b50│ 04 00 02 02 01 00 00 2a ┊ 00 03 20 11 fc 06 00 00 │•⋄•••⋄⋄*┊⋄• •ו⋄⋄│
-│00002b60│ 00 00 f9 86 01 9c cf 01 ┊ 63 97 01 02 01 00 01 00 │⋄⋄×ו×ו┊cו••⋄•⋄│
+│00002b60│ 00 00 f9 87 01 9c cf 01 ┊ 63 97 01 02 01 00 01 00 │⋄⋄×ו×ו┊cו••⋄•⋄│
│00002b70│ 02 e0 ed 02 01 01 ed 49 ┊ 0f f9 00 0f 19 01 00 01 │•×ו••×I┊•×⋄•••⋄•│
│00002b80│ 00 02 e0 ed 02 01 01 ed ┊ 49 0f 8e 00 00 08 01 9e │⋄•×ו••×┊I•×⋄⋄••×│
│00002b90│ 47 00 0f 38 00 00 8c 4f ┊ 00 0f 3f 00 00 eb 49 00 │G⋄•8⋄⋄×O┊⋄•?⋄⋄×I⋄│
@@ -701,8 +701,8 @@
│00002bb0│ 30 01 00 00 d0 51 00 0f ┊ bc 00 01 ca 4f 00 17 2c │0•⋄⋄×Q⋄•┊×⋄•×O⋄•,│
│00002bc0│ 01 01 01 ca 4f 00 17 34 ┊ 01 02 00 02 00 06 02 1b │•••×O⋄•4┊••⋄•⋄•••│
│00002bd0│ 0f 76 00 00 00 01 00 01 ┊ 01 00 02 e2 0d 0f 83 00 │•v⋄⋄⋄•⋄•┊•⋄•×_•×⋄│
-│00002be0│ 00 00 02 00 01 02 00 02 ┊ 1b 29 86 01 af cf 01 97 │⋄⋄•⋄••⋄•┊•)ו×ו×│
-│00002bf0│ 01 02 01 00 00 04 00 01 ┊ 01 00 02 e2 0d 29 86 01 │•••⋄⋄•⋄•┊•⋄•×_)ו│
+│00002be0│ 00 00 02 00 01 02 00 02 ┊ 1b 29 87 01 af cf 01 97 │⋄⋄•⋄••⋄•┊•)ו×ו×│
+│00002bf0│ 01 02 01 00 00 04 00 01 ┊ 01 00 02 e2 0d 29 87 01 │•••⋄⋄•⋄•┊•⋄•×_)ו│
│00002c00│ b6 cf 01 97 01 02 01 00 ┊ 00 02 00 01 02 00 02 1b │×וו••⋄┊⋄•⋄••⋄••│
│00002c10│ 0f 27 02 00 00 05 00 01 ┊ 01 00 02 e2 0d 0f 20 02 │•'•⋄⋄•⋄•┊•⋄•×_• •│
│00002c20│ 00 00 02 00 01 02 00 0f ┊ cd 01 00 01 02 00 ea 53 │⋄⋄•⋄••⋄•┊ו⋄••⋄×S│
@@ -735,12 +735,12 @@
│00002dd0│ 00 0f 2e 01 00 01 00 00 ┊ 00 00 00 00 00 00 04 0b │⋄•.•⋄•⋄⋄┊⋄⋄⋄⋄⋄⋄••│
│00002de0│ 30 14 b6 07 00 00 00 04 ┊ 00 02 06 00 01 02 00 01 │0•ו⋄⋄⋄•┊⋄••⋄••⋄•│
│00002df0│ 00 ca 4f 0f 13 00 00 05 ┊ 00 02 06 00 02 02 00 01 │⋄×O••⋄⋄•┊⋄••⋄••⋄•│
-│00002e00│ 00 ca 4f 29 85 01 f6 88 ┊ 03 66 02 02 04 0a 0f 0b │⋄×O)ו××┊•f•••_••│
-│00002e10│ 02 00 0a 00 00 00 01 02 ┊ 00 01 00 ca 4f 31 85 01 │•⋄_⋄⋄⋄••┊⋄•⋄×O1ו│
+│00002e00│ 00 ca 4f 29 86 01 f6 88 ┊ 03 66 02 02 04 0a 0f 0b │⋄×O)ו××┊•f•••_••│
+│00002e10│ 02 00 0a 00 00 00 01 02 ┊ 00 01 00 ca 4f 31 86 01 │•⋄_⋄⋄⋄••┊⋄•⋄×O1ו│
│00002e20│ fd 88 03 66 02 02 04 0b ┊ 0f 0b 02 00 0b 00 00 00 │×וf••••┊•••⋄•⋄⋄⋄│
-│00002e30│ 02 02 00 01 00 ca 4f f9 ┊ 85 01 d1 88 03 33 66 02 │••⋄•⋄×O×┊ו×ו3f•│
-│00002e40│ 02 00 09 00 07 18 01 0a ┊ 00 01 0b 00 09 85 01 83 │•⋄_⋄•••_┊⋄••⋄_ו×│
-│00002e50│ 89 03 66 02 02 05 0b 0f ┊ 0b 02 05 0a f9 85 01 cc │וf•••••┊•••_×ו×│
+│00002e30│ 02 02 00 01 00 ca 4f f9 ┊ 86 01 d1 88 03 33 66 02 │••⋄•⋄×O×┊ו×ו3f•│
+│00002e40│ 02 00 09 00 07 18 01 0a ┊ 00 01 0b 00 09 86 01 83 │•⋄_⋄•••_┊⋄••⋄_ו×│
+│00002e50│ 89 03 66 02 02 05 0b 0f ┊ 0b 02 05 0a f9 86 01 cc │וf•••••┊•••_×ו×│
│00002e60│ 88 03 39 66 02 02 00 03 ┊ 00 0b 02 02 d3 e2 02 01 │ו9f••⋄•┊⋄•••×ו•│
│00002e70│ 01 01 ac 4c 00 00 01 00 ┊ 09 00 0f 9a 00 00 06 00 │••×L⋄⋄•⋄┊_⋄•×⋄⋄•⋄│
│00002e80│ 0a 09 00 01 0f a4 00 01 ┊ 01 06 00 01 00 02 02 01 │__⋄••×⋄•┊••⋄•⋄•••│
@@ -753,10 +753,10 @@
│00002ef0│ 02 02 00 01 01 ca 4f 0f ┊ c0 03 00 0c 00 07 18 01 │••⋄••×O•┊ו⋄_⋄•••│
│00002f00│ 0d 00 01 0e 00 0f b9 03 ┊ 05 0e 0f be 03 05 0d 0f │_⋄••⋄•ו┊•••ו•_•│
│00002f10│ b3 03 00 00 00 0b 02 02 ┊ d3 e2 02 01 01 01 ac 4c │ו⋄⋄⋄•••┊×ו•••×L│
-│00002f20│ 00 00 01 01 0c 00 09 85 ┊ 01 84 89 03 66 02 03 05 │⋄⋄••_⋄_×┊•×וf•••│
+│00002f20│ 00 00 01 01 0c 00 09 86 ┊ 01 84 89 03 66 02 03 05 │⋄⋄••_⋄_×┊•×וf•••│
│00002f30│ 0c 01 0f 95 00 00 03 00 ┊ 00 01 00 14 9b 07 00 00 │_••×⋄⋄•⋄┊⋄•⋄•ו⋄⋄│
│00002f40│ 04 00 02 02 01 00 00 32 ┊ 00 04 50 14 91 07 00 00 │•⋄•••⋄⋄2┊⋄•P•ו⋄⋄│
-│00002f50│ 00 00 17 72 01 01 00 00 ┊ 00 f9 85 01 80 88 03 97 │⋄⋄•r••⋄⋄┊⋄×ו×ו×│
+│00002f50│ 00 00 17 72 01 01 00 00 ┊ 00 f9 86 01 80 88 03 97 │⋄⋄•r••⋄⋄┊⋄×ו×ו×│
│00002f60│ 01 66 02 01 00 01 00 02 ┊ ef 8d 02 00 17 8c 01 00 │•f••⋄•⋄•┊×ו⋄•ו⋄│
│00002f70│ 0f 17 01 00 01 00 02 ef ┊ 8d 02 00 0f d4 00 00 0f │•••⋄•⋄•×┊ו⋄•×⋄⋄•│
│00002f80│ 01 e9 4b 00 0f 3a 00 00 ┊ 8c 4f 00 0f 41 00 00 8c │•×K⋄•:⋄⋄┊×O⋄•A⋄⋄×│
@@ -769,18 +769,18 @@
│00002ff0│ 00 17 d4 01 03 00 02 00 ┊ 07 02 1b 0f b1 00 00 00 │⋄•ו•⋄•⋄┊••••×⋄⋄⋄│
│00003000│ 01 00 01 01 00 02 87 0a ┊ 0f be 00 00 00 02 00 01 │•⋄••⋄•×_┊•×⋄⋄⋄•⋄•│
│00003010│ 02 00 02 f9 03 17 35 02 ┊ 01 00 00 03 00 00 02 1b │•⋄•ו•5•┊•⋄⋄•⋄⋄••│
-│00003020│ 29 85 01 8f 88 03 66 02 ┊ 02 00 00 04 00 01 01 00 │)ו×וf•┊•⋄⋄•⋄••⋄│
-│00003030│ 02 87 0a 29 85 01 96 88 ┊ 03 66 02 02 00 00 05 00 │•×_)ו××┊•f••⋄⋄•⋄│
+│00003020│ 29 86 01 8f 88 03 66 02 ┊ 02 00 00 04 00 01 01 00 │)ו×וf•┊•⋄⋄•⋄••⋄│
+│00003030│ 02 87 0a 29 86 01 96 88 ┊ 03 66 02 02 00 00 05 00 │•×_)ו××┊•f••⋄⋄•⋄│
│00003040│ 01 02 00 02 1b 0f 25 03 ┊ 00 00 07 00 01 01 00 02 │••⋄•••%•┊⋄⋄•⋄••⋄•│
│00003050│ 87 0a 0f 1f 03 00 00 08 ┊ 00 01 02 00 17 12 01 01 │×_•••⋄⋄•┊⋄••⋄••••│
│00003060│ 00 01 00 00 00 00 00 00 ┊ 00 00 04 0a 30 15 b6 07 │⋄•⋄⋄⋄⋄⋄⋄┊⋄⋄•_0•ו│
│00003070│ 00 00 00 04 00 02 06 00 ┊ 01 02 00 01 00 ca 4f 0f │⋄⋄⋄•⋄••⋄┊••⋄•⋄×O•│
│00003080│ 13 00 00 05 00 02 06 00 ┊ 02 02 00 01 00 ca 4f 29 │•⋄⋄•⋄••⋄┊••⋄•⋄×O)│
-│00003090│ 85 01 b4 8e 03 66 02 02 ┊ 04 09 0f 0b 02 00 09 00 │ו×וf••┊•_•••⋄_⋄│
-│000030a0│ 00 00 01 02 00 01 00 ca ┊ 4f 31 85 01 bb 8e 03 66 │⋄⋄••⋄•⋄×┊O1ו×וf│
+│00003090│ 86 01 b4 8e 03 66 02 02 ┊ 04 09 0f 0b 02 00 09 00 │ו×וf••┊•_•••⋄_⋄│
+│000030a0│ 00 00 01 02 00 01 00 ca ┊ 4f 31 86 01 bb 8e 03 66 │⋄⋄••⋄•⋄×┊O1ו×וf│
│000030b0│ 02 02 04 0a 0f 0b 02 00 ┊ 0a 00 00 00 02 02 00 01 │•••_•••⋄┊_⋄⋄⋄••⋄•│
-│000030c0│ 00 ca 4f f9 85 01 8f 8e ┊ 03 33 66 02 02 00 03 00 │⋄×O×ו××┊•3f••⋄•⋄│
-│000030d0│ 07 18 01 09 00 01 0a 00 ┊ 09 85 01 c1 8e 03 66 02 │•••_⋄•_⋄┊_ו×וf•│
+│000030c0│ 00 ca 4f f9 86 01 8f 8e ┊ 03 33 66 02 02 00 03 00 │⋄×O×ו××┊•3f••⋄•⋄│
+│000030d0│ 07 18 01 09 00 01 0a 00 ┊ 09 86 01 c1 8e 03 66 02 │•••_⋄•_⋄┊_ו×וf•│
│000030e0│ 02 05 0a 0f 0b 02 05 09 ┊ 0f 7c 00 00 06 00 0a 03 │••_••••_┊•|⋄⋄•⋄_•│
│000030f0│ 00 01 0f 86 00 01 01 06 ┊ 00 01 00 02 02 01 00 01 │⋄••×⋄•••┊⋄•⋄•••⋄•│
│00003100│ 0f 94 01 00 00 00 00 00 ┊ 03 00 01 08 15 bb 07 00 │•ו⋄⋄⋄⋄⋄┊•⋄•••ו⋄│
@@ -793,7 +793,7 @@
│00003170│ 03 05 0c 0f 9b 03 05 0b ┊ 01 0f 6e 00 00 03 00 00 │••_•ו••┊••n⋄⋄•⋄⋄│
│00003180│ 01 00 15 a0 07 00 00 04 ┊ 00 02 02 01 00 00 34 00 │•⋄•ו⋄⋄•┊⋄•••⋄⋄4⋄│
│00003190│ 04 18 15 9d 07 00 00 00 ┊ 00 17 2d 01 01 00 00 00 │•••ו⋄⋄⋄┊⋄•-••⋄⋄⋄│
-│000031a0│ f9 85 01 d3 8d 03 81 01 ┊ 66 02 01 00 01 00 02 f5 │×ו×וו┊f••⋄•⋄•×│
+│000031a0│ f9 86 01 d3 8d 03 81 01 ┊ 66 02 01 00 01 00 02 f5 │×ו×וו┊f••⋄•⋄•×│
│000031b0│ 8d 02 00 17 47 01 00 0f ┊ 17 01 00 01 00 02 f5 8d │ו⋄•G•⋄•┊••⋄•⋄•××│
│000031c0│ 02 00 0f ad 00 00 0d 01 ┊ ac 4c 00 0f 3a 00 00 8c │•⋄•×⋄⋄_•┊×L⋄•:⋄⋄×│
│000031d0│ 4f 00 0f 41 00 00 8c 4f ┊ 00 0f 48 00 01 ac 4c 00 │O⋄•A⋄⋄×O┊⋄•H⋄•×L⋄│
@@ -804,8 +804,8 @@
│00003220│ 01 03 01 ca 4f 00 17 7d ┊ 01 03 00 02 00 07 02 1b │•••×O⋄•}┊••⋄•⋄•••│
│00003230│ 0f 9f 00 00 00 01 00 01 ┊ 01 00 02 87 0a 0f ac 00 │•×⋄⋄⋄•⋄•┊•⋄•×_•×⋄│
│00003240│ 00 00 02 00 01 02 00 02 ┊ f9 03 17 de 01 01 00 00 │⋄⋄•⋄••⋄•┊ו•ו•⋄⋄│
-│00003250│ 03 00 00 02 1b 29 85 01 ┊ da 8d 03 66 02 02 00 00 │•⋄⋄••)ו┊×וf••⋄⋄│
-│00003260│ 04 00 01 01 00 02 87 0a ┊ 29 85 01 e1 8d 03 66 02 │•⋄••⋄•×_┊)ו×וf•│
+│00003250│ 03 00 00 02 1b 29 86 01 ┊ da 8d 03 66 02 02 00 00 │•⋄⋄••)ו┊×וf••⋄⋄│
+│00003260│ 04 00 01 01 00 02 87 0a ┊ 29 86 01 e1 8d 03 66 02 │•⋄••⋄•×_┊)ו×וf•│
│00003270│ 02 00 00 05 00 01 02 00 ┊ 02 1b 0f 25 03 00 00 07 │•⋄⋄•⋄••⋄┊•••%•⋄⋄•│
│00003280│ 00 01 01 00 02 87 0a 0f ┊ 1f 03 00 00 08 00 01 02 │⋄••⋄•×_•┊••⋄⋄•⋄••│
│00003290│ 00 17 00 01 01 00 01 00 ┊ 00 00 00 00 00 00 00 01 │⋄•⋄••⋄•⋄┊⋄⋄⋄⋄⋄⋄⋄•│
@@ -2218,18 +2218,18 @@
│00008a80│ 01 00 ca 4f 0f 19 00 00 ┊ 04 00 02 06 00 02 03 00 │•⋄×O••⋄⋄┊•⋄••⋄••⋄│
│00008a90│ 05 01 01 ec 14 00 01 00 ┊ ca 4f 0f 2f 01 04 05 0f │•••ו⋄•⋄┊×O•/••••│
│00008aa0│ 34 01 00 05 00 02 06 00 ┊ 03 00 0f 3f 01 04 06 0f │4•⋄•⋄••⋄┊•⋄•?••••│
-│00008ab0│ 44 01 00 06 00 02 06 00 ┊ 04 00 39 8b 01 e7 f4 02 │D•⋄•⋄••⋄┊•⋄9ו×ו│
+│00008ab0│ 44 01 00 06 00 02 06 00 ┊ 04 00 39 89 01 e7 f4 02 │D•⋄•⋄••⋄┊•⋄9ו×ו│
│00008ac0│ 66 02 03 04 07 0f 0b 03 ┊ 00 07 00 00 00 01 03 00 │f•••••••┊⋄•⋄⋄⋄••⋄│
-│00008ad0│ 05 01 01 ec 14 00 01 00 ┊ ca 4f 41 8b 01 f2 f4 02 │•••ו⋄•⋄┊×OAו×ו│
+│00008ad0│ 05 01 01 ec 14 00 01 00 ┊ ca 4f 41 89 01 f2 f4 02 │•••ו⋄•⋄┊×OAו×ו│
│00008ae0│ 66 02 03 04 08 0f 0b 03 ┊ 00 08 00 00 00 02 03 00 │f•••••••┊⋄•⋄⋄⋄••⋄│
-│00008af0│ 05 01 01 ec 14 00 01 00 ┊ ca 4f 99 8b 01 e7 f4 02 │•••ו⋄•⋄┊×O×ו×ו│
-│00008b00│ 66 02 03 00 00 00 07 12 ┊ 01 07 00 01 08 00 09 8b │f••⋄⋄⋄••┊••⋄••⋄_×│
+│00008af0│ 05 01 01 ec 14 00 01 00 ┊ ca 4f 99 89 01 e7 f4 02 │•••ו⋄•⋄┊×O×ו×ו│
+│00008b00│ 66 02 03 00 00 00 07 12 ┊ 01 07 00 01 08 00 09 89 │f••⋄⋄⋄••┊••⋄••⋄_×│
│00008b10│ 01 f9 f4 02 66 02 03 05 ┊ 08 0f 0b 03 05 07 08 4f │•×וf•••┊•••••••O│
│00008b20│ fd 1a 00 01 05 06 0f 08 ┊ 01 05 05 01 00 4f c6 1a │ו⋄•••••┊••••⋄Oו│
│00008b30│ 00 00 04 00 02 02 01 00 ┊ 00 85 02 00 04 48 4f bd │⋄⋄•⋄•••⋄┊⋄ו⋄•HO×│
│00008b40│ 1a 00 00 00 00 0f 08 01 ┊ 00 00 00 fd 96 9f 03 55 │•⋄⋄⋄⋄•••┊⋄⋄⋄××וU│
│00008b50│ 66 02 01 01 01 00 02 ec ┊ 17 04 00 06 00 06 01 ca │f••••⋄•×┊••⋄•⋄••×│
-│00008b60│ 4f 01 ca 4f 0f f9 00 f9 ┊ 8b 01 c4 f4 02 38 66 02 │O•×O•×⋄×┊ו×ו8f•│
+│00008b60│ 4f 01 ca 4f 0f f9 00 f9 ┊ 89 01 c4 f4 02 38 66 02 │O•×O•×⋄×┊ו×ו8f•│
│00008b70│ 01 02 01 00 02 81 8d 02 ┊ 00 e5 c5 9f 03 66 02 01 │•••⋄•×ו┊⋄××וf••│
│00008b80│ 02 00 09 01 00 00 0f 49 ┊ 00 00 fe 8d 02 00 0f 51 │•⋄_•⋄⋄•I┊⋄⋄×ו⋄•Q│
│00008b90│ 00 00 fe 8d 02 00 0f 59 ┊ 00 00 d0 51 00 17 32 01 │⋄⋄×ו⋄•Y┊⋄⋄×Q⋄•2•│
@@ -2241,8 +2241,8 @@
│00008bf0│ 00 00 00 08 5f 5f 61 72 ┊ 67 31 5f 30 c1 17 92 01 │⋄⋄⋄•__ar┊g1_0וו│
│00008c00│ 01 00 00 04 00 00 02 1b ┊ 2d 9c 9f 03 66 02 02 00 │•⋄⋄•⋄⋄••┊-×וf••⋄│
│00008c10│ 00 05 00 01 01 00 02 87 ┊ 0a 2d a3 9f 03 66 02 02 │⋄•⋄••⋄•×┊_-×וf••│
-│00008c20│ 00 00 06 00 01 02 00 02 ┊ 1b 29 8b 01 ca f4 02 66 │⋄⋄•⋄••⋄•┊•)ו×וf│
-│00008c30│ 02 03 00 00 03 00 01 01 ┊ 00 02 87 0a 29 8b 01 d1 │••⋄⋄•⋄••┊⋄•×_)ו×│
+│00008c20│ 00 00 06 00 01 02 00 02 ┊ 1b 29 89 01 ca f4 02 66 │⋄⋄•⋄••⋄•┊•)ו×וf│
+│00008c30│ 02 03 00 00 03 00 01 01 ┊ 00 02 87 0a 29 89 01 d1 │••⋄⋄•⋄••┊⋄•×_)ו×│
│00008c40│ f4 02 66 02 03 00 00 04 ┊ 00 01 02 00 17 0f 01 01 │וf••⋄⋄•┊⋄••⋄••••│
│00008c50│ 00 01 00 00 00 00 00 00 ┊ 00 00 01 00 01 00 50 ca │⋄•⋄⋄⋄⋄⋄⋄┊⋄⋄•⋄•⋄P×│
│00008c60│ 1a 00 00 04 00 02 02 01 ┊ 00 00 87 02 00 02 10 50 │•⋄⋄•⋄•••┊⋄⋄ו⋄••P│
@@ -2518,7 +2518,7 @@
│00009d40│ 03 00 05 01 01 c1 18 03 ┊ 01 00 d6 b0 02 78 68 a0 │•⋄•••ו•┊•⋄×וxh×│
│00009d50│ 1f 00 01 00 08 00 02 06 ┊ 00 01 03 00 05 01 01 c1 │•⋄•⋄•⋄••┊⋄••⋄•••×│
│00009d60│ 18 03 01 01 b7 b1 02 0f ┊ 34 03 04 0c 0f 39 03 04 │••••×ו•┊4••_•9••│
-│00009d70│ 0d 21 83 01 83 ae 01 78 ┊ 02 05 00 0a 00 0a 01 03 │_!ו×וx┊••⋄_⋄_••│
+│00009d70│ 0d 21 82 01 83 ae 01 78 ┊ 02 05 00 0a 00 0a 01 03 │_!ו×וx┊••⋄_⋄_••│
│00009d80│ 00 05 01 01 c1 18 03 01 ┊ 00 d6 b0 02 0f 1b 06 00 │⋄•••ו••┊⋄×ו•••⋄│
│00009d90│ 0c 00 02 06 00 0a 00 01 ┊ 13 54 29 08 07 02 17 3a │_⋄••⋄_⋄•┊•T)••••:│
│00009da0│ 09 00 02 01 94 a7 02 02 ┊ 00 02 00 13 76 29 00 0a │_⋄••×ו•┊⋄•⋄•v)⋄_│
@@ -2530,7 +2530,7 @@
│00009e00│ 01 ea 53 02 01 02 00 13 ┊ 76 29 01 0f 00 13 38 29 │•×S•••⋄•┊v)••⋄•8)│
│00009e10│ 00 00 01 0a 00 03 13 8a ┊ 29 00 00 01 00 68 ae 1e │⋄⋄•_⋄••×┊)⋄⋄•⋄hו│
│00009e20│ 00 00 04 00 00 01 17 33 ┊ 01 01 01 00 03 00 02 02 │⋄⋄•⋄⋄••3┊•••⋄•⋄••│
-│00009e30│ 03 03 02 01 03 00 03 09 ┊ 83 01 e5 b1 01 78 02 06 │•••••⋄•_┊ו×וx••│
+│00009e30│ 03 03 02 01 03 00 03 09 ┊ 82 01 e5 b1 01 78 02 06 │•••••⋄•_┊ו×וx••│
│00009e40│ 00 0d 00 02 06 00 01 05 ┊ 00 05 01 01 c1 18 03 01 │⋄_⋄••⋄••┊⋄•••ו••│
│00009e50│ 00 d6 b0 02 05 01 02 80 ┊ 02 01 01 00 ca 4f 13 38 │⋄×ו•••×┊•••⋄×O•8│
│00009e60│ 29 09 04 0e 13 38 29 09 ┊ 00 0e 00 00 00 01 05 00 │)_•••8)_┊⋄•⋄⋄⋄••⋄│
@@ -2546,7 +2546,7 @@
│00009f00│ 13 38 29 11 00 01 0d 00 ┊ 03 13 8a 29 00 00 01 17 │•8)•⋄•_⋄┊••×)⋄⋄••│
│00009f10│ 9e 01 06 01 00 0a 00 02 ┊ 01 00 03 05 06 08 00 00 │ו••⋄_⋄•┊•⋄••••⋄⋄│
│00009f20│ 01 17 b0 01 05 05 00 01 ┊ 13 95 29 09 05 0e 01 01 │••ו••⋄•┊•×)_••••│
-│00009f30│ 86 01 ff cf 01 97 01 02 ┊ 00 00 06 00 01 13 95 29 │ו×וו•┊⋄⋄•⋄••×)│
+│00009f30│ 87 01 ff cf 01 97 01 02 ┊ 00 00 06 00 01 13 95 29 │ו×וו•┊⋄⋄•⋄••×)│
│00009f40│ 0a 05 0f 01 08 68 ad 1e ┊ 00 01 00 03 00 03 17 17 │_••••hו┊⋄•⋄•⋄•••│
│00009f50│ 01 0c 00 13 00 02 06 00 ┊ 01 05 00 05 01 01 c1 18 │•_⋄•⋄••⋄┊••⋄•••ו│
│00009f60│ 03 01 01 b7 b1 02 05 01 ┊ 02 80 02 01 01 00 02 03 │•••×ו••┊•ו••⋄••│
@@ -3110,7 +3110,7 @@
│0000c240│ 03 01 00 b7 b1 02 01 0f ┊ 18 04 00 06 00 05 0d ea │••⋄×ו••┊••⋄•⋄•_×│
│0000c250│ 94 04 78 02 04 00 0a 00 ┊ 02 06 00 01 05 00 05 01 │וx••⋄_⋄┊••⋄••⋄••│
│0000c260│ 01 95 1a 02 01 00 b7 b1 ┊ 02 05 01 02 80 02 01 01 │•ו••⋄××┊••••ו••│
-│0000c270│ 00 02 03 4d f5 94 04 78 ┊ 02 05 04 0b 29 87 01 8c │⋄••M×וx┊••••)ו×│
+│0000c270│ 00 02 03 4d f5 94 04 78 ┊ 02 05 04 0b 29 85 01 8c │⋄••M×וx┊••••)ו×│
│0000c280│ 75 9b 01 02 06 00 0b 00 ┊ 00 00 01 05 00 05 01 01 │uו••⋄•⋄┊⋄⋄••⋄•••│
│0000c290│ 95 1a 02 01 00 b7 b1 02 ┊ 05 01 02 80 02 01 01 00 │ו••⋄×ו┊•••ו••⋄│
│0000c2a0│ 02 03 7d f0 94 04 78 02 ┊ 05 00 08 00 0b 02 02 d3 │••}×וx•┊•⋄•⋄•••×│
@@ -3120,7 +3120,7 @@
│0000c2e0│ 17 22 02 01 00 00 00 17 ┊ 29 02 01 00 00 00 17 30 │•"••⋄⋄⋄•┊)••⋄⋄⋄•0│
│0000c2f0│ 02 01 00 00 00 fd aa 94 ┊ 04 80 01 78 02 01 03 01 │••⋄⋄⋄×××┊•וx••••│
│0000c300│ 00 02 a2 49 01 01 02 03 ┊ 17 23 02 00 d5 e5 94 04 │⋄•×I••••┊•#•⋄××ו│
-│0000c310│ 78 02 01 04 00 01 04 f9 ┊ 87 01 da 74 4d 9b 01 02 │x•••⋄••×┊ו×tMו•│
+│0000c310│ 78 02 01 04 00 01 04 f9 ┊ 85 01 da 74 4d 9b 01 02 │x•••⋄••×┊ו×tMו•│
│0000c320│ 01 05 01 00 02 c5 8b 02 ┊ 00 3d f7 94 04 78 02 01 │•••⋄•×ו┊⋄=×וx••│
│0000c330│ 04 00 0c 01 87 3a 00 17 ┊ 79 02 00 00 0c 06 87 3a │•⋄_•×:⋄•┊y•⋄⋄_•×:│
│0000c340│ 00 00 17 84 02 00 01 02 ┊ 00 00 17 8c 02 00 00 d0 │⋄⋄•ו⋄••┊⋄⋄•ו⋄⋄×│
@@ -3134,7 +3134,7 @@
│0000c3c0│ 83 02 02 00 00 05 00 00 ┊ 01 a6 8d 02 17 e7 02 03 │ו•⋄⋄•⋄⋄┊•×ו•ו•│
│0000c3d0│ 00 00 07 00 00 02 1b 2d ┊ b3 94 04 78 02 04 00 00 │⋄⋄•⋄⋄••-┊×וx••⋄⋄│
│0000c3e0│ 07 00 01 01 00 01 93 09 ┊ 17 9a 01 05 00 00 0a 00 │•⋄••⋄•×_┊•ו•⋄⋄_⋄│
-│0000c3f0│ 00 02 1b 29 87 01 e3 74 ┊ 9b 01 02 06 00 00 0a 00 │⋄••)ו×t┊ו••⋄⋄_⋄│
+│0000c3f0│ 00 02 1b 29 85 01 e3 74 ┊ 9b 01 02 06 00 00 0a 00 │⋄••)ו×t┊ו••⋄⋄_⋄│
│0000c400│ 01 01 00 17 45 03 01 00 ┊ 01 02 00 0d 02 b9 16 01 │••⋄•E••⋄┊••⋄_•ו•│
│0000c410│ 01 00 17 d6 02 00 ab be ┊ 01 17 7f 02 00 00 00 00 │•⋄•ו⋄××┊••••⋄⋄⋄⋄│
│0000c420│ 00 00 00 09 01 28 6e a5 ┊ 20 00 00 00 03 00 0a 01 │⋄⋄⋄_•(n×┊ ⋄⋄⋄•⋄_•│
@@ -3487,7 +3487,7 @@
│0000d9d0│ 55 f7 85 03 66 02 01 0a ┊ fd d2 21 e2 02 87 01 02 │U×וf••_┊××!וו•│
│0000d9e0│ 01 0b 01 00 02 8a 0a 00 ┊ bd 87 95 03 66 02 01 0b │•••⋄•×_⋄┊××וf•••│
│0000d9f0│ fd d6 29 8b 03 84 01 02 ┊ 01 0c 01 00 02 95 16 00 │××)וו•┊•_•⋄•ו⋄│
-│0000da00│ 39 8a 01 87 24 84 01 02 ┊ 01 0c fd 9a 2a 56 84 01 │9ו×$ו•┊•_××*Vו│
+│0000da00│ 39 8b 01 87 24 84 01 02 ┊ 01 0c fd 9a 2a 56 84 01 │9ו×$ו•┊•_××*Vו│
│0000da10│ 02 01 0d 01 00 02 96 16 ┊ 00 fd b7 2c 28 84 01 02 │••_•⋄•ו┊⋄××,(ו•│
│0000da20│ 01 0d 13 ff 2d 01 03 01 ┊ 00 02 ef 8d 02 00 17 e9 │•_•×-•••┊⋄•×ו⋄•×│
│0000da30│ 03 00 00 1f 01 e9 4b 00 ┊ 17 f8 05 00 00 e2 84 03 │•⋄⋄••×K⋄┊•ו⋄⋄×ו│
@@ -3528,7 +3528,7 @@
│0000dc60│ 0b 00 00 08 00 01 02 00 ┊ 02 1b 13 c6 2e 0f 00 00 │•⋄⋄•⋄••⋄┊•••×.•⋄⋄│
│0000dc70│ 05 00 01 01 00 02 87 0a ┊ 13 d9 2e 0f 00 00 06 00 │•⋄••⋄•×_┊•×.•⋄⋄•⋄│
│0000dc80│ 01 02 00 17 43 08 01 00 ┊ 01 01 00 0d 02 bb 88 02 │••⋄•C••⋄┊••⋄_•×ו│
-│0000dc90│ 00 e9 8a 01 a1 24 84 01 ┊ 02 00 00 00 00 00 00 00 │⋄×ו×$ו┊•⋄⋄⋄⋄⋄⋄⋄│
+│0000dc90│ 00 e9 8b 01 a1 24 84 01 ┊ 02 00 00 00 00 00 00 00 │⋄×ו×$ו┊•⋄⋄⋄⋄⋄⋄⋄│
│0000dca0│ 13 06 18 73 cd 20 00 00 ┊ 00 03 00 0a 01 01 00 0f │•••s× ⋄⋄┊⋄•⋄_••⋄•│
│0000dcb0│ 0d 01 00 04 00 0a 02 01 ┊ 00 0f 17 02 00 06 00 02 │_•⋄•⋄_••┊⋄•••⋄•⋄•│
│0000dcc0│ 06 00 03 00 0f 22 02 00 ┊ 07 00 02 06 00 04 00 13 │•⋄•⋄•"•⋄┊•⋄••⋄•⋄•│
@@ -8437,46 +8437,46 @@
│00021ec0│ 3f 02 00 2d 2d 02 ca 04 ┊ 00 78 02 00 64 64 02 ca │?•⋄--•ו┊⋄x•⋄dd•×│
│00021ed0│ 04 00 33 02 00 21 21 02 ┊ ca 04 00 6a 02 00 58 58 │•⋄3•⋄!!•┊ו⋄j•⋄XX│
│00021ee0│ 02 ca 04 00 27 02 00 15 ┊ 15 02 ca 04 00 5e 02 00 │•ו⋄'•⋄•┊••ו⋄^•⋄│
-│00021ef0│ 4c 4c 02 ca 04 00 1b 02 ┊ 00 09 09 02 ca 04 02 b3 │LL•ו⋄••┊⋄__•ו•×│
-│00021f00│ a3 01 02 00 83 01 83 01 ┊ 02 02 00 52 02 00 40 40 │ו•⋄וו┊••⋄R•⋄@@│
-│00021f10│ 02 ca 04 00 8c 01 02 00 ┊ 77 77 02 ca 04 00 46 02 │•ו⋄ו•⋄┊ww•ו⋄F•│
-│00021f20│ 00 34 34 02 ca 04 00 7f ┊ 02 00 6b 6b 02 ca 04 00 │⋄44•ו⋄•┊•⋄kk•ו⋄│
-│00021f30│ 3a 02 00 28 28 02 ca 04 ┊ 00 72 02 00 5f 5f 02 ca │:•⋄((•ו┊⋄r•⋄__•×│
-│00021f40│ 04 00 2e 02 00 1c 1c 02 ┊ ca 04 00 65 02 00 53 53 │•⋄.•⋄•••┊ו⋄e•⋄SS│
-│00021f50│ 02 ca 04 00 22 02 00 10 ┊ 10 02 ca 04 02 e9 01 01 │•ו⋄"•⋄•┊••ו•ו•│
-│00021f60│ 00 00 8a 01 02 02 00 59 ┊ 02 00 47 47 02 ca 04 00 │⋄⋄ו••⋄Y┊•⋄GG•ו⋄│
-│00021f70│ 16 02 00 04 04 02 ca 04 ┊ 00 93 01 02 00 7e 7e 02 │••⋄•••ו┊⋄ו•⋄~~•│
-│00021f80│ ca 04 00 4d 02 00 3b 3b ┊ 02 ca 04 00 87 01 02 00 │ו⋄M•⋄;;┊•ו⋄ו•⋄│
-│00021f90│ 72 72 02 ca 04 00 41 02 ┊ 00 2f 2f 02 ca 04 00 7a │rr•ו⋄A•┊⋄//•ו⋄z│
-│00021fa0│ 02 00 66 66 02 ca 04 00 ┊ 35 02 00 23 23 02 ca 04 │•⋄ff•ו⋄┊5•⋄##•ו│
-│00021fb0│ 00 6c 02 00 5a 5a 02 ca ┊ 04 00 29 02 00 17 17 02 │⋄l•⋄ZZ•×┊•⋄)•⋄•••│
-│00021fc0│ ca 04 00 60 02 00 4e 4e ┊ 02 ca 04 00 1d 02 00 0b │ו⋄`•⋄NN┊•ו⋄••⋄•│
-│00021fd0│ 0b 02 ca 04 02 da 02 01 ┊ 00 00 85 01 02 02 00 54 │••ו•ו•┊⋄⋄ו••⋄T│
-│00021fe0│ 02 00 42 42 02 ca 04 00 ┊ 8e 01 02 00 79 79 02 ca │•⋄BB•ו⋄┊ו•⋄yy•×│
-│00021ff0│ 04 00 48 02 00 36 36 02 ┊ ca 04 00 82 01 02 00 6d │•⋄H•⋄66•┊ו⋄ו•⋄m│
-│00022000│ 6d 02 ca 04 00 3c 02 00 ┊ 2a 2a 02 ca 04 00 74 02 │m•ו⋄<•⋄┊**•ו⋄t•│
-│00022010│ 00 61 61 02 ca 04 00 30 ┊ 02 00 1e 1e 02 ca 04 00 │⋄aa•ו⋄0┊•⋄•••ו⋄│
-│00022020│ 67 02 00 55 55 02 ca 04 ┊ 00 24 02 00 12 12 02 ca │g•⋄UU•ו┊⋄$•⋄•••×│
-│00022030│ 04 00 5b 02 00 49 49 02 ┊ ca 04 00 18 02 00 06 06 │•⋄[•⋄II•┊ו⋄••⋄••│
-│00022040│ 02 ca 04 00 4f 02 00 3d ┊ 3d 02 ca 04 00 89 01 02 │•ו⋄O•⋄=┊=•ו⋄ו•│
-│00022050│ 00 74 74 02 ca 04 00 43 ┊ 02 00 31 31 02 ca 04 00 │⋄tt•ו⋄C┊•⋄11•ו⋄│
-│00022060│ 7c 02 00 68 68 02 ca 04 ┊ 00 37 02 00 25 25 02 ca │|•⋄hh•ו┊⋄7•⋄%%•×│
-│00022070│ 04 00 6f 02 00 5c 5c 02 ┊ ca 04 00 2b 02 00 19 19 │•⋄o•⋄\\•┊ו⋄+•⋄••│
-│00022080│ 02 ca 04 00 62 02 00 50 ┊ 50 02 ca 04 00 1f 02 00 │•ו⋄b•⋄P┊P•ו⋄••⋄│
-│00022090│ 0d 0d 02 ca 04 02 c3 02 ┊ 01 00 00 87 01 02 02 00 │__•ו•ו┊•⋄⋄ו••⋄│
-│000220a0│ 56 02 00 44 44 02 ca 04 ┊ 00 01 02 00 01 01 02 08 │V•⋄DD•ו┊⋄••⋄••••│
-│000220b0│ 00 90 01 02 00 7b 7b 02 ┊ ca 04 00 4a 02 00 38 38 │⋄ו•⋄{{•┊ו⋄J•⋄88│
-│000220c0│ 02 ca 04 00 84 01 02 00 ┊ 6f 6f 02 ca 04 00 3e 02 │•ו⋄ו•⋄┊oo•ו⋄>•│
-│000220d0│ 00 2c 2c 02 ca 04 00 76 ┊ 02 00 63 63 02 ca 04 00 │⋄,,•ו⋄v┊•⋄cc•ו⋄│
-│000220e0│ 32 02 00 20 20 02 ca 04 ┊ 00 69 02 00 57 57 02 ca │2•⋄ •ו┊⋄i•⋄WW•×│
-│000220f0│ 04 00 26 02 00 14 14 02 ┊ ca 04 00 5d 02 00 4b 4b │•⋄&•⋄•••┊ו⋄]•⋄KK│
-│00022100│ 02 ca 04 00 1a 02 00 08 ┊ 08 02 ca 04 00 51 02 00 │•ו⋄••⋄•┊••ו⋄Q•⋄│
+│00021ef0│ 4c 4c 02 ca 04 00 1b 02 ┊ 00 09 09 02 ca 04 00 52 │LL•ו⋄••┊⋄__•ו⋄R│
+│00021f00│ 02 00 40 40 02 ca 04 00 ┊ 8c 01 02 00 77 77 02 ca │•⋄@@•ו⋄┊ו•⋄ww•×│
+│00021f10│ 04 00 46 02 00 34 34 02 ┊ ca 04 00 7f 02 00 6b 6b │•⋄F•⋄44•┊ו⋄••⋄kk│
+│00021f20│ 02 ca 04 00 3a 02 00 28 ┊ 28 02 ca 04 00 72 02 00 │•ו⋄:•⋄(┊(•ו⋄r•⋄│
+│00021f30│ 5f 5f 02 ca 04 00 2e 02 ┊ 00 1c 1c 02 ca 04 00 65 │__•ו⋄.•┊⋄•••ו⋄e│
+│00021f40│ 02 00 53 53 02 ca 04 00 ┊ 22 02 00 10 10 02 ca 04 │•⋄SS•ו⋄┊"•⋄•••ו│
+│00021f50│ 02 94 04 01 00 00 8a 01 ┊ 02 02 00 59 02 00 47 47 │•ו•⋄⋄ו┊••⋄Y•⋄GG│
+│00021f60│ 02 ca 04 00 16 02 00 04 ┊ 04 02 ca 04 00 93 01 02 │•ו⋄••⋄•┊••ו⋄ו•│
+│00021f70│ 00 7e 7e 02 ca 04 00 4d ┊ 02 00 3b 3b 02 ca 04 00 │⋄~~•ו⋄M┊•⋄;;•ו⋄│
+│00021f80│ 87 01 02 00 72 72 02 ca ┊ 04 00 41 02 00 2f 2f 02 │ו•⋄rr•×┊•⋄A•⋄//•│
+│00021f90│ ca 04 00 7a 02 00 66 66 ┊ 02 ca 04 00 35 02 00 23 │ו⋄z•⋄ff┊•ו⋄5•⋄#│
+│00021fa0│ 23 02 ca 04 00 6c 02 00 ┊ 5a 5a 02 ca 04 00 29 02 │#•ו⋄l•⋄┊ZZ•ו⋄)•│
+│00021fb0│ 00 17 17 02 ca 04 00 60 ┊ 02 00 4e 4e 02 ca 04 00 │⋄•••ו⋄`┊•⋄NN•ו⋄│
+│00021fc0│ 1d 02 00 0b 0b 02 ca 04 ┊ 02 c3 02 01 00 00 85 01 │••⋄•••ו┊•ו•⋄⋄ו│
+│00021fd0│ 02 02 00 54 02 00 42 42 ┊ 02 ca 04 00 8e 01 02 00 │••⋄T•⋄BB┊•ו⋄ו•⋄│
+│00021fe0│ 79 79 02 ca 04 00 48 02 ┊ 00 36 36 02 ca 04 00 82 │yy•ו⋄H•┊⋄66•ו⋄×│
+│00021ff0│ 01 02 00 6d 6d 02 ca 04 ┊ 00 3c 02 00 2a 2a 02 ca │••⋄mm•ו┊⋄<•⋄**•×│
+│00022000│ 04 00 74 02 00 61 61 02 ┊ ca 04 00 30 02 00 1e 1e │•⋄t•⋄aa•┊ו⋄0•⋄••│
+│00022010│ 02 ca 04 00 67 02 00 55 ┊ 55 02 ca 04 00 24 02 00 │•ו⋄g•⋄U┊U•ו⋄$•⋄│
+│00022020│ 12 12 02 ca 04 00 5b 02 ┊ 00 49 49 02 ca 04 00 18 │•••ו⋄[•┊⋄II•ו⋄•│
+│00022030│ 02 00 06 06 02 ca 04 00 ┊ 4f 02 00 3d 3d 02 ca 04 │•⋄•••ו⋄┊O•⋄==•ו│
+│00022040│ 00 89 01 02 00 74 74 02 ┊ ca 04 00 43 02 00 31 31 │⋄ו•⋄tt•┊ו⋄C•⋄11│
+│00022050│ 02 ca 04 00 7c 02 00 68 ┊ 68 02 ca 04 00 37 02 00 │•ו⋄|•⋄h┊h•ו⋄7•⋄│
+│00022060│ 25 25 02 ca 04 00 6f 02 ┊ 00 5c 5c 02 ca 04 00 2b │%%•ו⋄o•┊⋄\\•ו⋄+│
+│00022070│ 02 00 19 19 02 ca 04 00 ┊ 62 02 00 50 50 02 ca 04 │•⋄•••ו⋄┊b•⋄PP•ו│
+│00022080│ 00 1f 02 00 0d 0d 02 ca ┊ 04 02 ef 07 01 00 00 87 │⋄••⋄__•×┊••ו•⋄⋄×│
+│00022090│ 01 02 02 00 56 02 00 44 ┊ 44 02 ca 04 00 01 02 00 │•••⋄V•⋄D┊D•ו⋄••⋄│
+│000220a0│ 01 01 02 08 00 90 01 02 ┊ 00 7b 7b 02 ca 04 00 4a │••••⋄ו•┊⋄{{•ו⋄J│
+│000220b0│ 02 00 38 38 02 ca 04 00 ┊ 84 01 02 00 6f 6f 02 ca │•⋄88•ו⋄┊ו•⋄oo•×│
+│000220c0│ 04 00 3e 02 00 2c 2c 02 ┊ ca 04 00 76 02 00 63 63 │•⋄>•⋄,,•┊ו⋄v•⋄cc│
+│000220d0│ 02 ca 04 00 32 02 00 20 ┊ 20 02 ca 04 00 69 02 00 │•ו⋄2•⋄ ┊ •ו⋄i•⋄│
+│000220e0│ 57 57 02 ca 04 00 26 02 ┊ 00 14 14 02 ca 04 00 5d │WW•ו⋄&•┊⋄•••ו⋄]│
+│000220f0│ 02 00 4b 4b 02 ca 04 00 ┊ 1a 02 00 08 08 02 ca 04 │•⋄KK•ו⋄┊••⋄•••ו│
+│00022100│ 02 b3 a3 01 02 00 82 01 ┊ 82 01 02 02 00 51 02 00 │•×ו•⋄ו┊ו••⋄Q•⋄│
│00022110│ 3f 3f 02 ca 04 00 8b 01 ┊ 02 00 76 76 02 ca 04 00 │??•ו⋄ו┊•⋄vv•ו⋄│
│00022120│ 45 02 00 33 33 02 ca 04 ┊ 00 7e 02 00 6a 6a 02 ca │E•⋄33•ו┊⋄~•⋄jj•×│
│00022130│ 04 00 39 02 00 27 27 02 ┊ ca 04 00 71 02 00 5e 5e │•⋄9•⋄''•┊ו⋄q•⋄^^│
│00022140│ 02 ca 04 00 2d 02 00 1b ┊ 1b 02 ca 04 00 64 02 00 │•ו⋄-•⋄•┊••ו⋄d•⋄│
-│00022150│ 52 52 02 ca 04 00 21 02 ┊ 00 0f 0f 02 ca 04 02 94 │RR•ו⋄!•┊⋄•••ו•×│
-│00022160│ 04 01 00 00 89 01 02 02 ┊ 00 58 02 00 46 46 02 ca │••⋄⋄ו••┊⋄X•⋄FF•×│
+│00022150│ 52 52 02 ca 04 00 21 02 ┊ 00 0f 0f 02 ca 04 02 d6 │RR•ו⋄!•┊⋄•••ו•×│
+│00022160│ 02 01 00 00 89 01 02 02 ┊ 00 58 02 00 46 46 02 ca │••⋄⋄ו••┊⋄X•⋄FF•×│
│00022170│ 04 00 92 01 02 00 7d 7d ┊ 02 ca 04 00 4c 02 00 3a │•⋄ו•⋄}}┊•ו⋄L•⋄:│
│00022180│ 3a 02 ca 04 00 86 01 02 ┊ 00 71 71 02 ca 04 00 40 │:•ו⋄ו•┊⋄qq•ו⋄@│
│00022190│ 02 00 2e 2e 02 ca 04 00 ┊ 79 02 00 65 65 02 ca 04 │•⋄..•ו⋄┊y•⋄ee•ו│
@@ -8488,14 +8488,14 @@
│000221f0│ 02 ca 04 00 3b 02 00 29 ┊ 29 02 ca 04 00 73 02 00 │•ו⋄;•⋄)┊)•ו⋄s•⋄│
│00022200│ 60 60 02 ca 04 00 2f 02 ┊ 00 1d 1d 02 ca 04 00 66 │``•ו⋄/•┊⋄•••ו⋄f│
│00022210│ 02 00 54 54 02 ca 04 00 ┊ 23 02 00 11 11 02 ca 04 │•⋄TT•ו⋄┊#•⋄•••ו│
-│00022220│ 02 d6 02 01 00 00 8b 01 ┊ 02 02 00 5a 02 00 48 48 │•ו•⋄⋄ו┊••⋄Z•⋄HH│
+│00022220│ 02 e9 01 01 00 00 8b 01 ┊ 02 02 00 5a 02 00 48 48 │•ו•⋄⋄ו┊••⋄Z•⋄HH│
│00022230│ 02 ca 04 00 17 02 00 05 ┊ 05 02 ca 04 00 4e 02 00 │•ו⋄••⋄•┊••ו⋄N•⋄│
│00022240│ 3c 3c 02 ca 04 00 88 01 ┊ 02 00 73 73 02 ca 04 00 │<<•ו⋄ו┊•⋄ss•ו⋄│
│00022250│ 42 02 00 30 30 02 ca 04 ┊ 00 7b 02 00 67 67 02 ca │B•⋄00•ו┊⋄{•⋄gg•×│
│00022260│ 04 00 36 02 00 24 24 02 ┊ ca 04 00 6e 02 00 5b 5b │•⋄6•⋄$$•┊ו⋄n•⋄[[│
│00022270│ 02 ca 04 00 2a 02 00 18 ┊ 18 02 ca 04 00 61 02 00 │•ו⋄*•⋄•┊••ו⋄a•⋄│
-│00022280│ 4f 4f 02 ca 04 00 1e 02 ┊ 00 0c 0c 02 ca 04 02 ef │OO•ו⋄••┊⋄__•ו•×│
-│00022290│ 07 01 00 00 86 01 02 02 ┊ 00 55 02 00 43 43 02 ca │••⋄⋄ו••┊⋄U•⋄CC•×│
+│00022280│ 4f 4f 02 ca 04 00 1e 02 ┊ 00 0c 0c 02 ca 04 02 da │OO•ו⋄••┊⋄__•ו•×│
+│00022290│ 02 01 00 00 86 01 02 02 ┊ 00 55 02 00 43 43 02 ca │••⋄⋄ו••┊⋄U•⋄CC•×│
│000222a0│ 04 00 00 00 13 62 25 00 ┊ 13 62 25 00 02 01 00 00 │•⋄⋄⋄•b%⋄┊•b%⋄••⋄⋄│
│000222b0│ 00 00 00 00 00 00 00 00 ┊ 00 00 00 00 00 00 00 00 │⋄⋄⋄⋄⋄⋄⋄⋄┊⋄⋄⋄⋄⋄⋄⋄⋄│
│000222c0│ 00 00 00 00 01 02 02 7b ┊ 00 11 17 cd cf 00 7d e5 │⋄⋄⋄⋄•••{┊⋄••××⋄}×│
@@ -8958,33 +8958,33 @@
│00023f50│ 02 cf 01 00 0a 1f 6e 1d ┊ 01 00 17 ed 18 01 01 01 │•ו⋄_•n•┊•⋄•ו•••│
│00023f60│ fd c2 08 02 01 02 9f 17 ┊ 01 02 ed 16 00 00 01 28 │×ו•••ו┊••ו⋄⋄•(│
│00023f70│ d0 ab 39 36 e0 b8 23 05 ┊ f3 fa 8a f5 c3 51 98 00 │××96××#•┊×××××Q×⋄│
-│00023f80│ 00 02 00 00 00 02 08 25 ┊ 3e 02 4e 1f 02 00 00 00 │⋄•⋄⋄⋄••%┊>•N••⋄⋄⋄│
-│00023f90│ 00 00 00 15 1e 02 d9 20 ┊ 02 e0 1e 02 e4 1c 02 a9 │⋄⋄⋄•••× ┊•ו•ו•×│
-│00023fa0│ 1f 02 9b 1d 02 6a 20 02 ┊ 71 1e 02 2b 21 02 32 1f │••ו•j •┊q••+!•2•│
-│00023fb0│ 02 36 1d 02 fb 1f 02 f9 ┊ 1d 02 bd 20 02 ce 1e 02 │•6••ו•×┊••× •ו•│
-│00023fc0│ d2 1c 02 97 1f 02 89 1d ┊ 02 58 20 02 5f 1e 02 19 │ו•ו•ו┊•X •_•••│
-│00023fd0│ 21 02 20 1f 02 24 1d 02 ┊ e9 1f 02 e7 1d 02 ab 20 │!• ••$••┊ו•ו•× │
-│00023fe0│ 02 bc 1e 02 c0 1c 02 85 ┊ 1f 02 77 1d 02 46 20 02 │•ו•ו•×┊••w••F •│
-│00023ff0│ 4d 1e 02 07 21 02 0e 1f ┊ 02 12 1d 02 d7 1f 02 d5 │M•••!•••┊••••ו•×│
-│00024000│ 1d 02 99 20 02 aa 1e 02 ┊ ae 1c 02 73 1f 02 65 1d │••× •ו•┊ו•s••e•│
-│00024010│ 02 34 20 02 3b 1e 02 f5 ┊ 20 02 fc 1e 02 00 1d 02 │•4 •;••×┊ •ו•⋄••│
-│00024020│ c5 1f 02 c3 1d 02 86 20 ┊ 02 97 1e 02 9b 1c 02 60 │ו•ו•× ┊•ו•ו•`│
-│00024030│ 1f 02 52 1d 02 21 20 02 ┊ 28 1e 02 e2 20 02 e9 1e │••R••! •┊(••× •ו│
-│00024040│ 02 ed 1c 02 b2 1f 02 b0 ┊ 1d 02 73 20 02 84 1e 02 │•ו•ו•×┊••s •ו•│
-│00024050│ 3e 21 02 45 1f 02 3f 1d ┊ 02 0e 20 02 0c 1e 02 d0 │>!•E••?•┊•• •_••×│
-│00024060│ 20 02 d7 1e 02 db 1c 02 ┊ a0 1f 02 92 1d 02 61 20 │ •ו•ו•┊ו•ו•a │
-│00024070│ 02 68 1e 02 22 21 02 29 ┊ 1f 02 2d 1d 02 f2 1f 02 │•h••"!•)┊••-••ו•│
-│00024080│ f0 1d 02 b4 20 02 c5 1e ┊ 02 c9 1c 02 8e 1f 02 80 │ו•× •ו┊•ו•ו•×│
-│00024090│ 1d 02 4f 20 02 56 1e 02 ┊ 10 21 02 17 1f 02 1b 1d │••O •V••┊•!••••••│
-│000240a0│ 02 e0 1f 02 de 1d 02 a2 ┊ 20 02 b3 1e 02 b7 1c 02 │•ו•ו•×┊ •ו•ו•│
-│000240b0│ 7c 1f 02 6e 1d 02 3d 20 ┊ 02 44 1e 02 fe 20 02 05 │|••n••= ┊•D••× ••│
-│000240c0│ 1f 02 09 1d 02 ce 1f 02 ┊ cc 1d 02 8f 20 02 a0 1e │••_••ו•┊ו•× •ו│
-│000240d0│ 02 a4 1c 02 69 1f 02 5b ┊ 1d 02 2a 20 02 31 1e 02 │•ו•i••[┊••* •1••│
-│000240e0│ eb 20 02 f2 1e 02 f6 1c ┊ 02 bb 1f 02 b9 1d 02 7c │× •ו•ו┊•ו•ו•|│
-│000240f0│ 20 02 8d 1e 02 91 1c 02 ┊ 56 1f 02 48 1d 02 17 20 │ •ו•ו•┊V••H••• │
-│00024100│ 02 1e 1e 02 00 00 00 00 ┊ 00 00 00 00 00 00 00 00 │••••⋄⋄⋄⋄┊⋄⋄⋄⋄⋄⋄⋄⋄│
-│00024110│ a4 1d 02 00 00 00 7a 1e ┊ 02 34 21 02 3b 1f 02 00 │ו•⋄⋄⋄z•┊•4!•;••⋄│
-│00024120│ 00 00 04 20 02 02 1e 02 ┊ c6 20 02 47 21 02 18 32 │⋄⋄• ••••┊× •G!••2│
+│00023f80│ 00 02 00 00 00 02 08 25 ┊ 3e 02 42 1f 02 00 00 00 │⋄•⋄⋄⋄••%┊>•B••⋄⋄⋄│
+│00023f90│ 00 00 00 09 1e 02 d9 20 ┊ 02 d4 1e 02 e4 1c 02 9d │⋄⋄⋄_••× ┊•ו•ו•×│
+│00023fa0│ 1f 02 9b 1d 02 6a 20 02 ┊ 65 1e 02 2b 21 02 26 1f │••ו•j •┊e••+!•&•│
+│00023fb0│ 02 36 1d 02 fb 1f 02 ed ┊ 1d 02 bd 20 02 c2 1e 02 │•6••ו•×┊••× •ו•│
+│00023fc0│ d2 1c 02 8b 1f 02 89 1d ┊ 02 58 20 02 53 1e 02 19 │ו•ו•ו┊•X •S•••│
+│00023fd0│ 21 02 14 1f 02 24 1d 02 ┊ e9 1f 02 db 1d 02 ab 20 │!••••$••┊ו•ו•× │
+│00023fe0│ 02 b0 1e 02 c0 1c 02 79 ┊ 1f 02 77 1d 02 46 20 02 │•ו•ו•y┊••w••F •│
+│00023ff0│ 41 1e 02 07 21 02 02 1f ┊ 02 12 1d 02 d7 1f 02 c9 │A•••!•••┊••••ו•×│
+│00024000│ 1d 02 99 20 02 9e 1e 02 ┊ ae 1c 02 67 1f 02 65 1d │••× •ו•┊ו•g••e•│
+│00024010│ 02 34 20 02 2f 1e 02 f5 ┊ 20 02 f0 1e 02 00 1d 02 │•4 •/••×┊ •ו•⋄••│
+│00024020│ c5 1f 02 b7 1d 02 86 20 ┊ 02 8b 1e 02 9b 1c 02 54 │ו•ו•× ┊•ו•ו•T│
+│00024030│ 1f 02 52 1d 02 21 20 02 ┊ 1c 1e 02 e2 20 02 dd 1e │••R••! •┊•••× •ו│
+│00024040│ 02 ed 1c 02 b2 1f 02 a4 ┊ 1d 02 73 20 02 78 1e 02 │•ו•ו•×┊••s •x••│
+│00024050│ 3e 21 02 39 1f 02 3f 1d ┊ 02 0e 20 02 00 1e 02 d0 │>!•9••?•┊•• •⋄••×│
+│00024060│ 20 02 cb 1e 02 db 1c 02 ┊ 94 1f 02 92 1d 02 61 20 │ •ו•ו•┊ו•ו•a │
+│00024070│ 02 5c 1e 02 22 21 02 1d ┊ 1f 02 2d 1d 02 f2 1f 02 │•\••"!••┊••-••ו•│
+│00024080│ e4 1d 02 b4 20 02 b9 1e ┊ 02 c9 1c 02 82 1f 02 80 │ו•× •ו┊•ו•ו•×│
+│00024090│ 1d 02 4f 20 02 4a 1e 02 ┊ 10 21 02 0b 1f 02 1b 1d │••O •J••┊•!••••••│
+│000240a0│ 02 e0 1f 02 d2 1d 02 a2 ┊ 20 02 a7 1e 02 b7 1c 02 │•ו•ו•×┊ •ו•ו•│
+│000240b0│ 70 1f 02 6e 1d 02 3d 20 ┊ 02 38 1e 02 fe 20 02 f9 │p••n••= ┊•8••× •×│
+│000240c0│ 1e 02 09 1d 02 ce 1f 02 ┊ c0 1d 02 8f 20 02 94 1e │••_••ו•┊ו•× •ו│
+│000240d0│ 02 a4 1c 02 5d 1f 02 5b ┊ 1d 02 2a 20 02 25 1e 02 │•ו•]••[┊••* •%••│
+│000240e0│ eb 20 02 e6 1e 02 f6 1c ┊ 02 bb 1f 02 ad 1d 02 7c │× •ו•ו┊•ו•ו•|│
+│000240f0│ 20 02 81 1e 02 91 1c 02 ┊ 4a 1f 02 48 1d 02 17 20 │ •ו•ו•┊J••H••• │
+│00024100│ 02 12 1e 02 00 00 00 00 ┊ 00 00 00 00 00 a6 1f 02 │••••⋄⋄⋄⋄┊⋄⋄⋄⋄⋄ו•│
+│00024110│ 00 00 00 00 00 00 6e 1e ┊ 02 34 21 02 2f 1f 02 00 │⋄⋄⋄⋄⋄⋄n•┊•4!•/••⋄│
+│00024120│ 00 00 04 20 02 f6 1d 02 ┊ c6 20 02 47 21 02 18 32 │⋄⋄• •ו•┊× •G!••2│
│00024130│ 02 2c 26 02 bb 36 02 c0 ┊ 2a 02 62 3b 02 5e 2f 02 │•,&•×6•×┊*•b;•^/•│
│00024140│ bb 23 02 31 34 02 40 28 ┊ 02 e3 38 02 ec 2c 02 8f │×#•14•@(┊•×8•×,•×│
│00024150│ 3d 02 89 31 02 9e 25 02 ┊ 29 36 02 31 2a 02 d3 3a │=•×1•×%•┊)6•1*•×:│
``` | T-compiler,C-bug,A-reproducibility,WG-compiler-parallel | low | Critical |
2,465,980,357 | rust | ICE: `InterpErrorInfo(InterpErrorInfoInner { kind: UndefinedBehavior(BoundsCheckFailed` | <!--
ICE: Rustc ./a.rs '-Zmir-opt-level=5 -Zvalidate-mir -ooutputfile -Zdump-mir-dir=dir' 'thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:433:60: 'called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: UndefinedBehavior(BoundsCheckFailed { len: 4, index: 4 }), backtrace: InterpErrorBacktrace { backtrace: None } })'', 'thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:433:60: 'called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: UndefinedBehavior(BoundsCheckFailed { len: 4, index: 4 }), backtrace: InterpErrorBacktrace { backtrace: None } })''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
pub fn function_with_bytes<const BYTES: &'static [u8; 4]>() -> &'static [u8] {
BYTES
}
pub fn main() {
assert_eq!(function_with_bytes::<b"AAAAb">(), &[0x41, 0x41, 0x41, 0x41]);
}
````
original:
````rust
// skip-filecheck
// EMIT_MIR_FOR_EACH_BIT_WIDTH
#![feature(adt_const_params, unsized_const_params)]
#![allow(incomplete_features)]
pub fn function_with_bytes<const BYTES: &'static [u8; 4]>() -> &'static [u8] {
BYTES
}
// EMIT_MIR_FOR_EACH_BIT_WIDTH
pub fn main() {
assert_eq!(function_with_bytes::<b"AAAAb">(), &[0x41, 0x41, 0x41, 0x41]);
assert_eq!(function_with_bytes::<{ &[0x41, 0x41, 0x41, 0x41] }>(), b"AAAA");
}
````
Version information
````
rustc 1.82.0-nightly (fbce03b19 2024-08-14)
binary: rustc
commit-hash: fbce03b195c02e425fbb12276b8f02349048a75f
commit-date: 2024-08-14
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir`
<details><summary><strong>Program output</strong></summary>
<p>
```
error: `&'static [u8; 4]` is forbidden as the type of a const generic parameter
--> /tmp/icemaker_global_tempdir.UIvWXNcW58ZC/rustc_testrunner_tmpdir_reporting.bReUy5HRdRlL/mvce.rs:1:41
|
1 | pub fn function_with_bytes<const BYTES: &'static [u8; 4]>() -> &'static [u8] {
| ^^^^^^^^^^^^^^^^
|
= note: the only supported types are integers, `bool` and `char`
help: add `#![feature(adt_const_params)]` to the crate attributes to enable more complex and user defined types
|
1 + #![feature(adt_const_params)]
|
help: add `#![feature(unsized_const_params)]` to the crate attributes to enable references to implement the `ConstParamTy` trait
|
1 + #![feature(unsized_const_params)]
|
error[E0308]: mismatched types
--> /tmp/icemaker_global_tempdir.UIvWXNcW58ZC/rustc_testrunner_tmpdir_reporting.bReUy5HRdRlL/mvce.rs:6:38
|
6 | assert_eq!(function_with_bytes::<b"AAAAb">(), &[0x41, 0x41, 0x41, 0x41]);
| ^^^^^^^^ expected an array with a fixed size of 4 elements, found one with 5 elements
thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:433:60:
called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: UndefinedBehavior(BoundsCheckFailed { len: 4, index: 4 }), backtrace: InterpErrorBacktrace { backtrace: None } })
stack backtrace:
0: 0x73319a7b666d - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h560b4d28c137b093
1: 0x73319b004f2f - core::fmt::write::h99766545c4efef9d
2: 0x73319bfb0ed1 - std::io::Write::write_fmt::h8e3cbf6208612263
3: 0x73319a7b8d4b - std::panicking::default_hook::{{closure}}::h6bf6ccd32e88a7b7
4: 0x73319a7b89be - std::panicking::default_hook::ha56d0025017107a4
5: 0x733199946299 - std[265a0665abe5e043]::panicking::update_hook::<alloc[f7eab8ff756c9dca]::boxed::Box<rustc_driver_impl[18c1de8e11281817]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x73319a7b9667 - std::panicking::rust_panic_with_hook::h53b891e816ad5807
7: 0x73319a7b9327 - std::panicking::begin_panic_handler::{{closure}}::h3012610e5c310f7d
8: 0x73319a7b6b29 - std::sys::backtrace::__rust_end_short_backtrace::h66811dbaa784350e
9: 0x73319a7b8ff4 - rust_begin_unwind
10: 0x733197699b63 - core::panicking::panic_fmt::he2d7dd7c7f53990c
11: 0x733197772286 - core::result::unwrap_failed::h5d31905b634d5ea8
12: 0x73319bd23d85 - rustc_const_eval[4703571d7d15c956]::const_eval::valtrees::valtree_into_mplace
13: 0x73319bd2381b - rustc_const_eval[4703571d7d15c956]::const_eval::valtrees::valtree_to_ref
14: 0x73319bc017a6 - rustc_const_eval[4703571d7d15c956]::const_eval::valtrees::valtree_to_const_value
15: 0x73319bc01561 - <rustc_const_eval[4703571d7d15c956]::provide::{closure#1} as core[12164080e42249fc]::ops::function::FnOnce<(rustc_middle[d7f4792719c666e4]::ty::context::TyCtxt, (rustc_middle[d7f4792719c666e4]::ty::Ty, rustc_middle[d7f4792719c666e4]::ty::consts::valtree::ValTree))>>::call_once
16: 0x73319bc0152e - rustc_query_impl[c2f5f95cecf69337]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c2f5f95cecf69337]::query_impl::valtree_to_const_val::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 24usize]>>
17: 0x73319bc014e7 - <rustc_query_impl[c2f5f95cecf69337]::query_impl::valtree_to_const_val::dynamic_query::{closure#2} as core[12164080e42249fc]::ops::function::FnOnce<(rustc_middle[d7f4792719c666e4]::ty::context::TyCtxt, (rustc_middle[d7f4792719c666e4]::ty::Ty, rustc_middle[d7f4792719c666e4]::ty::consts::valtree::ValTree))>>::call_once
18: 0x73319bc005ba - rustc_query_system[f8c10878fe801c76]::query::plumbing::try_execute_query::<rustc_query_impl[c2f5f95cecf69337]::DynamicConfig<rustc_query_system[f8c10878fe801c76]::query::caches::DefaultCache<(rustc_middle[d7f4792719c666e4]::ty::Ty, rustc_middle[d7f4792719c666e4]::ty::consts::valtree::ValTree), rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[c2f5f95cecf69337]::plumbing::QueryCtxt, false>
19: 0x73319bc0030a - rustc_query_impl[c2f5f95cecf69337]::query_impl::valtree_to_const_val::get_query_non_incr::__rust_end_short_backtrace
20: 0x73319bab3781 - <rustc_mir_transform[4b24bac2940eeb7a]::gvn::VnState>::insert
21: 0x73319baab3fd - <rustc_mir_transform[4b24bac2940eeb7a]::gvn::VnState>::simplify_operand
22: 0x73319baa9718 - <rustc_mir_transform[4b24bac2940eeb7a]::gvn::VnState>::simplify_rvalue
23: 0x733198b21d20 - <rustc_mir_transform[4b24bac2940eeb7a]::gvn::GVN as rustc_middle[d7f4792719c666e4]::mir::MirPass>::run_pass
24: 0x73319b002151 - rustc_mir_transform[4b24bac2940eeb7a]::pass_manager::run_passes_inner
25: 0x73319bb2ccb3 - rustc_mir_transform[4b24bac2940eeb7a]::optimized_mir
26: 0x73319bb5039b - rustc_query_impl[c2f5f95cecf69337]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c2f5f95cecf69337]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 8usize]>>
27: 0x73319b02af27 - rustc_query_system[f8c10878fe801c76]::query::plumbing::try_execute_query::<rustc_query_impl[c2f5f95cecf69337]::DynamicConfig<rustc_query_system[f8c10878fe801c76]::query::caches::DefIdCache<rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[c2f5f95cecf69337]::plumbing::QueryCtxt, false>
28: 0x73319b02a4df - rustc_query_impl[c2f5f95cecf69337]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
29: 0x733197984814 - <rustc_middle[d7f4792719c666e4]::ty::context::TyCtxt>::instance_mir
30: 0x73319b3b63f9 - rustc_interface[8b2190a255f69c87]::passes::run_required_analyses
31: 0x73319bb663de - rustc_interface[8b2190a255f69c87]::passes::analysis
32: 0x73319bb663b1 - rustc_query_impl[c2f5f95cecf69337]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c2f5f95cecf69337]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 1usize]>>
33: 0x73319bf706ee - rustc_query_system[f8c10878fe801c76]::query::plumbing::try_execute_query::<rustc_query_impl[c2f5f95cecf69337]::DynamicConfig<rustc_query_system[f8c10878fe801c76]::query::caches::SingleCache<rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[c2f5f95cecf69337]::plumbing::QueryCtxt, false>
34: 0x73319bf7044f - rustc_query_impl[c2f5f95cecf69337]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
35: 0x73319bddd169 - rustc_interface[8b2190a255f69c87]::interface::run_compiler::<core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>, rustc_driver_impl[18c1de8e11281817]::run_compiler::{closure#0}>::{closure#1}
36: 0x73319bd025d0 - std[265a0665abe5e043]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8b2190a255f69c87]::util::run_in_thread_with_globals<rustc_interface[8b2190a255f69c87]::util::run_in_thread_pool_with_globals<rustc_interface[8b2190a255f69c87]::interface::run_compiler<core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>, rustc_driver_impl[18c1de8e11281817]::run_compiler::{closure#0}>::{closure#1}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>
37: 0x73319bd02c3a - <<std[265a0665abe5e043]::thread::Builder>::spawn_unchecked_<rustc_interface[8b2190a255f69c87]::util::run_in_thread_with_globals<rustc_interface[8b2190a255f69c87]::util::run_in_thread_pool_with_globals<rustc_interface[8b2190a255f69c87]::interface::run_compiler<core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>, rustc_driver_impl[18c1de8e11281817]::run_compiler::{closure#0}>::{closure#1}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#1} as core[12164080e42249fc]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
38: 0x73319bd02fab - std::sys::pal::unix::thread::Thread::new::thread_start::hbf34cdaead1142d4
39: 0x73319d45539d - <unknown>
40: 0x73319d4da49c - <unknown>
41: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.82.0-nightly (fbce03b19 2024-08-14) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z dump-mir-dir=dir
query stack during panic:
#0 [valtree_to_const_val] converting type-level constant value to mir constant value
#1 [optimized_mir] optimizing MIR for `main`
end of query stack
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0308`.
```
</p>
</details>
@rustbot label +F-adt_const_params +F-unsized_const_params | I-ICE,T-compiler,C-bug,A-mir-opt,F-adt_const_params,S-bug-has-test,F-unsized_const_params,A-mir-opt-GVN | low | Critical |
2,465,992,147 | pytorch | [BE] Deduplicate auto_functionalized and triton_kernel_wrapper_functional | These do the same thing - they're a functional wrapper around something that is mutable.
cc @ezyang @chauhang @penguinwu @bdhirsh @oulgen @aakhundov | triaged,oncall: pt2,module: pt2-dispatcher,module: user triton | low | Minor |
2,466,007,380 | angular | Duplicate trusted type policy error when using an angular app with a webcomponent that embeds an angular app | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
- We have an Angular application that uses an Angular web component.
- We do not allow duplicate trusted type policies in the CSP header (and we would like to keep it that way).
- This causes an issue, because Angular cannot create its trusted type policy in the web component (it is already created by the main app), and the web component cannot work properly.
- Is there a workaround (apart from allowing duplicates)?
- Is there a way to rename the policy that Angular creates (so we can avoid creating a duplicate policy)?
This affects every Angular app that
- embeds an Angular element
- has trusted types enabled and does not allow duplicates
### Please provide a link to a minimal reproduction of the bug
### Please provide the exception or error you saw
```
TypeError: Failed to execute 'createPolicy' on 'TrustedTypePolicyFactory': Policy with name "angular" already exists.
```
### Please provide the environment you discovered this bug in (run `ng version`)
_No response_
### Anything else?
_No response_ | area: elements | low | Critical |
2,466,029,956 | pytorch | Collect tensor shapes only when using record_shapes in profiler | ### 🚀 The feature, motivation and pitch
Currently, the record_shapes option in the PyTorch Profiler holds references to tensors until profiling is completed. This behavior can lead to increased memory usage and potential memory leaks, especially when profiling large-scale models or long-running tasks. Instead of keeping references to tensors, the profiler could capture the shapes of the tensors at the moment they are encountered during profiling.
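A minimal sketch of that idea in plain Python (a made-up `FakeTensor` stands in for a real tensor; this is not actual profiler code): recording `tuple(t.shape)` instead of the tensor itself lets the tensor be freed as soon as the caller drops it.

```python
import weakref

class FakeTensor:
    """Stand-in for a real tensor; only carries a shape."""
    def __init__(self, shape):
        self.shape = shape

def record_shape_only(events, t):
    # Proposed behavior: copy the shape metadata, keep no reference to t.
    events.append(tuple(t.shape))

events = []
t = FakeTensor((32, 3, 224, 224))
ref = weakref.ref(t)
record_shape_only(events, t)
del t                        # profiler holds no reference, so t is freed
assert ref() is None         # freed immediately, not at profiling end
assert events == [(32, 3, 224, 224)]
```

With the current behavior (appending `t` itself to `events`), the weak reference would still be alive after `del t`, which is exactly the retention problem described above.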
### Alternatives
_No response_
### Additional context
The tensors are held inside TorchOpStorage::torch_ops_::_input_outputs_
cc @robieta @chaekit @aaronenyeshi @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | oncall: profiler | low | Minor |
2,466,037,456 | go | proposal: golang.org/x/net/html: Add Tokenizer as Option to html.ParseWithOptions() | ### Proposal Details
### **Issue**
I was working with parsing HTML data, and I ran into an issue. When something like
`<![CDATA[ .... ]]>`
came up, the default html.Parse() method will parse that as an html comment. Calling html.Render() yields:
`<!--[CDATA[ .... ]]-->`
creating a comment that isn't useful.
String manipulation can be used to get around this, but I know that under the hood the html.Tokenizer has the method:
`func (z *Tokenizer) AllowCDATA(allowCDATA bool)`
which sets the tokenizer to process this properly.
### **Proposed Solution**
I propose that the user be allowed to set the html.Tokenizer used by the html.Parse()/html.ParseWithOptions() methods. Under the hood, ParseWithOptions() creates a new html.Tokenizer based on the io.Reader passed to it. If the user was also allowed to pass the Tokenizer used by the parser, the user could then set those options as appropriate/necessary, avoiding the above problem.
This could be solved by adding an html.ParseOption. Namely:
```
func ParseOptionWithTokenizer(tokenizer *Tokenizer) ParseOption {
    return func(p *parser) {
        p.tokenizer = tokenizer
    }
}
```
and this would be called like:
```
tokenizer := html.NewTokenizer(myReader)
tokenizer.AllowCDATA(true)
html.ParseWithOptions(myReader, html.ParseOptionWithTokenizer(tokenizer))
```
The name of the method can be changed as well to whatever makes more sense. | Proposal | low | Minor |
2,466,118,944 | vscode | Add quick fix for npm vulnerabilities | Suggestion from Brigit that we cover this type of output with a quick fix:

Currently npm quick fixes:
https://github.com/microsoft/vscode/blob/45feb8c9e43964dd9f16c40745c0d990e707ca55/extensions/npm/src/npmMain.ts#L74-L96 | feature-request,npm | low | Minor |
2,466,166,499 | pytorch | [randperm] Add argument to disallow identity mappings | ### 🚀 The feature, motivation and pitch
The `randperm` function is very useful, but in some scenarios I would like to disallow identity mappings; i.e. for a tensor with indices `i` and values `v`, the output of `randperm` should guarantee that `i != v` (a permutation with no fixed points, also known as a derangement).
The new definition of randperm, would then look something like
```
torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, allow_identity_mapping=True) or
torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, disallow_identity_mapping=False)
```
### Alternatives
_No response_
### Additional context
I have a simple implementation in python that does exactly this. It could serve as a wrapper around the internal `randperm` functions, by simply mutating the output of the underlying C/Cuda/.... If this, however, will not do, I am willing to attempt a foray into the depths of `ATen`.
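As a rough illustration of what such a wrapper could do (a sketch in plain Python with the standard `random` module rather than torch; the function name is made up): rejection-sample permutations until no index maps to itself. Since the fraction of permutations that are derangements tends to 1/e, only about e attempts are needed on average.

```python
import random

def randperm_no_fixed_points(n, rng=None):
    """Return a random permutation of range(n) with perm[i] != i for all i."""
    if n == 1:
        raise ValueError("no derangement exists for n == 1")
    rng = rng or random.Random()
    while True:
        perm = list(range(n))
        rng.shuffle(perm)
        # Accept only if no element stayed in its original position.
        if all(perm[i] != i for i in range(n)):
            return perm

perm = randperm_no_fixed_points(10)
assert sorted(perm) == list(range(10))       # it is a permutation
assert all(perm[i] != i for i in range(10))  # no identity mapping
```

The same loop could wrap the internal `randperm` kernels, rerunning them (or post-processing the output) until the no-fixed-point condition holds.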
---
Tagging you since you seem to be involved in the development of this part of the code base (based on previous issues).
@angelayi @SherlockNoMad
cc @pbelevich | triaged,module: random | low | Minor |
2,466,185,818 | kubernetes | Emulation version: Remove hard coded DefaultKubeBinaryVersion value. | Let's find a way to replace this hard coded version number with something we don't need to manually update each version:
https://github.com/kubernetes/kubernetes/blob/b6b7abc871a55ce26bc62c0d5452b73077364395/staging/src/k8s.io/component-base/version/base.go#L69
xref:
https://github.com/kubernetes/kubernetes/pull/126604/files#r1715877080 | sig/api-machinery,triage/accepted | low | Major |
2,466,201,691 | pytorch | [CPU] jx_nest_base AMP both inductor and eager performance regression in 2024-08-10 nightly release | ### 🐛 Describe the bug
<p>amp static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>jx_nest_base</td>
<td>multiple</td>
<td>32</td>
<td>1.634363</td>
<td>0.14318641</td>
<td>0.23401857060682998</td>
<td>45.792037</td>
<td>32.0</td>
<td>1.476906</td>
<td>0.09909578000000001</td>
<td>0.14635515205668</td>
<td>46.69523</td>
<td>1.11</td>
<td>0.63</td>
<td>0.69</td>
<td>1.02</td>
</tr>
</tbody>
</table>
<p>amp static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>jx_nest_base</td>
<td>multiple</td>
<td>32</td>
<td>1.701064</td>
<td>0.15762912199999998</td>
<td>0.268137224785808</td>
<td>51.498577</td>
<td>32</td>
<td>1.46995</td>
<td>0.11794990799999999</td>
<td>0.1733804672646</td>
<td>50.041447</td>
<td>1.16</td>
<td>0.65</td>
<td>0.75</td>
<td>0.97</td>
</tr>
</tbody>
</table>
the bad commit:
c7cfa5172139737bf75afbd4a7920b1a02b1dcb2
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance timm_models jx_nest_base amp first static cpp
Testing with cpp wrapper.
Testing with freezing on.
multi-threads testing....
loading model: 0it [00:01, ?it/s]
cpu eval jx_nest_base
running benchmark: 100%|████████████████████████████████████████████████████| 50/50 [00:18<00:00, 2.69it/s]
1.621x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,jx_nest_base,32,1.620818,141.661849,51.562021,0.983694,896.307200,911.164621,553,1,0,0,0,0,0
```
the last good commit
01cdcbf7c83cd84a3685766f7f8cd26ad447feae
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance timm_models jx_nest_base amp first static cpp
Testing with cpp wrapper.
Testing with freezing on.
multi-threads testing....
loading model: 0it [00:01, ?it/s]
cpu eval jx_nest_base
running benchmark: 100%|████████████████████████████████████████████████████| 50/50 [00:12<00:00, 4.16it/s]
1.437x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,jx_nest_base,32,1.436553,98.053498,47.794398,0.979158,693.798912,708.566630,553,1,0,0,0,0,0
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>23512dbe</td>
<td>main</td>
<td>23512dbe</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>a7912bf9dc39b934baf5e04b436cc2134776c10d</td>
<td>main</td>
<td>6ec4af6865dd884f984c9dbcb273ae26e3825481</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.4.0a0+b3f6f51</td>
<td>main</td>
<td>2.4.0a0+b3f6f51</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance timm_models jx_nest_base amp first static cpp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/c7cfa5172139737bf75afbd4a7920b1a02b1dcb2
[timm_models-jx_nest_base-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/16615577/timm_models-jx_nest_base-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log)
cc @ezyang @chauhang @penguinwu @WeizhuoZhang-intel @chuanqi129 | triaged,oncall: pt2,oncall: cpu inductor | low | Critical |
2,466,206,599 | ollama | Honor/use amdgpu.gttsize Kernel parameter to use all unified memory for AMD APU | Hi,
I do have a feature request.
**System**
- Ollama v0.3.6
- Fedora 40 (Kernel 6.10.3)
- AMD APU Ryzen 7840u / 780m
- 64 GB RAM
- ROCM 6.1.1
**Info**
Kernel 6.10.3 supports setting amdgpu.gttsize to values like 32768, which I think works around the issue many have with an AMD APU: they can only choose between a few values for GPU memory in their BIOS, mostly not more than 8 GB. Since the memory is always the same pool, it would be nice if Ollama offloaded compute to the GPU in such cases as well.
**Example:**
Bios GPU memory allocation set to auto
amdgpu.gttsize=32768
--> ollama run gemma2:27b "Tell me a joke"
This runs at 100% on the CPU as the model is 16GB
On the other hand, running (./llama-bench) an even bigger model directly with llama.cpp on the GPU works:
> | model | size | params | backend | ngl | test | t/s |
> | --- | --- | --- | --- | --- | --- | --- |
> | llama 8x7B Q4_K - Medium | 48.25 GiB | 91.80 B | ROCm | 99 | pp512 | 81.32 ± 0.36 |
**Conclusion**
Please support unified memory. I know the speed will often not be great, but it beats pure CPU compute in both performance and thermal output. I do this as a hobby, so tinkering is part of the game. If I recall correctly, using all memory plus the GPU previously only worked with the llama.cpp Vulkan backend, but since llama.cpp now supports it for the GPU as well (at least that is my perception), it would be nice to have it in Ollama too. Please also correct me if I am wrong; maybe this is already possible, and then please point me in the right direction.
| feature request | low | Major |
2,466,242,306 | godot | 3D Omni Lights cut in half after exporting to Web | ### Tested versions
v4.2.2
### System information
Windows 10
### Issue description
The Omni Lights get cut in half and the directional light also acts weird. I'm assuming it's because the browser renderer is different.
How can I fix this?
### Steps to reproduce
In the editor, the lights work fine; export to web, and the lights are weird.
### Minimal reproduction project (MRP)

| bug,platform:web,topic:rendering,needs testing,topic:3d | medium | Major |
2,466,264,073 | pytorch | PT2 inference slowdown after updating Huggingface pin on CI | From https://github.com/pytorch/pytorch/pull/133065#issuecomment-2288701447 . Basically, there was a noticeable performance drop on the inference side after bumping up the HF pin, [dashboard](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Wed%2C%2007%20Aug%202024%2013%3A09%3A28%20GMT&stopTime=Wed%2C%2014%20Aug%202024%2013%3A09%3A28%20GMT&granularity=hour&suite=torchbench&mode=inference&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=gh/desertfire/449/head&lCommit=445863995081ff442d69a567431f056c2095f2aa&rBranch=main&rCommit=c8275e25a79903daac3aa5ed4ad8fd3132ca6adb)
cc @msaroufim @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames | module: performance,triaged,oncall: pt2,module: inductor | low | Major |
2,466,299,309 | pytorch | In-place operations should be deprecated and in the future, they should be removed from PyTorch. | ### 🚀 The feature, motivation and pitch
[In-place operations with autograd](https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd) says below:
> Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd’s aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them.
So, as the article says, in-place operations are rarely needed.
I don't think in-place operations are useful; they just make PyTorch complex.
If there were no in-place operations, the `inplace` argument could, for example, be removed from [ReLU()](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html), [LeakyReLU()](https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html), [ELU()](https://pytorch.org/docs/stable/generated/torch.nn.ELU.html), etc., for simplicity.
So, in-place operations should be deprecated and, in the future, removed from PyTorch.
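For illustration only, a plain-Python sketch (lists standing in for tensors; not real torch code) of the semantic difference the `inplace` flag controls: the out-of-place version returns a new object, while the in-place version mutates its argument, so every alias of it, including values autograd may have saved for backward, observes the change.

```python
def relu(xs):
    # Out-of-place: allocate and return a new list; xs is untouched.
    return [max(x, 0.0) for x in xs]

def relu_(xs):
    # In-place: overwrite xs; every alias of xs observes the change.
    for i, x in enumerate(xs):
        xs[i] = max(x, 0.0)
    return xs

a = [-1.0, 2.0]
saved = a                   # e.g. a value saved for the backward pass
b = relu(a)
assert a == [-1.0, 2.0]     # out-of-place left the input intact
relu_(a)
assert saved == [0.0, 2.0]  # in-place silently changed the saved alias
```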
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged,needs research | low | Minor |
2,466,314,420 | flutter | [interactive_media_ads] Support companion ad formats for audio ads | ### Use case
Publishers with linear audio ads will often have an accompanying companion display ad that is rendered during audio ad playback. Native IMA SDK supports companion ad implementations on Android & iOS.
### Proposal
Requesting that the interactive_media_ads package support companion ad formats in line with the native IMA SDK's implementation guides:
- Android: https://developers.google.com/interactive-media-ads/docs/sdks/android/client-side/companions
- iOS: https://developers.google.com/interactive-media-ads/docs/sdks/ios/client-side/companions | c: new feature,package,c: proposal,team-ecosystem,P2,triaged-ecosystem,p: interactive_media_ads | low | Minor |
2,466,331,246 | flutter | [interactive_media_ads] Support background audio ad playback | ### Use case
Publishers with linear audio ads will often continue audio ad playback when users background the app. Audio ad background is currently supported by native IMA SDK on Android and iOS (via a dedicated IMA SDK flag).
### Proposal
interactive_media_ads package to support background audio ad playback on both Android & iOS platforms. For iOS, IMASettings will need to expose the [enableBackgroundPlayback](https://developers.google.com/interactive-media-ads/docs/sdks/ios/client-side/reference/Classes/IMASettings#enablebackgroundplayback) flag.
iOS Guide: https://developers.google.com/interactive-media-ads/docs/sdks/ios/client-side/background_ad_playback
Android Guide: https://developers.google.com/interactive-media-ads/docs/sdks/android/client-side/background-ad-playback | c: new feature,package,c: proposal,team-ecosystem,P2,triaged-ecosystem,p: interactive_media_ads | low | Minor |
2,466,394,956 | PowerToys | Cannot install PT | ### Microsoft PowerToys version
UNKNOWN
### Installation method
Microsoft Store, WinGet
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
PT not listed as installed.
System: Win 10, Ver 22H2, OS 19045.4651
### ✔️ Expected Behavior
Tried to install via MS Store and via PowerShell using WinGet; both result in failure to install.
Log shows that it detects a previous version
### ❌ Actual Behavior
[1164:0FFC][2024-08-14T18:13:24]i101: Detected package: TerminatePowerToys, state: Absent, cached: None
[1164:0FFC][2024-08-14T18:13:24]i101: Detected package: WebView2, state: Present, cached: None
[1164:0FFC][2024-08-14T18:13:24]i101: Detected package: PowerToysUserSetup_0.83.0_x64.msi, state: Absent, cached: None
[1164:0FFC][2024-08-14T18:13:24]i052: Condition 'MinimumVersion >= DetectedPowerToysVersion' evaluates to false.
[1164:0FFC][2024-08-14T18:13:24]e000: PowerToys is already installed on this system for all users. We recommend first uninstalling that version before installing this one.
[1164:0FFC][2024-08-14T18:13:24]e000: Error 0x81f40001: Bundle condition evaluated to false: MinimumVersion >= DetectedPowerToysVersion
[1164:0FFC][2024-08-14T18:13:24]i199: Detect complete, result: 0x0
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Critical |
2,466,426,946 | react-native | Incorrect TextInput email field suggestion for "Hide my Email" (iOS) | ### Description
When entering text into an email TextField (`textContentType="emailAddress"`), the "Hide my Email" suggestion behaves incorrectly on a **plugged in iOS device** (using Apple ID with iCloud+ and thus "Hide my Email" feature):
"Hide my Email" is suggested twice. Selecting the first "Hide my email" incorrectly inserts the string "Hide my email" into the text field.
See attached screen recording.
### Steps to reproduce
1. Clone https://github.com/troyshu/expo-text-input-test
2. Plug in iOS device, enable USB debugging
3. `npm install` then `npm run ios:device` to run on device
4. Try to enter an email in the email field.
### React Native Version
0.76.1 (Expo 52)
Also occurred on React Native 0.74.5 and Expo 51: https://github.com/troyshu/expo-text-input-test/tree/979d9fc382353f4aa117cad625d00e7e1f05f3a9
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (12) arm64 Apple M2 Max
Memory: 1.28 GB / 64.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.6.0
path: /opt/homebrew/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.2
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.08.12.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "28"
- "29"
- "30"
- "31"
- "33"
- "34"
Build Tools:
- 29.0.2
- 30.0.2
- 30.0.3
- 33.0.0
- 33.0.1
- 34.0.0
System Images:
- android-30 | Google APIs Intel x86 Atom
- android-30 | Google Play Intel x86 Atom
- android-30 | Google Play Intel x86 Atom_64
- android-31 | Google APIs Intel x86 Atom_64
- android-34 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2023.1 AI-231.9392.1.2311.11330709
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.4.1
path: /usr/bin/javac
Ruby:
version: 3.2.2
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: latest
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
NA
### Reproducer
https://github.com/troyshu/expo-text-input-test
### Screenshots and Videos
https://github.com/user-attachments/assets/675b33ec-42b5-4636-9d7e-7ec2b5ae48f8
| Issue: Author Provided Repro,Component: TextInput | medium | Critical |
2,466,448,696 | terminal | Cannot create a binding to output Esc only | ### Windows Terminal version
_No response_
### Windows build number
_No response_
### Other Software
_No response_
### Steps to reproduce
Try to create a new binding to output Esc character (\x1B)
### Expected Behavior
Pressing the binding should send Esc character to the app
### Actual Behavior
Nothing gets sent, which in WSL can be confirmed by running the `cat` program.
Interestingly, it is possible to create a binding that sends Escape Sequences, e.g. `Alt-q` (\x1Bq), but not the Esc itself. | Product-Conpty,Area-VT,Issue-Bug | low | Minor |
2,466,483,881 | tauri | [bug] image shows laggy | ### Describe the bug
Rust backend: produces PNG (base64-encoded) strings continuously and quickly in another thread (capturing the screen), and uses `AppHandle.emit` to send them to the frontend.
Vue 3 + JavaScript frontend: listens for the event to update ref data and shows it using an img tag:
```html
<img :src="data.png_base64" />
```
Result: the image updates slowly and looks laggy.
### Reproduction
_No response_
### Expected behavior
The image should update as fast as frames are produced in the backend.
### Full `tauri info` output
```text
> wise-key@0.0.0 tauri
> tauri info
[✔] Environment
- OS: Mac OS 14.5.0 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.81.0-nightly (fcaa6fdfb 2024-07-13)
✔ cargo: 1.81.0-nightly (154fdac39 2024-07-07)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: nightly-aarch64-apple-darwin (default)
- node: 22.5.1
- pnpm: 9.6.0
- npm: 10.8.2
[-] Packages
- tauri [RUST]: 2.0.0-rc.2
- tauri-build [RUST]: 2.0.0-rc.2
- wry [RUST]: 0.41.0
- tao [RUST]: 0.28.1
- @tauri-apps/api [NPM]: 2.0.0-rc.0
- @tauri-apps/cli [NPM]: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,466,504,598 | vscode | New lines after code blocks aren't showing as whitespace in test assertion decoration | I expect a space before `msg`

| bug,testing | low | Minor |
2,466,550,058 | PowerToys | Add superscript plus and minus signs (Unicode 207A and 207B) to Quick Accent | ### Description of the new feature / enhancement
Add superscript plus and minus signs (Unicode 207A and 207B) to Quick Accent
### Scenario when this would be used?
These characters are used to indicate the charge of a chemical ion—e.g., Cl⁻ or Na⁺
Using these characters is preferable to taking a regular plus or minus sign and manually superscripting it, which does not look right.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Quick Accent | low | Minor |
2,466,570,252 | langchain | Cannot load Vietnamese UTF-8 CSV with UnstructuredFileLoader | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
doc = UnstructuredFileLoader(FILEPATH,encoding="utf-8",language='vi')
```
Test file
[2024_2_Công ty cổ phần HHP GLOBAL_BCDKT.csv](https://github.com/user-attachments/files/16617467/2024_2_Cong.ty.c.ph.n.HHP.GLOBAL_BCDKT.csv)
### Error Message and Stack Trace (if applicable)
```
UnicodeDecodeError Traceback (most recent call last)
Cell In[35], line 1
----> 1 custom_loader(filepath)
Cell In[34], line 12, in custom_loader(path)
10 LCTT_TT_2 = x['detailed_report_data']['LCTT-TT_2']
11 doc = UnstructuredFileLoader(BCDKT,encoding="utf-8",language='vi')
---> 12 d = doc.load()
13 print(d)
File I:\env\langchain\Lib\site-packages\langchain_core\document_loaders\base.py:30, in BaseLoader.load(self)
28 def load(self) -> List[Document]:
29 """Load data into Document objects."""
---> 30 return list(self.lazy_load())
File I:\env\langchain\Lib\site-packages\langchain_community\document_loaders\unstructured.py:89, in UnstructuredBaseLoader.lazy_load(self)
87 def lazy_load(self) -> Iterator[Document]:
88 """Load file."""
---> 89 elements = self._get_elements()
90 self._post_process_elements(elements)
91 if self.mode == "elements":
File I:\env\langchain\Lib\site-packages\langchain_community\document_loaders\unstructured.py:181, in UnstructuredFileLoader._get_elements(self)
179 if isinstance(self.file_path, Path):
180 self.file_path = str(self.file_path)
--> 181 return partition(filename=self.file_path, **self.unstructured_kwargs)
File I:\env\langchain\Lib\site-packages\unstructured\partition\auto.py:529, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, languages, detect_language_per_element, pdf_infer_table_structure, extract_images_in_pdf, extract_image_block_types, extract_image_block_output_dir, extract_image_block_to_payload, xml_keep_tags, data_source_metadata, metadata_filename, request_timeout, hi_res_model_name, model_name, date_from_file_object, starting_page_number, **kwargs)
527 elif filetype == FileType.CSV:
528 _partition_csv = _get_partition_with_extras("csv")
--> 529 elements = _partition_csv(
530 filename=filename,
531 file=file,
532 infer_table_structure=infer_table_structure,
533 languages=languages,
534 detect_language_per_element=detect_language_per_element,
535 **kwargs,
536 )
537 elif filetype == FileType.TSV:
538 _partition_tsv = _get_partition_with_extras("tsv")
File I:\env\langchain\Lib\site-packages\unstructured\documents\elements.py:593, in process_metadata.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
591 @functools.wraps(func)
592 def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> list[Element]:
--> 593 elements = func(*args, **kwargs)
594 call_args = get_call_args_applying_defaults(func, *args, **kwargs)
596 regex_metadata: dict["str", "str"] = call_args.get("regex_metadata", {})
File I:\env\langchain\Lib\site-packages\unstructured\file_utils\filetype.py:626, in add_filetype.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
624 @functools.wraps(func)
625 def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> List[Element]:
--> 626 elements = func(*args, **kwargs)
627 params = get_call_args_applying_defaults(func, *args, **kwargs)
628 include_metadata = params.get("include_metadata", True)
File I:\env\langchain\Lib\site-packages\unstructured\file_utils\filetype.py:582, in add_metadata.<locals>.wrapper(*args, **kwargs)
580 @functools.wraps(func)
581 def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> List[Element]:
--> 582 elements = func(*args, **kwargs)
583 call_args = get_call_args_applying_defaults(func, *args, **kwargs)
584 include_metadata = call_args.get("include_metadata", True)
File I:\env\langchain\Lib\site-packages\unstructured\chunking\dispatch.py:74, in add_chunking_strategy.<locals>.wrapper(*args, **kwargs)
71 """The decorated function is replaced with this one."""
73 # -- call the partitioning function to get the elements --
---> 74 elements = func(*args, **kwargs)
76 # -- look for a chunking-strategy argument --
77 call_args = get_call_args_applying_defaults(func, *args, **kwargs)
File I:\env\langchain\Lib\site-packages\unstructured\partition\csv.py:80, in partition_csv(filename, file, metadata_filename, metadata_last_modified, include_header, include_metadata, infer_table_structure, languages, date_from_file_object, **kwargs)
77 header = 0 if include_header else None
79 if filename:
---> 80 delimiter = get_delimiter(file_path=filename)
81 table = pd.read_csv(filename, header=header, sep=delimiter)
82 last_modification_date = get_last_modified_date(filename)
File I:\env\langchain\Lib\site-packages\unstructured\partition\csv.py:129, in get_delimiter(file_path, file)
127 elif file_path is not None:
128 with open(file_path) as f:
--> 129 data = "\n".join(f.readlines(num_bytes))
130 else:
131 raise ValueError("either `file_path` or `file` argument must be provided")
File I:\env\langchain\Lib\encodings\cp1252.py:23, in IncrementalDecoder.decode(self, input, final)
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 99: character maps to <undefined>
```
### Description
I am trying to use `UnstructuredFileLoader` to load a UTF-8 CSV file in Vietnamese, but it keeps hitting an encoding issue no matter what arguments I pass to it.
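The traceback suggests the decode happens outside LangChain: `unstructured`'s `get_delimiter` calls `open(file_path)` without an `encoding` argument (csv.py line 128 above), so on Windows the locale default codec (cp1252) is used, and byte `0x81` is undefined there. The failure mode can be shown with the stdlib alone:

```python
# Minimal reproduction of the failure mode, independent of langchain:
# UTF-8 bytes decoded with cp1252 fail on byte 0x81.
raw = "điều lệ".encode("utf-8")   # Vietnamese text as UTF-8 bytes
assert 0x81 in raw                 # the exact byte from the traceback

try:
    raw.decode("cp1252")           # what an encoding-less open() does on Windows
except UnicodeDecodeError as exc:
    print(exc.reason)              # -> 'character maps to <undefined>'

print(raw.decode("utf-8"))         # the correct codec round-trips fine
```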
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.26257
> Python Version: 3.12.5 | packaged by conda-forge | (main, Aug 8 2024, 18:24:51) [MSC v.1940 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.13
> langchain_community: 0.2.7
> langsmith: 0.1.99
> langchain_anthropic: 0.1.19
> langchain_cli: 0.0.25
> langchain_google_vertexai: 1.0.6
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langchain_unstructured: 0.1.1
> langgraph: 0.1.5
> langserve: 0.2.2
Other Dependencies
------------------
> aiohttp: 3.10.3
> anthropic: 0.31.2
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.110.3
> gitpython: 3.1.43
> google-cloud-aiplatform: 1.58.0
> google-cloud-storage: 2.17.0
> httpx: 0.27.0
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> libcst: 1.4.0
> numpy: 1.26.4
> openai: 1.35.10
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pyproject-toml: 0.0.10
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.32
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> tiktoken: 0.7.0
> tomlkit: 0.12.5
> typer[all]: Installed. No version info available.
> unstructured-client: 0.24.1
> unstructured[all-docs]: Installed. No version info available.
> uvicorn: 0.23.2
| Ɑ: doc loader,🤖:bug | low | Critical |
2,466,597,935 | rust | Misleading help suggests `Sync` bound when shareable reference is passed across or into await | #59245 results in an error that may be difficult to interpret when structuring generic async code:
```rust
use std::fmt::Display;

async fn run(mut state: impl Display) {
    do_stuff(&state).await;
    // ...
}

async fn do_stuff(state: &impl Display) {
    println!("{state}");
}

fn spawn_task<T>(state: T)
where
    T: Display + Send + 'static,
{
    tokio::spawn(run(state));
}
```
The compiler (as of 1.82.0-nightly (80eb5a8e9 2024-08-13)) produces this error output:
```
error[E0277]: `T` cannot be shared between threads safely
--> src/main.rs:16:18
|
16 | tokio::spawn(run(state));
| ------------ ^^^^^^^^^^ `T` cannot be shared between threads safely
| |
| required by a bound introduced by this call
|
= note: required for `&T` to implement `Send`
note: required because it's used within this `async` fn body
--> src/main.rs:8:41
|
8 | async fn do_stuff(state: &impl Display) {
| _________________________________________^
9 | | println!("{state}");
10 | | }
| |_^
note: required because it's used within this `async` fn body
--> src/main.rs:3:35
|
3 | async fn run(state: impl Display) {
| ___________________________________^
4 | | do_stuff(&state).await;
5 | | // ...
6 | | }
| |_^
note: required by a bound in `tokio::spawn`
--> /home/mzabaluev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.39.2/src/task/spawn.rs:167:21
|
165 | pub fn spawn<F>(future: F) -> JoinHandle<F::Output>
| ----- required by a bound in this function
166 | where
167 | F: Future + Send + 'static,
| ^^^^ required by this bound in `spawn`
help: consider further restricting this bound
|
14 | T: Display + Send + 'static + std::marker::Sync,
| +++++++++++++++++++
```
A non-restrictive, but also unintuitive, solution is to make the reference passed to `do_stuff` mutable (i.e. provably exclusive), even though mutability is not required by the function body.
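The workaround works because of the auto-trait rules for references: `&T` is `Send` only if `T: Sync`, while `&mut T` is `Send` whenever `T: Send`. A minimal std-only sketch, with the hypothetical `require_send` standing in for the `F: Send` bound on `tokio::spawn`:

```rust
use std::cell::Cell;

// Cell<i32> is Send but not Sync, so NotSync models a `T: Send + !Sync`.
struct NotSync(Cell<i32>);

// Hypothetical stand-in for the `F: Send` bound on `tokio::spawn`.
fn require_send<T: Send>(val: T) -> T {
    val
}

fn demo() -> i32 {
    let mut v = NotSync(Cell::new(1));
    // require_send(&v); // rejected: `&NotSync: Send` needs `NotSync: Sync`
    let r = require_send(&mut v); // accepted: `&mut T: Send` needs only `T: Send`
    r.0.get()
}
```

In the original repro, passing `&mut state` into `do_stuff` therefore lets `T: Display + Send + 'static` suffice without adding `Sync`.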
# Desired outcome
The help heuristic should detect that the `Sync` bound arises due to a shareable reference becoming a member of an async closure for which `Send` is required, and suggest using an exclusive reference as an alternative to restricting the bound.
_Originally posted by @mzabaluev in https://github.com/rust-lang/rust/issues/59245#issuecomment-2289520996_
| A-diagnostics,T-compiler,A-async-await,AsyncAwait-Triaged,D-confusing | low | Critical |
2,466,630,261 | pytorch | vmap + autograd.Function [generate_vmap_rule=False] + torch.compile don't work together | Related: https://github.com/pytorch/pytorch/issues/129845
cc @ezyang @chauhang @penguinwu @Chillee @samdow @kshitij12345 @janeyx99 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: functorch,module: dynamo,dynamo-ctx-manager,dynamo-functorch | low | Minor |
2,466,692,487 | godot | Godot crashes after `Index p_index = 0 is out of bounds (size() = 0)` error. | ### Tested versions
- Reproducible in: Godot v4.3.rc3.official.03afb92ef
- Not reproducible in: Godot v4.3-beta1-3, Godot v4.2
### System information
Godot v4.3.rc3 - Arch Linux #1 SMP PREEMPT_DYNAMIC Sun, 11 Aug 2024 16:19:06 +0000 - Wayland - Vulkan (Forward+) - integrated Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz (4 Threads)
### Issue description
Godot will randomly print `Parameter "sd" is null` and `Index p_index = 0 is out of bounds (size() = 0)` errors and then crash; so far I have no clue how to trigger this error.
Error:
```
ERROR: Parameter "sd" is null.
at: _shaped_text_is_ready (modules/text_server_adv/text_server_adv.cpp:6425)
ERROR: Parameter "sd" is null.
at: _shaped_text_set_custom_ellipsis (modules/text_server_adv/text_server_adv.cpp:4153)
ERROR: ShapedTextDataAdvanced invalid.
at: _shaped_text_overrun_trim_to_width (modules/text_server_adv/text_server_adv.cpp:5086)
ERROR: Parameter "sd" is null.
at: _shaped_text_get_size (modules/text_server_adv/text_server_adv.cpp:6535)
ERROR: FATAL: Index p_index = 0 is out of bounds (size() = 0).
at: get (./core/templates/cowdata.h:205)
================================================================
handle_crash: Program crashed with signal 4
Engine version: Godot Engine v4.3.rc3.official (03afb92efa18874da19f7fc185a32c005d20aa1d)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /usr/lib/libc.so.6(+0x3d1d0) [0x7bbb46a851d0] (??:0)
[2] /home/esh/.local/bin/godot_v4.3-rc3() [0x26814ee] (??:0)
[3] /home/esh/.local/bin/godot_v4.3-rc3() [0x24fc234] (??:0)
[4] /home/esh/.local/bin/godot_v4.3-rc3() [0x26f3478] (??:0)
[5] /home/esh/.local/bin/godot_v4.3-rc3() [0x48001ed] (??:0)
[6] /home/esh/.local/bin/godot_v4.3-rc3() [0x43147a4] (??:0)
[7] /home/esh/.local/bin/godot_v4.3-rc3() [0x23b6ac7] (??:0)
[8] /home/esh/.local/bin/godot_v4.3-rc3() [0x527a84] (??:0)
[9] /home/esh/.local/bin/godot_v4.3-rc3() [0x4202b2] (??:0)
[10] /usr/lib/libc.so.6(+0x25e08) [0x7bbb46a6de08] (??:0)
[11] /usr/lib/libc.so.6(__libc_start_main+0x8c) [0x7bbb46a6decc] (??:0)
[12] /home/esh/.local/bin/godot_v4.3-rc3() [0x43d45a] (??:0)
-- END OF BACKTRACE --
================================================================
```
### Steps to reproduce
As of now this error seems to happen randomly when using godot.
### Minimal reproduction project (MRP)
N/A | bug,needs testing,crash,regression | low | Critical |
2,466,705,904 | flutter | Create an iOS native looking version of the Counter app template | See https://gist.github.com/mit-mit/bacdf0db8a30d1794fed4b98887bbcfb for recommended appearance.
There should be a version of the counter app that reflects the Cupertino design language instead of Material.
Discoverability will be the challenge here. Some ideas:
- Add a template to the `flutter create` command, e.g. `flutter create -t ios_counter my_app`
- Check the development platform
- generate the iOS counter template automatically via `flutter create` if the development machine is a mac
- Check the connected devices
- generate the iOS counter template automatically via `flutter create` if a connected device is an iOS device | a: fidelity,f: cupertino,P2,team-design,triaged-design | low | Minor |
2,466,758,168 | flutter | [flutter-web] Migrate internal users of html to canvaskit (tracking) | null | platform-web,P1,team-web,triaged-web | medium | Minor |
2,466,795,691 | rust | Tracking Issue for struct_target_features (RFC 3525) |
This is a tracking issue for the RFC "struct target features" (rust-lang/rfcs#3525).
The feature gate for the issue is `#![feature(struct_target_features)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [ ] Implement the RFC
- https://github.com/rust-lang/rust/pull/127537
- [ ] Add support for feature-carrying structs as generic parameters
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
- [ ] Should functions with witness types as arguments implement Fn traits? | T-lang,C-tracking-issue,B-experimental,F-struct_target_features | low | Critical |
2,466,796,557 | godot | Godot crashes when @tool is applied to a script that references a static var dict from another script | ### Tested versions
- Current version: steam / v4.2.2.stable.official [15073afe3]
### System information
Windows 11
### Issue description
Godot completely crashes, without error when I apply `@tool` to my script.
I can still run the project successfully by not loading the tool in the editor or by running from cmdline.
### Steps to reproduce
After a lot of testing, I have pinpointed the issue to this:
In the tool script, in `_ready` and `_process`:
```gdscript
unlocked = Economy.research[type]["unlocked"]
```
where `Economy` is:
```gdscript
class_name Economy
extends Node2D
static var money := 0.0
static var research := {
	NodeHandler.NodeType.SHOP: {"unlocked": true, "upgrade": 0, "max_buy": 0},
	NodeHandler.NodeType.MINE: {"unlocked": true, "upgrade": 0, "max_buy": 0},
	NodeHandler.NodeType.PROCESSOR: {"unlocked": true, "upgrade": 0, "max_buy": 0},
	NodeHandler.NodeType.REFINERY: {"unlocked": true, "upgrade": 0, "max_buy": 0},
	NodeHandler.NodeType.TETHER: {"unlocked": false, "upgrade": 0, "max_buy": 0},
	NodeHandler.NodeType.DUPLICATOR: {"unlocked": false, "upgrade": 0, "max_buy": 0},
}
```
### Minimal reproduction project (MRP)
I reduced my project to the bare minimum to reproduce this error on the min-bug branch:
https://github.com/rafalou38/IdleGame/tree/min-bug
The tool script is at res://views/shop/shop_item.gd
And the crash can be avoided by commenting out either of these lines:
```gdscript
@tool
# or
unlocked = Economy.research[type]["unlocked"]
``` | bug,topic:gdscript,topic:editor,crash | low | Critical |
2,466,803,049 | godot | AudioStreamPlayer2D starts playback with a noticeable delay | ### Tested versions
Reproducible in: v4.2.2.stable.mono.official [15073afe3]
### System information
Godot v4.2.2.stable.mono - macOS 14.3.0 - Vulkan (Forward+) - integrated Apple M3 Max - Apple M3 Max (14 Threads)
### Issue description
When I use `AudioStreamPlayer2D` to play a sound, the sound can be heard around 110ms after the keypress. This is a noticeable delay, and a player action such as firing a gun or pressing a button doesn't quite feel "immediate".
The sound sample I'm using does not have any silence at the beginning.
To measure the delay, I recorded an audio track with Audacity. I can clearly hear both the click of my keyboard and the sound being played, and also see the waveforms of both events in the audio track, so the delay is easy to calculate in Audacity's UI.
I measured this delay on three systems:
- MacBook Pro / M3 Max, macOS 14, Studio Display speakers
- Mac mini / M2 Pro, macOS 14, built-in speakers
- A recent gaming PC / Windows 11, wired headphones, onboard sound card
I measured a delay of around 110ms on both macs, and a delay of 160ms on the PC.
Is this something that can be improved, either in the engine, or in my game? Is the delay caused by my systems? Or is such a delay expected in Godot?
To get a few reference points, I did some research; "input lag" commonly refers to the latency between a physical input and the display reacting to that input. 60ms is generally considered pretty good, but this includes the response time of the display hardware itself.
In my small Godot test project, I not only play a sound but also toggle the visibility of a sprite on screen. I recorded a video at 60 fps showing how I press the button on a keyboard, and the sprite appears on the screen after 4 frames on each of the three systems, that is within 67ms.
I also wrote a small command line tool that uses Apple's AVFoundation framework to play the same sound file when I press a key. The delay in that tool is around 70ms. This feels more "immediate" than a delay of 110ms or even 160ms.
I don't know much about the signal chain and at which points in the chain various delays occur and their typical magnitudes, so I'm wondering if it's reasonable or not to expect faster response times. My naive expectation would be:
- Around 30ms latency from the keyboard
- Up to 16.6ms latency for the next physics frame update in Godot
- Up to 15ms latency for Godot's audio output latency (default value)
So around 60ms in total.
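Spelled out with the report's own estimates (no new measurements):

```python
# Naive end-to-end latency budget, using the estimates listed above.
keyboard_ms = 30.0                # typical keyboard scan/USB latency
physics_frame_ms = 1000.0 / 60.0  # worst case: wait one 60 Hz physics tick
audio_output_ms = 15.0            # Godot's default audio output latency

total_ms = keyboard_ms + physics_frame_ms + audio_output_ms
print(round(total_ms, 1))  # -> 61.7, i.e. "around 60ms in total"
```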
As per danluu's keyboard latency test (https://danluu.com/keyboard-latency/), Apple's keyboards are pretty fast. I'm using an Apple USB keyboard on my gaming PC and the builtin keyboard on my MacBook, so that should probably not be an issue.
### Steps to reproduce
This is how I play the sound in GDScript:
```gdscript
extends Node2D

var sound = preload("res://sound.wav")
var player

@export var sprite: Sprite2D

func _ready():
	player = AudioStreamPlayer2D.new()
	player.stream = sound
	add_child(player)

func _physics_process(delta):
	# Assigned to 'M' key
	if Input.is_action_just_pressed("tap"):
		player.play()
		sprite.visible = !sprite.visible
```
See also the attached example project.
### Minimal reproduction project (MRP)
[InputLatencyTest.zip](https://github.com/user-attachments/files/16618761/InputLatencyTest.zip)
| discussion,topic:audio,performance | low | Major |
2,466,849,548 | material-ui | FocusTrap should not scroll the page | ### Steps to reproduce
Link to live example: https://stackblitz.com/edit/react-3ufo5w-nfzjjw?file=Demo.tsx,index.tsx
Steps:
1. Click on the input element.
2. Press Tab to change the focus to the button (which is offscreen / outside the scroll area.)
3. Scroll back up, so the button is offscreen but still focused.
4. Press Space to trigger the button's click event.
### Current behavior
The scroll area is scrolled to the button. Depending on browser and screen positioning, scrolling may occur both when the menu is shown and when it's closed.
### Expected behavior
A popup or modal that temporarily traps the focus should be self-contained; it should not cause the containing page to scroll.
### Context
I encountered this within a [tree view](https://mui.com/x/api/tree-view/simple-tree-view/)'s context menus. Focus for SimpleTreeView is (in my opinion) odd: a TreeItem's area within the DOM consists of that item plus all its descendants, so focusing the tree item causes the tree to jump around as the browser tries to scroll a massive item into view.
The FocusTrap behavior makes this more obvious / encountered more often, since popping up a context menu for the tree item engages the focus trap.
See also #36508. If I understand correctly, that refers to behavior while the FocusTrap is active, while this refers to FocusTrap's restoration of focus on exit.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.6.1
Binaries:
Node: 18.20.2 - ~/.nvm/versions/node/v18.20.2/bin/node
npm: 10.5.0 - ~/.nvm/versions/node/v18.20.2/bin/npm
pnpm: Not Found
Browsers:
Chrome: 127.0.6533.100
Edge: Not Found
Safari: 17.6
npmPackages:
@emotion/react: 11.13.0
@emotion/styled: 11.13.0
@mui/base: 5.0.0-beta.41
@mui/core-downloads-tracker: 5.16.7
@mui/icons-material: 5.16.7
@mui/lab: 5.0.0-alpha.173
@mui/material: 5.16.7
@mui/private-theming: 5.16.6
@mui/styled-engine: 5.16.6
@mui/system: 5.16.7
@mui/types: 7.2.15
@mui/utils: 5.16.6
@mui/x-date-pickers: 7.12.1
@mui/x-internals: 7.12.0
@mui/x-tree-view: 7.12.1
@types/react: ^18.3.3 => 18.3.3
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.5.4 => 5.5.4
```
</details>
**Search keywords**: FocusTrap | bug 🐛,package: base-ui | low | Minor |
2,466,849,744 | godot | POINT_COORD shader builtin is broken on Windows | ### Tested versions
- Reproducible in 4.2.2.stable and 4.3-dev [8e666ad]
### System information
Godot v4.2.2.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated Intel(R) Arc(TM) A770 Graphics (Intel Corporation; 32.0.101.5768) - Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (16 Threads)
### Issue description
A polygon with a 1x1 white pixel texture (or any other texture) and a simple shader using `POINT_COORD` like the following behaves strangely on Windows while working as expected on Linux.
```glsl
shader_type canvas_item;
void fragment() {
COLOR.rgb = vec3(POINT_COORD.x);
}
```
It does not seem to be related to the Intel Arc GPU as the same happens on a different system with an RTX 4070 Ti as well.
#### Linux:
[Screencast from 2024-08-14 22-57-31.webm](https://github.com/user-attachments/assets/827998f2-9022-460f-94ae-c20408a318e7)
#### Windows:
https://github.com/user-attachments/assets/8a839fb2-8053-4aa4-b818-f181946b8a17
### Steps to reproduce
1. Add a Polygon2D (Sprite2D reproduces the bug as well)
2. Add a texture to it
3. Add a ShaderMaterial to it
4. Write a shader using the `POINT_COORD` builtin
### Minimal reproduction project (MRP)
[point-coord-bug-repro.zip](https://github.com/user-attachments/files/16618885/point-coord-bug-repro.zip)
| documentation,topic:shaders | low | Critical |
2,466,880,901 | godot | Setting button to pass mouse events makes clicking difficult on Android | ### Tested versions
- Reproducible in: v4.3.beta3.official [82cedc83c]
- Not reproducible in: v4.2.2.stable.official [15073afe3]
### System information
Android, Samsung Galaxy S10e
### Issue description
In order to make menus touch scrollable in Android, buttons must be set to pass mouse events. In Godot 4.2, the dead zone before the button passed motion events was big enough to allow consistent pressing of the button. In Godot 4.3, this dead zone seems to be practically 0, making clicking the button all but impossible.
### Steps to reproduce
Create a simple UI with a ScrollContainer, VBoxContainer and enough Buttons to get the ScrollContainer to scroll.
Set the buttons to pass mouse events (in order to make the ScrollContainer touch scrollable).
Run the project on Android.
While scrolling works, clicking the buttons is practically impossible.
### Minimal reproduction project (MRP)
The top three buttons are set to pass mouse events and work to scroll, but are difficult to click.
The bottom three buttons are set to stop mouse events and cannot be used to scroll, but can be easily clicked.
In Godot 4.2, both scrolling and clicking are easily possible on the top three buttons.
[exittest.zip](https://github.com/user-attachments/files/16619232/exittest.zip)
| bug,platform:android,topic:input,regression | low | Major |
2,466,891,002 | pytorch | Do dynamo/inductor support custom classes like TorchScript? | This is a question I got from @bnellnm on the vllm-torch Slack channel:
We have a `CustomClassHolder` class `ScalarType` defined [here](https://github.com/vllm-project/vllm/blob/main/csrc/core/scalar_type.hpp). It's used in a few kernels, in particular `gptq_marlin_gemm`. There are some `torch.library.opcheck` tests for this op, which pass, but we are seeing graph breaks when the op is used.
```
torch._dynamo.exc.Unsupported: call_function args: TensorVariable() TensorVariable() TensorVariable() TensorVariable() TensorVariable() TensorVariable() TensorVariable() UserDefinedObjectVariable(ScriptObject) ConstantVariable(int: 2048) ConstantVariable(int: 6144) ConstantVariable(int: 4096) ConstantVariable(bool: True) ConstantVariable(bool: False) ConstantVariable(bool: True)
E
E from user code:
E File "/home/lsakka/vllm/vllm/model_executor/models/llama.py", line 429, in forward
E model_output = self.model(input_ids, positions, kv_caches,
E File "/home/lsakka/vllm/vllm/model_executor/models/llama.py", line 329, in forward
E hidden_states, residual = layer(
E File "/home/lsakka/vllm/vllm/model_executor/models/llama.py", line 251, in forward
E hidden_states = self.self_attn(
E File "/home/lsakka/vllm/vllm/model_executor/models/llama.py", line 178, in forward
E qkv, _ = self.qkv_proj(hidden_states)
E File "/home/lsakka/vllm/vllm/model_executor/layers/linear.py", line 358, in forward
E output_parallel = self.quant_method.apply(self, input_, bias)
E File "/home/lsakka/vllm/vllm/model_executor/layers/quantization/gptq_marlin.py", line 320, in apply
E return apply_gptq_marlin_linear(
E File "/home/lsakka/vllm/vllm/model_executor/layers/quantization/utils/marlin_utils.py", line 252, in apply_gptq_marlin_linear
E output = ops.gptq_marlin_gemm(reshaped_x,
E File "/home/lsakka/vllm/vllm/_custom_ops.py", line 28, in wrapper
E return fn(*args, **kwargs)
E File "/home/lsakka/vllm/vllm/_custom_ops.py", line 462, in gptq_marlin_gemm
E return torch.ops._C.gptq_marlin_gemm(a, b_q_weight, b_scales, b_zeros,
E
E Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E
E
E You can suppress this exception and fall back to eager by setting:
E import torch._dynamo
E torch._dynamo.config.suppress_errors = True
../pytorch/torch/_dynamo/exc.py:288: Unsupported
```
vLLM repo:
export VLLM_TEST_DYNAMO_GRAPH_CAPTURE=1
export VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE=1
pytest -s tests/models/test_gptq_marlin.py::test_models[5-32-half-model0]
cc @svekars @brycebortree @ezyang @chauhang @penguinwu @rec @zou3519 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | module: docs,triaged,oncall: pt2,module: dynamo,vllm-compile | low | Critical |
2,466,898,185 | PowerToys | Default User Configuration for "Run as Different User" in PowerToys Run | ### Description of the new feature / enhancement
A default user option would be added to the settings section for PowerToys Run. This would allow the username field to be populated automatically when the "Run as Different User" button is clicked.
### Scenario when this would be used?
In workplaces where each user has a local admin account with a unique name, this feature would speed up the process of running applications with admin permissions. Instead of manually entering both the username and password, users would only need to input their password.
### Supporting information
_No response_ | Idea-Enhancement,Product-PowerToys Run,Needs-Triage | low | Minor |
2,466,902,828 | pytorch | functionalization doesn't faithfully reproduce strides of intermediate values | This can lead to silent incorrectness for operators that are stride-sensitive:
```py
import torch
@torch.compile(backend="aot_eager", fullgraph=True)
def f(x):
    x.add_(1)
    result = torch.as_strided_copy(x, (3,), (2,))
    return result
x = torch.arange(10)[::2]
y = f(x)
print(y)
```
this gives different results with or without torch.compile
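The divergence above can be modeled without PyTorch at all. The following pure-Python sketch is only an illustration of stride semantics (not PyTorch internals; the helper name is made up): an `as_strided`-style read over flat storage returns different elements depending on whether the input keeps its original stride-2 layout or has been replaced by a contiguous copy, which is what functionalization does to the intermediate.

```python
# Minimal pure-Python model of strided reads over a flat storage buffer.
# Illustrative sketch only: it shows why an as_strided-style read depends
# on the layout of the tensor it is applied to.

def as_strided_read(storage, offset, size, stride):
    """Read `size` elements from `storage` starting at `offset`, stepping by `stride`."""
    return [storage[offset + i * stride] for i in range(size)]

# Eager: x = arange(10)[::2] is a *view* with stride 2 over the base storage.
base = list(range(10))
view_indices = range(0, 10, 2)
for i in view_indices:           # x.add_(1) mutates the viewed elements in place
    base[i] += 1
eager = as_strided_read(base, 0, 3, 2)        # strides walk the original storage

# Functionalized: the view is replaced by a contiguous copy (stride 1),
# so the same (size=3, stride=2) read walks different elements.
contiguous = [base[i] for i in view_indices]
functionalized = as_strided_read(contiguous, 0, 3, 2)

print(eager, functionalized)   # the two layouts yield different results
```

In this model the two reads disagree, which is the same failure mode the repro exercises: the strides of the intermediate were not faithfully reproduced.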
cc @ezyang @chauhang @penguinwu @bdhirsh | triaged,module: functionalization,oncall: pt2,module: pt2-dispatcher | low | Minor |
2,466,904,462 | Python | Want to add sliding window algorithm | ### Feature description
Want to add the sliding window algorithm to the DSA subfolder, under the arrays algorithms. | enhancement | low | Major |
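For context, one common fixed-size variant of the sliding window technique looks like the sketch below (the function name and signature are only suggestions, not the repository's API):

```python
def max_sum_subarray(arr: list[int], k: int) -> int:
    """Return the maximum sum of any contiguous subarray of length k,
    computed in O(n) by sliding a fixed-size window across the array."""
    if k <= 0 or k > len(arr):
        raise ValueError("k must be in 1..len(arr)")
    window = sum(arr[:k])               # sum of the first window
    best = window
    for i in range(k, len(arr)):
        window += arr[i] - arr[i - k]   # slide: add new element, drop the oldest
        best = max(best, window)
    return best

print(max_sum_subarray([1, 4, 2, 10, 2, 3, 1, 0, 20], 4))  # → 24
```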
2,466,910,338 | flutter | [google_map_flutter] IOS - Implementing custom movement and zooming unusable movement gesture | ### Steps to reproduce
Wrap the GoogleMap widget with a GestureDetector and implement custom onScaleUpdate and onScaleEnd methods to handle movements. The script works fine on Android devices and iOS simulators, but it fails on iOS devices (iPhone 12 and 15 with the latest iOS). The custom implementation is needed because the client wanted an overlay pin that is always centered on the map; with the default behavior, the zoom gesture shifts the center as the user zooms in or out.
### Expected results
The scroll movement should be close to the one offered by the package
### Actual results
On iOS the camera movement does not function as expected
### Code sample
<details open><summary>Code sample</summary>
```dart
class LocationSelectionPage extends StatefulWidget {
const LocationSelectionPage({super.key,});
@override
State<LocationSelectionPage> createState() => _LocationSelectionPageState();
}
class _LocationSelectionPageState extends State<LocationSelectionPage> {
final ValueNotifier<LatLng?> _selectedLatLng = ValueNotifier(null);
final ValueNotifier<bool> _isScrolling = ValueNotifier(false);
final ValueNotifier<bool> _isZooming = ValueNotifier(false);
final LatLng _defaultLatLng = const LatLng(33.2712896, 35.1964972);
GoogleMapController? _mapController;
late CameraPosition _initialCameraPosition;
double _currentZoomLevel = 15.0;
double _lastReportedScale = 1.0;
@override
void initState() {
super.initState();
_initialCameraPosition = CameraPosition(
target: _selectedLatLng.value != null
? _selectedLatLng.value!
: _defaultLatLng,
zoom: _currentZoomLevel,
);
}
@override
Widget build(BuildContext context) {
return GestureDetector(
onScaleUpdate: _onScaleUpdate,
onScaleEnd: _onScrollEnd,
child: GoogleMap(
initialCameraPosition: _initialCameraPosition,
// onTap: _onMapTap,
// myLocationEnabled: true,
myLocationButtonEnabled: false,
onMapCreated: (GoogleMapController controller) {
_mapController = controller;
},
zoomControlsEnabled: false,
scrollGesturesEnabled: false,
zoomGesturesEnabled: false,
),
);
}
void _onScaleUpdate(ScaleUpdateDetails details) {
if (details.pointerCount == 2 && !_isZooming.value) {
_isZooming.value = true; // Start zooming when two pointers are detected
_lastReportedScale = details.scale; // Initialize the last reported scale
}
if (_isZooming.value) {
// Calculate the zoom change based on the difference from the initial scale
double zoomChange = (details.scale / _lastReportedScale - 1) *
10; // Adjust sensitivity here
// Apply a controlled zoom change
double newZoomLevel = _currentZoomLevel +
zoomChange * 0.1; // Multiply by 0.1 to ensure small steps
if (newZoomLevel >= 0) {
_mapController?.moveCamera(CameraUpdate.zoomTo(newZoomLevel));
_currentZoomLevel = newZoomLevel;
}
// Update the scale only after applying changes
_lastReportedScale = details.scale;
} else if (details.scale == 1.0 && _isZooming.value) {
// End the zoom operation
_isZooming.value = false;
_lastReportedScale = 1.0; // Reset scale to neutral when zoom ends
}
// Handle map dragging if not zooming
if (!_isZooming.value) {
_isScrolling.value = true;
if (Platform.isIOS) {
_mapController?.moveCamera(
CameraUpdate.scrollBy(
-details.focalPointDelta.dx,
-details.focalPointDelta.dy,
),
);
} else {
_mapController?.moveCamera(
CameraUpdate.scrollBy(
-details.focalPointDelta.dx,
-details.focalPointDelta.dy,
),
);
}
}
}
void _onScrollEnd(ScaleEndDetails details) {
// Reset scroll/zoom values
_lastReportedScale = 1.0;
_isScrolling.value = false;
_isZooming.value = false;
// Set the location to the center of the screen
_getScreenLatLng(
MediaQuery.of(context).size.center(Offset.zero),
)?.then((latLng) => _selectedLatLng.value = latLng);
}
}
class LocationCustomPin extends StatelessWidget {
const LocationCustomPin({
super.key,
this.pinColor = const Color(0xFFD11149),
});
final Color pinColor;
@override
Widget build(BuildContext context) {
return Column(
mainAxisSize: MainAxisSize.min,
children: [
Assets.images.svg.location.svg(
width: 56,
height: 56,
colorFilter: ColorFilter.mode(
pinColor,
BlendMode.srcIn,
),
),
Container(
height: 8,
width: 8,
decoration: BoxDecoration(
shape: BoxShape.circle,
color: pinColor,
),
),
],
);
}
}
```
</details>
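The incremental zoom update in `_onScaleUpdate` above amounts to `newZoom = zoom + (scale / lastReportedScale - 1) * 10 * 0.1`. A standalone numeric sketch of that arithmetic (constants copied from the Dart snippet; this only illustrates the math, not the plugin's behavior):

```python
# Pure-Python sketch of the incremental zoom update: each gesture callback
# reports a cumulative scale, and the delta relative to the last reported
# scale is converted into a small, damped zoom step.

def apply_pinch(zoom, last_scale, scale, sensitivity=10, damping=0.1):
    zoom_change = (scale / last_scale - 1) * sensitivity
    return zoom + zoom_change * damping, scale  # (new zoom, new last_scale)

zoom, last = 15.0, 1.0
for scale in (1.05, 1.10, 1.20):   # a pinch-out gesture reported in steps
    zoom, last = apply_pinch(zoom, last, scale)
print(round(zoom, 3))              # a small net zoom-in
```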
### Screenshots or Video
<details open>
<summary>The camera keeps returning to the middle</summary>
https://github.com/user-attachments/assets/59323ff2-db39-4912-b59b-a86010dbff43
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.5 23F79 darwin-arm64, locale en-LB)
• Flutter version 3.24.0 on channel stable at /Users/yazan/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (2 weeks ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc4)
• Android SDK at /Users/yazan/Library/Android/sdk
• Platform android-35, build-tools 35.0.0-rc4
• ANDROID_HOME = /Users/yazan/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.92.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (5 available)
• SM A736B (mobile) • *** • android-arm64 • Android 14 (API 34)
• iPhone 15 Pro Max (mobile) • *** • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.119
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-ios,p: maps,package,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24 | low | Critical |
2,466,914,654 | ollama | docker container can't detect Nvidia GPU - intermittent "cuda driver library failed to get device context 801" | ### What is the issue?
OS: Ubuntu 24.04 LTS
GPU: Nvidia Tesla P40 (24G)
I installed Ollama without Docker, and it was able to utilise my GPU without any issues.
I then deployed ollama using the following docker compose file:
```
ollama:
image: ollama/ollama:latest
container_name: ollama
restart: unless-stopped
environment:
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
- OLLAMA_KEEP_ALIVE=24h
- ENABLE_IMAGE_GENERATION=True
- COMFYUI_BASE_URL=http://stable-diffusion-webui:7860
networks:
- traefik
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- ./ollama:/root/.ollama
labels:
- "traefik.enable=true"
- "traefik.http.routers.ollama.rule=Host(`ollama.local.example.com`)"
- "traefik.http.routers.ollama.entrypoints=https"
- "traefik.http.routers.ollama.tls=true"
- "traefik.http.routers.ollama.tls.certresolver=cloudflare"
- "traefik.http.routers.ollama.middlewares=default-headers@file"
- "traefik.http.routers.ollama.middlewares=ollama-auth"
- "traefik.http.services.ollama.loadbalancer.server.port=11434"
- "traefik.http.routers.ollama.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${OLLAMA_API_CREDENTIALS}"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
```
When I exec into the container and run `nvidia-smi`, it executes successfully from within the Ollama docker container.
But the logs show that it can't detect my GPU:
```
2024/08/14 22:50:17 routes.go:1123: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T22:50:18.674+01:00 level=INFO source=images.go:782 msg="total blobs: 5"
time=2024-08-14T22:50:18.675+01:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T22:50:18.677+01:00 level=INFO source=routes.go:1170 msg="Listening on [::]:11434 (version 0.3.5)"
time=2024-08-14T22:50:18.678+01:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2940291930/runners
time=2024-08-14T22:50:30.626+01:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-14T22:50:30.626+01:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T22:50:30.640+01:00 level=INFO source=gpu.go:260 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
time=2024-08-14T22:50:30.640+01:00 level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T22:50:30.640+01:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="47.1 GiB" available="43.9 GiB"
2024/08/14 22:54:19 routes.go:1123: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T22:54:19.967+01:00 level=INFO source=images.go:782 msg="total blobs: 5"
time=2024-08-14T22:54:20.012+01:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T22:54:20.013+01:00 level=INFO source=routes.go:1170 msg="Listening on [::]:11434 (version 0.3.5)"
time=2024-08-14T22:54:20.032+01:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1278819119/runners
```
not sure why??
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.5 | bug,nvidia,needs more info,docker | medium | Critical |
2,466,916,769 | rust | ICE: `write_immediate_to_mplace: invalid Scalar layout: TyAndLayout` | <!--
[31mICE[0m: Rustc ./a.rs '-Zmir-opt-level=5 -Zvalidate-mir -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: /rustc/fbce03b195c02e425fbb12276b8f02349048a75f/compiler/rustc_const_eval/src/interpret/place.rs:693:21: write_immediate_to_mplace: invalid Scalar layout: TyAndLayout {', 'error: internal compiler error: /rustc/fbce03b195c02e425fbb12276b8f02349048a75f/compiler/rustc_const_eval/src/interpret/place.rs:693:21: write_immediate_to_mplace: invalid Scalar layout: TyAndLayout {'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
extern "C" {
pub static mut symbol: [i8];
}
fn main() {
println!("C", unsafe { &symbol });
}
````
original:
````rust
extern "C" {
pub static mut symbol: [i8];
//~^ WARN creating a shared reference to mutable static is discouraged [static_mut_refs]
}
fn main() {
println!("C", unsafe { &symbol });
//~^ WARN creating a shared reference to mutable static is discouraged [static_mut_refs]
}
````
Version information
````
rustc 1.82.0-nightly (fbce03b19 2024-08-14)
binary: rustc
commit-hash: fbce03b195c02e425fbb12276b8f02349048a75f
commit-date: 2024-08-14
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir`
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
error: argument never used
--> /tmp/icemaker_global_tempdir.gKK7s1KquQAJ/rustc_testrunner_tmpdir_reporting.JRe5Vq1I3rjY/mvce.rs:7:19
|
7 | println!("C", unsafe { &symbol });
| --- ^^^^^^^^^^^^^^^^^^ argument never used
| |
| formatting specifier missing
error[E0277]: the size for values of type `[i8]` cannot be known at compilation time
--> /tmp/icemaker_global_tempdir.gKK7s1KquQAJ/rustc_testrunner_tmpdir_reporting.JRe5Vq1I3rjY/mvce.rs:2:28
|
2 | pub static mut symbol: [i8];
| ^^^^ doesn't have a size known at compile-time
|
= help: the trait `Sized` is not implemented for `[i8]`
warning: creating a shared reference to mutable static is discouraged
--> /tmp/icemaker_global_tempdir.gKK7s1KquQAJ/rustc_testrunner_tmpdir_reporting.JRe5Vq1I3rjY/mvce.rs:7:28
|
7 | println!("C", unsafe { &symbol });
| ^^^^^^^ shared reference to mutable static
|
= note: for more information, see issue #114447 <https://github.com/rust-lang/rust/issues/114447>
= note: this will be a hard error in the 2024 edition
= note: this shared reference has lifetime `'static`, but if the static ever gets mutated, or a mutable reference is created, then any further use of this shared reference is Undefined Behavior
= note: `#[warn(static_mut_refs)]` on by default
help: use `addr_of!` instead to create a raw pointer
|
7 | println!("C", unsafe { addr_of!(symbol) });
| ~~~~~~~~~ +
error: internal compiler error: /rustc/fbce03b195c02e425fbb12276b8f02349048a75f/compiler/rustc_const_eval/src/interpret/place.rs:693:21: write_immediate_to_mplace: invalid Scalar layout: TyAndLayout {
ty: &[i8],
layout: Layout {
size: Size(16 bytes),
align: AbiAndPrefAlign {
abi: Align(8 bytes),
pref: Align(8 bytes),
},
abi: ScalarPair(
Initialized {
value: Pointer(
AddressSpace(
0,
),
),
valid_range: 1..=18446744073709551615,
},
Initialized {
value: Int(
I64,
false,
),
valid_range: 0..=18446744073709551615,
},
),
fields: Arbitrary {
offsets: [
Size(0 bytes),
Size(8 bytes),
],
memory_index: [
0,
1,
],
},
largest_niche: Some(
Niche {
offset: Size(0 bytes),
value: Pointer(
AddressSpace(
0,
),
),
valid_range: 1..=18446744073709551615,
},
),
variants: Single {
index: 0,
},
max_repr_align: None,
unadjusted_abi_align: Align(8 bytes),
},
}
thread 'rustc' panicked at /rustc/fbce03b195c02e425fbb12276b8f02349048a75f/compiler/rustc_const_eval/src/interpret/place.rs:693:21:
Box<dyn Any>
stack backtrace:
0: 0x7dabe2fb666d - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h560b4d28c137b093
1: 0x7dabe3804f2f - core::fmt::write::h99766545c4efef9d
2: 0x7dabe47b0ed1 - std::io::Write::write_fmt::h8e3cbf6208612263
3: 0x7dabe2fb8d4b - std::panicking::default_hook::{{closure}}::h6bf6ccd32e88a7b7
4: 0x7dabe2fb89be - std::panicking::default_hook::ha56d0025017107a4
5: 0x7dabe2146299 - std[265a0665abe5e043]::panicking::update_hook::<alloc[f7eab8ff756c9dca]::boxed::Box<rustc_driver_impl[18c1de8e11281817]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x7dabe2fb9667 - std::panicking::rust_panic_with_hook::h53b891e816ad5807
7: 0x7dabe2180b41 - std[265a0665abe5e043]::panicking::begin_panic::<rustc_errors[99e98b0bbb24b16c]::ExplicitBug>::{closure#0}
8: 0x7dabe2173d26 - std[265a0665abe5e043]::sys::backtrace::__rust_end_short_backtrace::<std[265a0665abe5e043]::panicking::begin_panic<rustc_errors[99e98b0bbb24b16c]::ExplicitBug>::{closure#0}, !>
9: 0x7dabe2173aa6 - std[265a0665abe5e043]::panicking::begin_panic::<rustc_errors[99e98b0bbb24b16c]::ExplicitBug>
10: 0x7dabe2189cc1 - <rustc_errors[99e98b0bbb24b16c]::diagnostic::BugAbort as rustc_errors[99e98b0bbb24b16c]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
11: 0x7dabe283efcd - <rustc_errors[99e98b0bbb24b16c]::DiagCtxtHandle>::span_bug::<rustc_span[e524fe640245e945]::span_encoding::Span, alloc[f7eab8ff756c9dca]::string::String>
12: 0x7dabe28719f8 - rustc_middle[d7f4792719c666e4]::util::bug::opt_span_bug_fmt::<rustc_span[e524fe640245e945]::span_encoding::Span>::{closure#0}
13: 0x7dabe2871a2a - rustc_middle[d7f4792719c666e4]::ty::context::tls::with_opt::<rustc_middle[d7f4792719c666e4]::util::bug::opt_span_bug_fmt<rustc_span[e524fe640245e945]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7dabe285dd0b - rustc_middle[d7f4792719c666e4]::ty::context::tls::with_context_opt::<rustc_middle[d7f4792719c666e4]::ty::context::tls::with_opt<rustc_middle[d7f4792719c666e4]::util::bug::opt_span_bug_fmt<rustc_span[e524fe640245e945]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7dabe285cb27 - rustc_middle[d7f4792719c666e4]::util::bug::span_bug_fmt::<rustc_span[e524fe640245e945]::span_encoding::Span>
16: 0x7dabe42a8335 - <rustc_const_eval[4703571d7d15c956]::interpret::eval_context::InterpCx<rustc_const_eval[4703571d7d15c956]::const_eval::dummy_machine::DummyMachine>>::write_immediate_to_mplace_no_validate
17: 0x7dabe1324a74 - <rustc_mir_transform[4b24bac2940eeb7a]::gvn::GVN as rustc_middle[d7f4792719c666e4]::mir::MirPass>::run_pass
18: 0x7dabe3802151 - rustc_mir_transform[4b24bac2940eeb7a]::pass_manager::run_passes_inner
19: 0x7dabe432ccb3 - rustc_mir_transform[4b24bac2940eeb7a]::optimized_mir
20: 0x7dabe435039b - rustc_query_impl[c2f5f95cecf69337]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c2f5f95cecf69337]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 8usize]>>
21: 0x7dabe382af27 - rustc_query_system[f8c10878fe801c76]::query::plumbing::try_execute_query::<rustc_query_impl[c2f5f95cecf69337]::DynamicConfig<rustc_query_system[f8c10878fe801c76]::query::caches::DefIdCache<rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[c2f5f95cecf69337]::plumbing::QueryCtxt, false>
22: 0x7dabe382a4df - rustc_query_impl[c2f5f95cecf69337]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
23: 0x7dabe0184814 - <rustc_middle[d7f4792719c666e4]::ty::context::TyCtxt>::instance_mir
24: 0x7dabe3bb63f9 - rustc_interface[8b2190a255f69c87]::passes::run_required_analyses
25: 0x7dabe43663de - rustc_interface[8b2190a255f69c87]::passes::analysis
26: 0x7dabe43663b1 - rustc_query_impl[c2f5f95cecf69337]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[c2f5f95cecf69337]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 1usize]>>
27: 0x7dabe47706ee - rustc_query_system[f8c10878fe801c76]::query::plumbing::try_execute_query::<rustc_query_impl[c2f5f95cecf69337]::DynamicConfig<rustc_query_system[f8c10878fe801c76]::query::caches::SingleCache<rustc_middle[d7f4792719c666e4]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[c2f5f95cecf69337]::plumbing::QueryCtxt, false>
28: 0x7dabe477044f - rustc_query_impl[c2f5f95cecf69337]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
29: 0x7dabe45dd169 - rustc_interface[8b2190a255f69c87]::interface::run_compiler::<core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>, rustc_driver_impl[18c1de8e11281817]::run_compiler::{closure#0}>::{closure#1}
30: 0x7dabe45025d0 - std[265a0665abe5e043]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8b2190a255f69c87]::util::run_in_thread_with_globals<rustc_interface[8b2190a255f69c87]::util::run_in_thread_pool_with_globals<rustc_interface[8b2190a255f69c87]::interface::run_compiler<core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>, rustc_driver_impl[18c1de8e11281817]::run_compiler::{closure#0}>::{closure#1}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>
31: 0x7dabe4502c3a - <<std[265a0665abe5e043]::thread::Builder>::spawn_unchecked_<rustc_interface[8b2190a255f69c87]::util::run_in_thread_with_globals<rustc_interface[8b2190a255f69c87]::util::run_in_thread_pool_with_globals<rustc_interface[8b2190a255f69c87]::interface::run_compiler<core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>, rustc_driver_impl[18c1de8e11281817]::run_compiler::{closure#0}>::{closure#1}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[12164080e42249fc]::result::Result<(), rustc_span[e524fe640245e945]::ErrorGuaranteed>>::{closure#1} as core[12164080e42249fc]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
32: 0x7dabe4502fab - std::sys::pal::unix::thread::Thread::new::thread_start::hbf34cdaead1142d4
33: 0x7dabe5dab39d - <unknown>
34: 0x7dabe5e3049c - <unknown>
35: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.82.0-nightly (fbce03b19 2024-08-14) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z dump-mir-dir=dir
query stack during panic:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 3 previous errors; 1 warning emitted
For more information about this error, try `rustc --explain E0277`.
```
</p>
</details>
<!--
query stack:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
-->
| I-ICE,T-compiler,C-bug,S-bug-has-test | low | Critical |
2,466,931,176 | kubernetes | [Bug] Scheduler fails to schedule a pod due to a race condition | ### What happened?
A bug in the scheduler increases the time spent on scheduling a pod **from <1 second to 5 minutes**.
We discovered the bug when repeating the steps described in a fixed bug report [#106780](https://github.com/kubernetes/kubernetes/issues/106780). The setting and steps are as follows:
We have 1 node with 32Gi memory, and 3 pods to schedule:
p1 / request 10Gi memory / low-priority,
p2 / request 25Gi memory / medium-priority,
p3 / request 20Gi memory / high-priority.
We perform the following steps:
1. add `p1` and wait until `p1` is running
2. add `p2` and wait until `p1` is terminating; this is because there are not enough resources to host both pods and `p2` has a higher priority.
**Note that at this point, `p2` is not running yet.**
3. add `p3` and wait until `p3` is running; `p2` is pending because `p2` and `p3` cannot coexist on `node0`.
4. re-add `p1` and wait until `p1` is running; `p1` should be able to run since there are enough resources to host both pods: 10 + 20 < 32
Interestingly, we find that if step 4 happens immediately after step 3, `p1` fails to get scheduled with the reason `Insufficient Memory` and is eventually scheduled only after 5 minutes. If there is a short pause between step 3 and step 4 (say 2 seconds), `p1` is scheduled properly and immediately.
#### What is the root cause?
After code and log inspections (see below), we have found the root cause of this bug: it's a race condition between (A) `p1` being re-added and handled by the scheduler, and (B) `p2`'s `nominated_node_name` being cleared. If A happens before B, the bug occurs.
#### Why do 2 seconds make the difference?
The reason is that `p2`'s `nominated_node_name` is cleared *when it (a nominated pod) is scheduled again and fails*. But since `p2`'s first scheduling attempt failed (it had to wait for preemption to finish), it is put into the BackoffQueue and must wait a few seconds before being moved back to the ActiveQueue and scheduled again.
So
1. If `p1` is added before `p2` is scheduled again (without the 2s sleep), it will **fail** because `p2`'s `nominated_node_name` has not been cleared.
2. But if we add the 2s sleep, `p2` is scheduled again and its `nominated_node_name` is cleared before `p1` is added. In this case, `p1` will be **schedulable** on node0.
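The race can be reduced to a toy feasibility model (an illustration, not kube-scheduler code): a pod's fit is checked against the pods running on the node *plus* the pods still nominated to it, so `p1` is rejected until `p2`'s stale nomination is dropped.

```python
# Toy model of the race: feasibility counts running pods plus pods still
# nominated to the node, mirroring how the scheduler treats nominated pods
# during filtering.

CAPACITY = 32  # Gi of memory on node0, as in the repro

def fits(pod_mem, running, nominated):
    """A pod fits if running + nominated + itself stays within capacity."""
    return sum(running) + sum(nominated) + pod_mem <= CAPACITY

running = [20]        # p3 is bound to node0 (20Gi)
nominated = [25]      # p2 (25Gi) still carries nominatedNodeName=node0

before_clear = fits(10, running, nominated)   # A before B: p1 (10Gi) rejected
nominated.clear()                             # B: p2's stale nomination cleared
after_clear = fits(10, running, nominated)    # p1 now fits: 20 + 10 <= 32

print(before_clear, after_clear)
```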
<details><summary>Some scheduler logs of the bug-free and buggy trace</summary>
The lines starting with # are logged just before scheduling p1 and p2. You can notice that their order is reversed in the two traces.
Bug-free trace (B happens before A):
```markdown
# with sleep(2s)
I0814 14:19:47.939731 32287 eventhandlers.go:149] "Add event for unscheduled pod" pod="default/p3"
I0814 14:19:47.939756 32287 schedule_one.go:83] "About to try and schedule pod" pod="default/p3"
I0814 14:19:47.939768 32287 schedule_one.go:96] "Attempting to schedule pod" pod="default/p3"
I0814 14:19:47.939878 32287 default_binder.go:53] "Attempting to bind pod to node" logger="Bind.DefaultBinder" pod="default/p3" node="node0"
I0814 14:19:47.944902 32287 eventhandlers.go:313] "Delete event for scheduled pod" pod="default/p1"
I0814 14:19:47.948208 32287 scheduling_queue.go:1312] "Pod moved to an internal scheduling queue" pod="default/p2" event="AssignedPodDelete" queue="Backoff" hint=1
I0814 14:19:47.948799 32287 schedule_one.go:314] "Successfully bound pod to node" pod="default/p3" node="node0" evaluatedNodes=1 feasibleNodes=1
I0814 14:19:47.948891 32287 eventhandlers.go:201] "Delete event for unscheduled pod" pod="default/p3"
I0814 14:19:47.948901 32287 eventhandlers.go:231] "Add event for scheduled pod" pod="default/p3"
I0814 14:19:47.956662 32287 eventhandlers.go:268] "Update event for scheduled pod" pod="default/p3"
I0814 14:19:49.047040 32287 schedule_one.go:83] "About to try and schedule pod" pod="default/p2"
# I0814 14:19:49.047238 32287 schedule_one.go:96] "Attempting to schedule pod" pod="default/p2"
I0814 14:19:49.047886 32287 schedule_one.go:1055] "Unable to schedule pod; no fit; waiting" pod="default/p2" err="0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod."
I0814 14:19:49.048895 32287 schedule_one.go:1122] "Updating pod condition" pod="default/p2" conditionType="PodScheduled" conditionStatus="False" conditionReason="Unschedulable"
I0814 14:19:49.064048 32287 eventhandlers.go:174] "Update event for unscheduled pod" pod="default/p2"
I0814 14:19:49.978428 32287 eventhandlers.go:149] "Add event for unscheduled pod" pod="default/p1"
I0814 14:19:49.978562 32287 schedule_one.go:83] "About to try and schedule pod" pod="default/p1"
# I0814 14:19:49.978580 32287 schedule_one.go:96] "Attempting to schedule pod" pod="default/p1"
I0814 14:19:49.978879 32287 default_binder.go:53] "Attempting to bind pod to node" logger="Bind.DefaultBinder" pod="default/p1" node="node0"
I0814 14:19:49.985438 32287 eventhandlers.go:201] "Delete event for unscheduled pod" pod="default/p1"
I0814 14:19:49.985476 32287 eventhandlers.go:231] "Add event for scheduled pod" pod="default/p1"
I0814 14:19:49.985747 32287 schedule_one.go:314] "Successfully bound pod to node" pod="default/p1" node="node0" evaluatedNodes=1 feasibleNodes=1
```
Buggy trace (A happens before B):
```markdown
# without sleep(2s)
I0814 14:17:47.151445 31615 eventhandlers.go:149] "Add event for unscheduled pod" pod="default/p3"
I0814 14:17:47.151515 31615 eventhandlers.go:174] "Update event for unscheduled pod" pod="default/p2"
I0814 14:17:47.151802 31615 schedule_one.go:83] "About to try and schedule pod" pod="default/p3"
I0814 14:17:47.151812 31615 schedule_one.go:96] "Attempting to schedule pod" pod="default/p3"
I0814 14:17:47.151926 31615 default_binder.go:53] "Attempting to bind pod to node" logger="Bind.DefaultBinder" pod="default/p3" node="node0"
I0814 14:17:47.156395 31615 eventhandlers.go:313] "Delete event for scheduled pod" pod="default/p1"
I0814 14:17:47.156444 31615 scheduling_queue.go:1312] "Pod moved to an internal scheduling queue" pod="default/p2" event="AssignedPodDelete" queue="Backoff" hint=1
I0814 14:17:47.160464 31615 eventhandlers.go:201] "Delete event for unscheduled pod" pod="default/p3"
I0814 14:17:47.160479 31615 eventhandlers.go:231] "Add event for scheduled pod" pod="default/p3"
I0814 14:17:47.160514 31615 schedule_one.go:314] "Successfully bound pod to node" pod="default/p3" node="node0" evaluatedNodes=1 feasibleNodes=1
I0814 14:17:47.168539 31615 eventhandlers.go:268] "Update event for scheduled pod" pod="default/p3"
I0814 14:17:47.177608 31615 eventhandlers.go:149] "Add event for unscheduled pod" pod="default/p1"
I0814 14:17:47.177642 31615 schedule_one.go:83] "About to try and schedule pod" pod="default/p1"
# I0814 14:17:47.177650 31615 schedule_one.go:96] "Attempting to schedule pod" pod="default/p1"
I0814 14:17:47.177775 31615 schedule_one.go:1055] "Unable to schedule pod; no fit; waiting" pod="default/p1" err="0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod."
I0814 14:17:47.177802 31615 schedule_one.go:1122] "Updating pod condition" pod="default/p1" conditionType="PodScheduled" conditionStatus="False" conditionReason="Unschedulable"
I0814 14:17:47.187472 31615 eventhandlers.go:174] "Update event for unscheduled pod" pod="default/p1"
I0814 14:17:48.138665 31615 schedule_one.go:83] "About to try and schedule pod" pod="default/p2"
# I0814 14:17:48.139950 31615 schedule_one.go:96] "Attempting to schedule pod" pod="default/p2"
I0814 14:17:48.140305 31615 schedule_one.go:1055] "Unable to schedule pod; no fit; waiting" pod="default/p2" err="0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod."
I0814 14:17:48.140582 31615 schedule_one.go:1122] "Updating pod condition" pod="default/p2" conditionType="PodScheduled" conditionStatus="False" conditionReason="Unschedulable"
I0814 14:17:48.156537 31615 eventhandlers.go:174] "Update event for unscheduled pod" pod="default/p2"
```
</details>
#### How to fix it?
1. A potential fix is:
Subscribe all plugins that need to be aware of pods to events triggered by updates to a pod's `nominated_node_name`. This way, a failed pod will be retried as soon as any pod's `nominated_node_name` is cleared.
2. If this is too radical and could cause performance issues, another solution is:
When a pod is bound to a node with nominated pods, either
a) re-check the previously nominated pods (that nominated to this node),
b) or clear the `nominated_node_name` of these nominated pods.
In this way, upcoming pods won't fail unreasonably and won't have to wait 5 minutes for the next retry.
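Option (b) can be sketched as a tiny simulation. The function and queue names below are illustrative only, not the real kube-scheduler API:

```python
# Sketch of option (b): when a pod binds to a node, clear the stale
# nominated_node_name of any pods that had nominated that node and move
# them straight back to the active queue, instead of leaving them in the
# unschedulable queue until the ~5-minute backoff retry fires.
def on_pod_bound(bound_pod, node, unschedulable_queue, active_queue):
    still_waiting = []
    for pod in unschedulable_queue:
        if pod.get("nominated_node_name") == node:
            pod["nominated_node_name"] = None  # clear the stale nomination
            active_queue.append(pod)           # retry immediately
        else:
            still_waiting.append(pod)
    unschedulable_queue[:] = still_waiting

unsched = [{"name": "p1", "nominated_node_name": "node0"}]
active = []
on_pod_bound({"name": "p3"}, "node0", unsched, active)
print([p["name"] for p in active])  # ['p1']
```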
### What did you expect to happen?
`p1` should be immediately scheduled.
### How can we reproduce it (as minimally and precisely as possible)?
We're using [kwok](https://kwok.sigs.k8s.io/) to reproduce this issue.
```yaml
# node0.yaml
apiVersion: v1
kind: Node
metadata:
name: node0
labels:
kubernetes.io/hostname: node0
status:
allocatable:
cpu: "32"
memory: "32Gi"
pods: "110"
capacity:
cpu: "32"
memory: "32Gi"
pods: "110"
```
```yaml
# p1.yaml
apiVersion: v1
kind: Pod
metadata:
name: p1
spec:
containers:
- name: p1-container
image: nginx
resources:
requests:
memory: "10Gi"
priorityClassName: low-priority
```
```yaml
# p2.yaml
apiVersion: v1
kind: Pod
metadata:
name: p2
spec:
containers:
- name: p2-container
image: nginx
resources:
requests:
memory: "25Gi"
priorityClassName: medium-priority
```
```yaml
# p3.yaml
apiVersion: v1
kind: Pod
metadata:
name: p3
spec:
containers:
- name: p3-container
image: nginx
resources:
requests:
memory: "20Gi"
priorityClassName: high-priority
```
```yaml
# priority_class.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: medium-priority
value: 500000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: low-priority
value: 0
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
```
```python
# To run this reproduction script, please create a new KWOK cluster first.
# kwokctl create cluster --v DEBUG
# Also need to install the kubernetes python client library,
# and place all yaml files in the same folder with the script.
# ---
# To check the scheduler logs:
# kwokctl logs kube-scheduler
# ---
# To reproduce the bug case, please comment out `sleep(2)` between step 5 and step 6.
# To reproduce the normal case, please add `sleep(2)` between step 5 and step 6.
import unittest
import time
import shutil
import logging
from time import sleep
from os import path, makedirs
from logging import getLogger
from kubernetes import config, watch
from kubernetes.client import *
from kubernetes.utils import *
from time import strftime
logger = getLogger(__name__)
config.load_kube_config()
v1 = CoreV1Api()
k8s_cli = ApiClient()
log_dir = path.dirname(__file__)
if __name__ == "__main__":
log_path = path.join(log_dir, f"issue-106780-reproduction-{strftime('%Y-%m-%d-%H-%M-%S')}")
shutil.rmtree(log_path, ignore_errors=True)
makedirs(log_path, exist_ok=True)
log_file = path.join(log_path, 'reproduce.log')
logging.basicConfig(level=logging.INFO,
filemode='w',
filename=log_file,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%H:%M:%S')
node0_yaml = path.join(path.dirname(__file__), 'node0.yaml')
p1_yaml = path.join(path.dirname(__file__), 'p1.yaml')
p2_yaml = path.join(path.dirname(__file__), 'p2.yaml')
p3_yaml = path.join(path.dirname(__file__), 'p3.yaml')
priority_class_yaml = path.join(path.dirname(__file__), 'priority_class.yaml')
w = watch.Watch()
# 1. create priority class
create_from_yaml(k8s_client=k8s_cli, yaml_file=priority_class_yaml)
logger.info(f"PriorityClass created")
# 2. add node0
create_from_yaml(k8s_client=k8s_cli, yaml_file=node0_yaml)
logger.info(f"Node: node0 created")
# 3. add p1
create_from_yaml(k8s_client=k8s_cli, yaml_file=p1_yaml)
logger.info(f"Pod: p1 added")
# 3. check p1 is running
p1_running = False
for event in w.stream(v1.list_namespaced_pod,
field_selector=f'metadata.name=p1',
namespace="default",
timeout_seconds=int(5)):
p1 = event['object']
if event['object'].status.phase == "Running":
p1_running = True
logger.info(f"Pod: {p1.metadata.name} scheduled on node {p1.spec.node_name}")
break
assert p1_running
# 4. add p2
create_from_yaml(k8s_client=k8s_cli, yaml_file=p2_yaml)
logger.info(f"Pod: p2 added")
# 4. check p1 is terminating (by p2)
for event in w.stream(v1.list_namespaced_pod,
field_selector=f'metadata.name=p1',
namespace="default",
timeout_seconds=int(3)):
logger.info(f"Pod: {event['object'].metadata.name}, event: {event['type']}")
logger.info(
f"deletion_timestamp: {event['object'].metadata.deletion_timestamp}, status: {event['object'].status.phase}")
if event['object'].metadata.deletion_timestamp is not None and event['object'].status.phase in (
'Pending', 'Running'):
logger.info(f"Pod: {p1.metadata.name} is terminating")
break
# 5. add p3 and wait for p3 running (p3 will terminate p2 / let p2 pending)
create_from_yaml(k8s_client=k8s_cli, yaml_file=p3_yaml)
p3_running = False
for event in w.stream(v1.list_namespaced_pod,
field_selector=f'metadata.name=p3',
namespace="default",
timeout_seconds=int(5)):
p3 = event['object']
if p3.status.phase == "Running":
logger.info(f"Pod: {p3.metadata.name} scheduled on node {p3.spec.node_name}")
p3_running = True
break
assert p3_running
# sleep(2) # -> without this on 1.30.2 will cause pod scheduled failed at the first try
# 6. reapply and check p1 schedulable
create_from_yaml(k8s_client=k8s_cli, yaml_file=p1_yaml)
tic = time.time()
logger.info(f"Pod: p1 re-applied")
# 7. watch pod1 events and wait it to be scheduled
p1_running = False
for event in w.stream(v1.list_namespaced_pod,
field_selector=f'metadata.name=p1',
namespace="default",
timeout_seconds=int(600)):
p1 = event['object']
if p1.status.phase == "Running":
p1_running = True
logger.info(f"Pod: {p1.metadata.name} scheduled on node {p1.spec.node_name}")
break
if not p1_running:
logger.error(f"Pod: {p1.metadata.name} is not scheduled in 600s")
assert False
else:
logger.info(f"Pod: {p1.metadata.name}, " + f"time between add and scheduled: {time.time() - tic}")
```
### Anything else we need to know?
Tested on 1.30.2
/kind bug
/sig scheduling
### Kubernetes version
<details>
Tested on 1.30.2
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,needs-triage | low | Critical |
2,466,945,913 | flutter | ☂️ RPCError: <method name>: (-32000) Service connection disposed | **(added on 8/20) Summary for archaeologists from the future:** a couple of the changes to package:vm_service made it such that any pending requests are completed with `RPCError`s when the service is suddenly shut down (e.g. the user shut down their device during `flutter run`). Such errors should be captured by the tool and either discarded entirely or be captured and replaced by a helpful error message where appropriate.
(added on 11/22): Per https://github.com/flutter/flutter/issues/153471#issuecomment-2403337193, there probably are legitimate bug(s) here, but I am not sure if they still exist in the 3.27 beta.
____________
Tool crashes with messages of the form `RPCError: <RPC method name>: (-32000) Service connection disposed` have come to represent a significant number of tool crashes. On 3.24.0 to-date (8/24), 1870 crashes have been reported by 410-517 clients.
This exception comes with a few stack traces, hence the creation of this umbrella issue.
- [ ] https://github.com/flutter/flutter/issues/153472 (affected 410 clients, but no stack trace)
* This one has no stack trace. I wonder if this means this must come from [`VMService::dispose`](https://github.com/dart-lang/sdk/blob/6c92babf4d3a256c15d98f126a5a6c30ed148c7a/pkg/vm_service/lib/src/vm_service.dart#L1739-L1745) since that callsite does not provide a stack trace in its `Completer::completeError` call.
- [ ] https://github.com/flutter/flutter/issues/153473 (only affected 7 clients)
- [ ] https://github.com/flutter/flutter/issues/153474 (affected 100 clients)
* Originates from [`VMService::_call`](https://github.com/dart-lang/sdk/blob/6c92babf4d3a256c15d98f126a5a6c30ed148c7a/pkg/vm_service/lib/src/vm_service.dart#L1769-L1773). I wonder if the tool is making vm service calls after the vm service has been disposed and what might be causing this.
Edit: some more instances since we now get stack traces on these:
- [ ] https://github.com/flutter/flutter/issues/154905
- [ ] https://github.com/flutter/flutter/issues/154906
- [ ] https://github.com/flutter/flutter/issues/154903 | c: crash,P1,team-tool,triaged-tool | medium | Critical |
2,466,992,928 | go | internal/trace: TestTraceGOMAXPROCS/Default failures | ```
#!watchflakes
default <- pkg == "internal/trace" && test == "TestTraceGOMAXPROCS/Default"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8739588635934945329)):
=== RUN TestTraceGOMAXPROCS/Default
trace_test.go:614: stderr: SIGBUS: bus error
PC=0x52590 m=4 sigcode=2 addr=0xc00001a1600002
goroutine 0 gp=0xc0003461c0 m=4 mp=0xc000342008 [idle]:
runtime.runqempty(...)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:6672
runtime.checkRunqsNoP({0xc00002c000?, 0x0?, 0x0?}, {0xc00000e12c?, 0x0?, 0x0?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:3741 +0x130 fp=0xc000359dd8 sp=0xc000359db8 pc=0x52590
runtime.findRunnable()
...
r20 0x0 r21 0x1
r22 0xc0003461c0 r23 0xc000359d38
r24 0xc000342008 r25 0x0
r26 0xffffffff13ad7807 r27 0xffffffffecca5c93
r28 0xbaf106b r29 0xea398
r30 0xffffffffffffffc0 r31 0x68216d32
pc 0x52590 link 0x51194
exit status 2
trace_test.go:616: exit status 1
--- FAIL: TestTraceGOMAXPROCS/Default (0.49s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,NeedsInvestigation,arch-loong64,compiler/runtime | low | Critical |
2,466,998,197 | flutter | Flutter needs to declare `abiFilters` for only supported architectures by default | ### Steps to reproduce
Flutter does not set `abiFilters` in its default `flutter create` template, which means that without intervention, apps published to the Play Store all declare that they will run on x86 (x86_32), which is not (and never will be) supported by Flutter.
Users with x86 devices who install the app from Google Play will be met with a crash, which is logged in Crashlytics as `java.lang.UnsatisfiedLinkError: dlopen failed: library "libflutter.so" not found`.
This issue is forked from #151638 at the request of @danagbemava-nc, who said:
> It appears there was an attempt to add this in https://github.com/flutter/flutter/pull/135529 but it was later reverted in https://github.com/flutter/flutter/pull/142089 and it must have dropped off everyone's radar. Can you file a new proposal for adding it?
### Expected results
Flutter should opt in to all of its supported architectures by default using `ndk.abiFilters`.
### Actual results
Failing to set `abiFilters` means the app declares that it can run on every architecture supported on Google Play, which simply does not reflect reality.
### Code sample
<details open><summary>Code sample</summary>
`android/app/build.gradle` should include something like this when `flutter create` is called:
```groovy
android {
defaultConfig {
ndk.abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86_64'
}
}
```
</details>
### Screenshots or Video
N/A
### Logs
N/A
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ flutter doctor -v
[✓] Flutter (Channel beta, 3.24.0-0.2.pre, on Fedora Linux 40 (Workstation Edition) 6.10.3-200.fc40.x86_64, locale en_US.utf8)
• Flutter version 3.24.0-0.2.pre on channel beta at /opt/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 7c6b7e9ca4 (2 weeks ago), 2024-07-30 14:26:44 +0700
• Engine revision 6e4deceb38
• Dart version 3.5.0 (build 3.5.0-323.2.beta)
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /home/luke/Android/Sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /opt/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• clang version 18.1.6 (Fedora 18.1.6-3.fc40)
• cmake version 3.28.2
• ninja version 1.12.1
• pkg-config version 2.1.1
[✓] Android Studio (version 2023.3)
• Android Studio at /opt/android-studio
• Flutter plugin version 79.0.2
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.92.1)
• VS Code at /usr/share/code
• Flutter extension version 3.95.20240801
[✓] VS Code (version 1.93.0-insider)
• VS Code at /usr/share/code-insiders
• Flutter extension version 3.91.20240529
[✓] Connected device (2 available)
• Linux (desktop) • linux • linux-x64 • Fedora Linux 40 (Workstation Edition) 6.10.3-200.fc40.x86_64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.119
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-android,tool,c: proposal,P2,team-android,triaged-android | low | Critical |
2,467,004,625 | pytorch | `init` of `PReLU()` works with `int`, `complex` and `bool` against what the doc says. | ### 🐛 Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html) of `PReLU()` says the type of the `init` argument is `float`, as shown below:
- init ([float](https://docs.python.org/3/library/functions.html#float)) – the initial value of a. Default: 0.25
However, the `init` argument also accepts `int`, `complex`, and `bool` values, contrary to what [the doc](https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html) says, as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([-1., 0., 1.])
prelu = nn.PReLU(init=0)
prelu(input=my_tensor)
# tensor([-0., 0., 1.], grad_fn=<PreluKernelBackward0>)
prelu = nn.PReLU(init=0.+0.j)
prelu(input=my_tensor)
# tensor([-0., 0., 1.], grad_fn=<PreluKernelBackward0>)
prelu = nn.PReLU(init=False)
prelu(input=my_tensor)
# tensor([-0., 0., 1.], grad_fn=<PreluKernelBackward0>)
```
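Part of the explanation is plain Python duck typing: `bool` is a subclass of `int`, so any numeric argument accepts it silently. A minimal, framework-free check of the documented `float` contract (a hypothetical validator, not part of PyTorch) could look like this:

```python
# bool is a subclass of int in Python, which is why duck-typed numeric
# arguments silently accept True/False.
print(isinstance(False, int))  # True

# A strict check matching the documented contract ("init (float)"):
def is_documented_init(x):
    # Accept only real floats; reject int, bool, and complex.
    return isinstance(x, float)

print([is_documented_init(v) for v in (0.25, 0, False, 0j)])
# [True, False, False, False]
```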
### Versions
```python
import torch
torch.__version__ # 2.3.1+cu121
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,467,079,346 | pytorch | Using `PReLU()` with a `complex` tensor and without `dtype=torch.complex64` should return a simple error message | ### 🚀 The feature, motivation and pitch
### Case1:
Using [PReLU()](https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html) with a `complex` tensor and `dtype=torch.complex64` gives the simple error message shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([-1.+0.j, 0.+0.j, 1.+0.j])
prelu = nn.PReLU(dtype=torch.complex64)
prelu(input=my_tensor) # Error
```
> RuntimeError: "prelu_cpu" not implemented for 'ComplexFloat'
### Case2:
But using `PReLU()` with a `complex` tensor and without `dtype=torch.complex64` gives a less direct error message, as shown below:
```python
import torch
my_tensor = torch.tensor([-1.+0.j, 0.+0.j, 1.+0.j])
prelu = nn.PReLU()
prelu(input=my_tensor) # Error
```
> RuntimeError: prelu: Type promoting not supported. Got ComplexFloat and Float
### Alternatives
So for Case 2, the simple error message from Case 1 should also be returned:
> RuntimeError: "prelu_cpu" not implemented for 'ComplexFloat'
There is then no need for the less direct Case 2 message below:
> RuntimeError: prelu: Type promoting not supported. Got ComplexFloat and Float
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,467,091,963 | angular | Check if a http interceptor is provided | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
In my library, I expose an HTTP interceptor. Since it is a deliberate design decision of the new API to require the user to add the interceptor manually in the `withInterceptors` function (#51303), there is a need to throw an exception or otherwise notify the user that they forgot to add the specific interceptor.
Related SO https://stackoverflow.com/questions/78869196/how-to-check-if-an-interceptor-is-provided-in-withinterceptors-fucntion?noredirect=1#comment139055699_78869196
How can this be accomplished?
### Proposed solution
An injection token that can be injected in a singleton service to inspect the registered HTTP interceptors, and can throw an error accordingly.
### Alternatives considered
Many libraries may include interceptors, and the new API design prevents authors from automatically adding the needed interceptors. There must be a way to notify users when they use a library without adding the interceptor it requires. | feature,help wanted,area: common/http,P4 | low | Critical |
2,467,154,022 | pytorch | When using a wrong value for `PReLU()`, a PReLU's error message should be returned directly instead of `empty()` error. | ### 🚀 The feature, motivation and pitch
Using a wrong value for [PReLU()](https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html) gives an error message from `empty()` rather than from `PReLU` itself, as shown below:
```python
import torch
my_tensor = torch.tensor([-1., 0., 1.])
prelu = nn.PReLU(True) # Error
```
```
TypeError: empty() received an invalid combination of arguments - got (bool, dtype=NoneType, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```
So, I set `requires_grad=True` on `PReLU()` because the `empty()` error message mentions a `requires_grad` argument, but I got a `PReLU` error message, as shown below:
```python
import torch
my_tensor = torch.tensor([-1., 0., 1.])
prelu = nn.PReLU(requires_grad=True) # Error
```
> TypeError: PReLU.__init__() got an unexpected keyword argument 'requires_grad'
### Alternatives
So, when using a wrong value for [PReLU()](https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html), a `PReLU` error message should be returned directly instead of the `empty()` error.
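For illustration, a hypothetical up-front argument check (not the actual PyTorch implementation) could surface a `PReLU`-specific message before `empty()` is ever reached:

```python
def check_num_parameters(num_parameters):
    # Reject bool explicitly: isinstance(True, int) is True in Python,
    # so the bool check must come first.
    if isinstance(num_parameters, bool) or not isinstance(num_parameters, int):
        raise TypeError(
            f"PReLU: num_parameters must be an int, got {type(num_parameters).__name__}"
        )
    return num_parameters

try:
    check_num_parameters(True)
except TypeError as e:
    print(e)  # PReLU: num_parameters must be an int, got bool
```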
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,467,221,814 | vscode | Add an action to update all extensions if there are updates even if auto update is enabled |
Does this issue occur when all extensions are disabled?: Yes/No
I cannot find a way to auto-update all extensions installed in VS Code. Each extension has an "Update" button beside it but there doesn't seem to be a way to update all of them at once -

The menu with the ... dots onthe right also doesn't have an Update all option -

- VS Code Version: 1.92.1 (user setup)
- OS Version: Windows 11

| feature-request,extensions | low | Critical |
2,467,228,881 | pytorch | `torch.jit.script` fails on a simple function | ### 🐛 Describe the bug
This simple function is valid Python, but `torch.jit.script` fails to compile it.
Perhaps `torch.jit.script` failed to parse `1 ++ 2` correctly, causing the error.
Since `1 ++ 2` is parsed as `1 + (+2)`, the result is `3`.
``` python
import torch
@torch.jit.script
def sample():
return 1 ++ 2
```
```
Traceback (most recent call last):
File "C:\sample\bug.py", line 5, in <module>
def sample():
File "C:\sample\.venv\lib\site-packages\torch\jit\_script.py", line 1392, in script
ast = get_jit_def(obj, obj.__name__)
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 372, in get_jit_def
return build_def(
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 433, in build_def
return Def(Ident(r, def_name), decl, build_stmts(ctx, body))
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 195, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 195, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 406, in __call__
return method(ctx, node)
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 721, in build_Return
return Return(r, None if stmt.value is None else build_expr(ctx, stmt.value))
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 406, in __call__
return method(ctx, node)
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 950, in build_BinOp
rhs = build_expr(ctx, expr.right)
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 406, in __call__
return method(ctx, node)
File "C:\sample\.venv\lib\site-packages\torch\jit\frontend.py", line 976, in build_UnaryOp
expr.range(), "unsupported unary operator: " + op.__name__
AttributeError: 'UnaryOp' object has no attribute 'range'
```
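The parsing claim can be verified directly with Python's `ast` module, which shows that `1 ++ 2` is `BinOp(1, Add, UnaryOp(UAdd, 2))`, i.e. `1 + (+2)`:

```python
import ast

tree = ast.parse("1 ++ 2", mode="eval")
expr = tree.body
print(type(expr).__name__, type(expr.op).__name__)              # BinOp Add
print(type(expr.right).__name__, type(expr.right.op).__name__)  # UnaryOp UAdd
print(eval(compile(tree, "<expr>", "eval")))                    # 3
```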
### Versions
```
Collecting environment information...
PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 546.33
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2801
DeviceID=CPU0
Family=207
L2CacheSize=2560
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2801
Name=Intel(R) Core(TM) i9-10900E CPU @ 2.80GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] mypy==0.971
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] onnx==1.15.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxconverter-common==1.14.0
[pip3] onnxoptimizer==0.3.13
[pip3] onnxruntime-gpu==1.16.3
[pip3] torch==2.3.1+cu118
[pip3] torchvision==0.18.1+cu118
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,467,267,397 | tauri | [bug] visualViewport API fails to account for mobile keyboard height | ### Describe the bug
The `visualViewport` height in Tauri does not adjust correctly when the keyboard is shown or hidden, which affects the layout of websites. This issue is significant because many websites use `visualViewport` to adjust their layout dynamically, particularly to align tools and content above the on-screen keyboard.
### Reproduction
1. Use the code snippet below to set up a listener for viewport resizing:
```typescript
if (window.visualViewport) {
// bottomOffset = '30px'
viewportResizeUnsubscriber = createUnsubscribableListener(window, 'resize', () => {
positionBottom = `${window.innerHeight - window.visualViewport!.height}px`;
});
}
```
2. Open the application on an Android device or emulator with a Tauri environment.
3. Trigger the on-screen keyboard by focusing on an input field.
4. Observe that the visualViewport height does not change correctly, resulting in incorrect positioning of elements.
### Expected behavior
When the on-screen keyboard is shown or hidden, `window.visualViewport.height` should accurately reflect the visible portion of the viewport, allowing for proper adjustment of elements positioned above the keyboard.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 X64
✔ WebView2: 127.0.2651.98
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.80.0 (051478957 2024-07-21)
✔ cargo: 1.80.0 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.11.1
- pnpm: 8.15.5
- npm: 10.2.4
- bun: 1.1.8
[-] Packages
- tauri [RUST]: 2.0.0-beta.25
- tauri-build [RUST]: 2.0.0-beta.19
- wry [RUST]: 0.41.0
- tao [RUST]: 0.28.1
- @tauri-apps/api [NPM]: 2.0.0-beta.16
- @tauri-apps/cli [NPM]: 2.0.0-beta.23
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
This problem is related to [Tauri issue #9907](https://github.com/tauri-apps/tauri/issues/9907). | type: bug,status: needs triage | low | Critical |
2,467,336,199 | tauri | [bug] minSdkVersion 23 error | ### Describe the bug
pnpm tauri android dev
BUILD FAILED in 2s
Error Failed to assemble APK: command ["/Users/bigrocs/code/bigrocs/tauri/src-tauri/gen/android/gradlew", "--project-dir", "/Users/bigrocs/code/bigrocs/tauri/src-tauri/gen/android"] exited with code 1: command ["/Users/bigrocs/code/bigrocs/tauri/src-tauri/gen/android/gradlew", "--project-dir", "/Users/bigrocs/code/bigrocs/tauri/src-tauri/gen/android"] exited with code 1
ELIFECYCLE Command failed with exit code 1.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.4.1 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 19.2.0
- pnpm: 7.18.1
- yarn: 1.22.17
- npm: 8.19.3
```
```
{
"productName": "gorocs",
"version": "0.0.1",
"identifier": "com.gorocs.dev",
"build": {
"beforeDevCommand": "pnpm dev",
"devUrl": "http://192.168.3.80:1420",
"beforeBuildCommand": "pnpm build",
"frontendDist": "../dist"
},
"app": {
"windows": [
{
"title": "gorocs",
"width": 800,
"height": 600
}
],
"security": {
"csp": null
}
},
"bundle": {
"active": true,
"targets": "all",
"icon": [
"icons/32x32.png",
"icons/128x128.png",
"icons/128x128@2x.png",
"icons/icon.icns",
"icons/icon.ico"
],
"android": {
"minSdkVersion": 23
}
}
}
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: Android | low | Critical |
2,467,365,815 | PowerToys | [RegistryPreview] Rethink commandbar to increase density | ### Description of the new feature / enhancement


There are too many rounded corners on the page, and the segmentation is too messy. By aligning to the FDS design file and referring to first-party designs such as Notepad, we can achieve:
- Edge-to-edge segmentation that makes the design look simple, neat, and crisp.
- A match with first-party design that improves unity.
- Room to add multi-tab support in the future.
### Scenario when this would be used?
-
### Supporting information
https://www.figma.com/community/file/1159947337437047524/windows-ui-3

According to the official design specification, the standard title bar-toolbar-content view is as shown in the figure. There is no separate toolbar rounded background.
You can find it in the official Figma file | Idea-Enhancement,Help Wanted,Area-User Interface,Cost-Small,Product-Registry Preview | low | Minor |
2,467,432,468 | go | x/term: unaware of unicode double width characters, creates phantom when for instance 乒 or 😀 is present in the history | ### Go version
go1.22.6
### Output of `go env` in your module/workspace:
```shell
same issue on macos (terminal or iterm2), linux or windows 11 (terminal.app)
```
### What did you do?
type or paste "😀", then press backspace: a space is inserted
press up/down repeatedly to cycle through history: the previous prompt and the current one keep shifting right
### What did you see happen?
see above
### What did you expect to see?
😀 works like any other character | NeedsInvestigation | low | Minor |
2,467,442,666 | deno | add Uint8Array to/from base64 and hex | The tc39 [Uint8Array to/from base64 and hex](https://github.com/tc39/proposal-arraybuffer-base64) proposal is stage 3.
Support for this would significantly reduce boilerplate code.
Examples:
```js
let arr = new Uint8Array([72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100]);
console.log(arr.toBase64());
// 'SGVsbG8gV29ybGQ='
console.log(arr.toHex());
// '48656c6c6f20576f726c64'
```
```js
let string = 'SGVsbG8gV29ybGQ=';
console.log(Uint8Array.fromBase64(string));
// Uint8Array([72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100])
string = '48656c6c6f20576f726c64';
console.log(Uint8Array.fromHex(string));
// Uint8Array([72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100])
```
[Full spec](https://tc39.es/proposal-arraybuffer-base64/spec/). | feat,upstream | low | Minor |
2,467,470,868 | godot | Linux: Editor+OS+Mouse hangs for 5 seconds when closing a game via the stop button | ### Tested versions
4.2.2, v4.3.rc.custom_build [8e666adee]
### System information
Godot v4.3.rc (8e666adee) - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Sat Aug 3 10:24:35 UTC 2024 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 SUPER (nvidia; 550.107.02) - Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz (8 Threads)
### Issue description
When I close a running game from the editor via the UI stop button, my box (even the mouse) hangs for 4-6 seconds.
It does not hang when I close the game window through the OS (via the close button of the window).
Nothing in the logs (incl. command line with --verbose)
Uneducated guesses:
- Debugger?
- Problems with driver? Nvidia (NVIDIA - NVIDIA GeForce RTX 2060 SUPER)
- Audio?
### Steps to reproduce
Create a new project, save scene. Hit F6. Click STOP button in editor UI.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing,performance | low | Critical |
2,467,503,333 | PowerToys | Powertoys Run show incompletely | ### Microsoft PowerToys version
0.83.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Press Alt+Space and the Run window shows as follows:
<img width="756" alt="Capture" src="https://github.com/user-attachments/assets/b2acb012-b3a0-4f7e-b558-8a71b0652ad7">
### ✔️ Expected Behavior
To see the run window
### ❌ Actual Behavior
<img width="756" alt="Capture" src="https://github.com/user-attachments/assets/266fab72-35a4-4a6c-9e4f-73bbcd8a45cc">
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Needs-Triage,Needs-Team-Response | low | Minor |
2,467,506,100 | react | [React 19] RSC and `as` property pattern | ## Summary
Many UI libraries adopt the `as` property pattern, especially for the Icon component.
```
<Icon as={ArrowIcon} />
```
This pattern is currently unsupported by server components, and requires manually defining a client component for each icon one decides to use.
Is there a suggested alternative pattern the React team would like to promote? If not, do you think passing components as properties could be supported? In theory a component could be serialized (to a pointer) and sent over HTTP; the client could then either dynamically import it, or the bundler could statically analyze it and include it in the client-side bundle.
Alternatively, when a component is passed as a property, the server component could inline the child component so that they are forced to be rendered together. | React 19 | low | Minor |
2,467,514,205 | material-ui | [docs] expand on sx performance tradeoff | ### Related page
https://mui.com/system/getting-started/usage/#performance-tradeoffs
### Kind of issue
Missing information
### Issue description
Hello! The performance tradeoffs page for the `sx` prop lists a comparison between 4 ways of styling your components:
| Benchmark case |Code snippet | Time normalized |
|-----------------------|----------------|-------------|
| a. 1,000 primitives |`<div className="…">` |100ms |
| b. 1,000 components |`<Div>` | 112ms |
| c. 1,000 styled components | `<StyledDiv>`|181ms|
| d. 1,000 Box |`<Box sx={…}>`| 296ms |
I simply want to see 2 more cases:
e. `<Box p={...} m={..}>` (in other words, styled only with `Box` system properties). I would hope it performs exactly the same as case `d`, but I don't know.
f. This case, which somebody asked about before me, but nobody answered:
https://stackoverflow.com/questions/71481181/in-mui-v5-sx-prop-is-there-a-performance-difference-between-passing-an-object-li
```jsx
const sx = {p: 2};
const MyDiv = () => {
return <Box sx={sx}/>
}
```
So, the same as `d`, but with the `sx` object lifted up, meaning it hopefully isn't re-created 1,000 times.
It would also be very nice to see the difference in terms of memory use. I tried using `window.performance.memory` and the Memory tab in Chrome myself (for about 5 minutes) but failed to isolate this.
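As a rough analogy in plain Python (not MUI code; the function names are made up), lifting the style object out of the render function trades a fresh allocation per call for a single shared object, which is the whole point of case `f`:

```python
# Analogy for case `f` vs case `d`: a lifted style object is one shared object,
# while an inline one is a brand-new allocation on every "render".

LIFTED_SX = {"p": 2}  # hypothetical lifted style object (case `f`)

def render_lifted():
    # every "render" returns the same object identity
    return LIFTED_SX

def render_inline():
    # every "render" allocates a brand-new dict (case `d`)
    return {"p": 2}

same_when_lifted = render_lifted() is render_lifted()  # identical object
same_when_inline = render_inline() is render_inline()  # distinct objects
equal_either_way = render_lifted() == render_inline()  # contents match either way
```

Whether React/MUI can exploit that identity for memoization is exactly what the benchmark comparison would reveal.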
### Context
I sort of expect an argument in PRs at my workplace between `d` and `f` - `d` is inline, but `f` hopefully helps to fight this:

Thanks!
**Search keywords**: sx prop performance memory tradeoff | docs,performance,package: system,support: docs-feedback | low | Critical |
2,467,522,446 | pytorch | Floating point exception on H20 | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
mm = nn.Linear(1024, 18432, bias=True).cuda().bfloat16()
aa = torch.rand(2, 1024).cuda().bfloat16()
out = mm(aa)
```
I ran this code on an H20 and got the error: Floating point exception (core dumped).
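As a quick sanity check that the shapes in the repro are consistent (which would suggest the crash comes from the GPU kernel/driver path rather than the math itself), here is a plain-Python sketch of `nn.Linear`'s output shape; no GPU or torch required, and `linear_out_shape` is my own helper, not a torch API:

```python
# nn.Linear(in_features, out_features) computes y = x @ W.T + b,
# with W of shape (out_features, in_features).

def linear_out_shape(x_shape, in_features, out_features):
    """Output shape of nn.Linear for a 2-D input, or None on a feature mismatch."""
    batch, feat = x_shape
    if feat != in_features:
        return None
    return (batch, out_features)

shape = linear_out_shape((2, 1024), 1024, 18432)  # the repro's shapes: (2, 18432)
```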
### Versions
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Huawei Cloud EulerOS 2.0 (x86_64) (x86_64)
GCC version: (GCC) 10.3.1
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.0-136.12.0.86.r1526_92.hce2.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 176
On-line CPU(s) list: 0-175
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8458P
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 44
Socket(s): 2
Stepping: 8
BogoMIPS: 5400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves xfd cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.1 MiB (88 instances)
L1i cache: 2.8 MiB (88 instances)
L2 cache: 176 MiB (88 instances)
L3 cache: 165 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-43,88-131
NUMA node1 CPU(s): 44-87,132-175
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-lightning==2.3.0
[pip3] torch==2.3.0
[pip3] torchaudio==2.3.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.18.0
[pip3] triton==2.3.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] open-clip-torch 2.26.1 pypi_0 pypi
[conda] pytorch-lightning 2.3.0 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
cc @ptrblck @msaroufim | needs reproduction,module: cuda,triaged | low | Critical |
2,467,553,928 | rust | `--crate-type=rlib` + `-Cdebuginfo=2` + `-Zremap-cwd-prefix=.` is not reproducible on Windows | In https://github.com/rust-lang/rust/pull/128456#issuecomment-2289562440 we noticed that:
`--crate-type=rlib` + `-C debuginfo=2` + `-Z remap-cwd-prefix=.` on Windows caused the rlib to be unreproducible.
https://github.com/rust-lang/rust/blob/0f442e265c165c0a78633bef98de18517815150c/tests/run-make/reproducible-build/Makefile#L4-L5
Two source files:
1. https://github.com/rust-lang/rust/blob/0ba9db87e61adcfd9a978188f61c20d9b423a099/tests/run-make/reproducible-build/reproducible-build-aux.rs
2. https://github.com/rust-lang/rust/blob/0ba9db87e61adcfd9a978188f61c20d9b423a099/tests/run-make/reproducible-build/reproducible-build.rs
Steps:
- Let "root" test directory be called `$base_dir`.
- `rustc reproducible-build-aux.rs`
- `mkdir test`
- `cp reproducible-build.rs test/reproducible-build.rs`
- compiler 1: `rustc --crate-type=rlib -C debuginfo=2 -Zremap-cwd-prefix=. -L $cwd reproducible-build.rs`
- `mv libreproducible_build.rlib libfoo.rlib`
- `cd test`
- compiler 2: `rustc --crate-type=rlib -C debuginfo=2 -Zremap-cwd-prefix=. -L $base_dir --out-dir=$base_dir reproducible-build.rs`
- `cd $base_dir`
- check if `libreproducible_build.rlib` and `libfoo.rlib` are different
Marking as `S-needs-repro` as I'm not sure of the root cause or exact reproduction environment; the test case failed on `x86_64-msvc` ci job. I'm also not exactly sure of the intended semantics of `-Z remap-cwd-prefix=.`.
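To check whether the two rlibs are byte-for-byte identical, a small hashing sketch can stand in for a binary diff (plain Python, stdlib only; the file names are illustrative):

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file's bytes, read in chunks to handle large artifacts."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def same_artifact(a, b):
    """True when both files hash identically, i.e. the build reproduced."""
    return file_digest(a) == file_digest(b)

# e.g. same_artifact("libreproducible_build.rlib", "libfoo.rlib")
```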
| A-testsuite,O-windows,T-compiler,C-bug,A-reproducibility,S-needs-repro | low | Critical |
2,467,630,032 | pytorch | Dtensor shard uses more gpu memory than raw tensor | ### 🐛 Describe the bug
A DTensor shard uses more GPU memory than the raw tensor.
In my test, Shard GPU memory (21890 MiB) > Replicate GPU memory (17448 MiB) > raw tensor GPU memory (16804 MiB).
This has confused me for a long time.
```python
# torchrun --nproc_per_node=4 test_dtensor.py
import os
import torch
import torch.distributed as dist
from torch.distributed._tensor import DTensor, Shard, Replicate, distribute_tensor, distribute_module, init_device_mesh
mesh = init_device_mesh("cuda", (int(os.environ["WORLD_SIZE"]),))
def test_raw_tensor():
    big_tensor = torch.randn((4, 1024*1024*1024)).to("cuda")

def test_shard():
    big_tensor = torch.randn((4, 1024*1024*1024))
    my_dtensor = distribute_tensor(big_tensor, mesh, [Shard(dim=0)])

def test_replicate():
    big_tensor = torch.randn((4, 1024*1024*1024))
    my_dtensor = distribute_tensor(big_tensor, mesh, [Replicate()])
# test_raw_tensor()
test_shard()
# test_replicate()
print("wait for gpu inspect...")
import time
time.sleep(60)
```
### Versions
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.134-13.al8.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.161.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-123
Off-line CPU(s) list: 124-127
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3499.994
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l | oncall: distributed,triaged,module: dtensor | low | Critical |
2,467,658,017 | go | syscall: TestExecPtrace failures | ```
#!watchflakes
default <- pkg == "syscall" && test == "TestExecPtrace"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8739559683359464513)):
=== RUN TestExecPtrace
panic: test timed out after 27m0s
running tests:
TestExecPtrace (26m58s)
goroutine 41 gp=0xc000084c40 m=3 mp=0xc000080008 [running]:
panic({0x171740?, 0xc00011c5b0?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:804 +0x18c fp=0xc000112ee0 sp=0xc000112e20 pc=0x8574c
testing.(*M).startAlarm.func1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:2373 +0x334 fp=0xc000112fc0 sp=0xc000112ee0 pc=0xffc54
...
runtime.gopark(0xc000050798?, 0x2?, 0x0?, 0x0?, 0xc000050788?)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:435 +0x114 fp=0xc0000505d8 sp=0xc0000505a8 pc=0x85af4
runtime.selectgo(0xc000050798, 0xc000050784, 0x0?, 0x0, 0x0?, 0x1)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/select.go:335 +0x774 fp=0xc000050730 sp=0xc0000505d8 pc=0x60a04
runtime.ensureSigM.func1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/signal_unix.go:1060 +0x1ac fp=0xc0000507c0 sp=0xc000050730 pc=0x7d86c
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_ppc64x.s:1022 +0x4 fp=0xc0000507c0 sp=0xc0000507c0 pc=0x8dea4
created by runtime.ensureSigM in goroutine 35
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/signal_unix.go:1043 +0x118
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,OS-OpenBSD,NeedsInvestigation,compiler/runtime | low | Critical |
2,467,660,230 | pytorch | fx qat symmetric config zero_point not zero | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torchvision.models import resnet18
from torch.ao.quantization import (
    get_default_qconfig_mapping,
    get_default_qat_qconfig_mapping,
    QConfigMapping,
)
from torch.ao.quantization.qconfig_mapping import _get_symmetric_qnnpack_qat_qconfig_mapping
import torch.ao.quantization.quantize_fx as quantize_fx
from torch.ao.quantization import QConfig, default_observer, default_per_channel_weight_observer
from torch.ao.quantization.observer import (
    MinMaxObserver,
    MovingAverageMinMaxObserver,
    MovingAveragePerChannelMinMaxObserver,
    _PartialWrapper,
    default_fixed_qparams_range_0to1_observer,
    default_fixed_qparams_range_neg1to1_observer,
    default_weight_observer,
    default_placeholder_observer,
)
from torch.ao.quantization.fake_quantize import (
    FusedMovingAvgObsFakeQuantize,
    default_weight_fake_quant,
    FixedQParamsFakeQuantize,
)
import onnx
import copy
from torch.ao.quantization.backend_config import (
    BackendConfig,
    BackendPatternConfig,
    DTypeConfig,
    ObservationType,
    get_qnnpack_backend_config,
)
from torch.ao.quantization.qconfig import (
    default_reuse_input_qconfig,
    default_per_channel_symmetric_qnnpack_qat_qconfig,
    QConfigAny,
)
from typing import Any, Callable, Dict, Tuple, Union, List
from tqdm import tqdm

# build model, using ResNet18 on the CIFAR10 dataset
class CIFAR10ResNet(nn.Module):
    def __init__(self, num_classes=10):
        super(CIFAR10ResNet, self).__init__()
        resnet = resnet18(pretrained=True)
        resnet.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
        self.resnet = resnet

    def forward(self, x):
        return self.resnet(x)

# build dataset
transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
train_dataset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
model = CIFAR10ResNet()

# build example input
dummy_input = torch.randn(1, 3, 32, 32)
for i, (image, _) in tqdm(enumerate(train_loader), total=1):
    dummy_input = image.cuda()
    if i >= 1:
        break

model_to_quantize = copy.deepcopy(model)
# get backend; it is just the default backend config, I did not change anything
backend_config_s = get_qnnpack_backend_config()
# get qconfig_mapping; I want to use _get_symmetric_qnnpack_qat_qconfig_mapping
# because I do not want quint8, I need int8
qconfig_mapping = _get_symmetric_qnnpack_qat_qconfig_mapping()
# prepare model
model_prepared = quantize_fx.prepare_qat_fx(model_to_quantize, qconfig_mapping, dummy_input, backend_config=backend_config_s)

# training
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_prepared = model_prepared.to(device)
model_prepared.train()
optimizer = optim.SGD(model_prepared.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()
epochs = 2
for epoch in range(epochs):
    for inputs, labels in train_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model_prepared(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
    print(f'Epoch {epoch+1}, Loss: {loss.item()}')

# convert to onnx
device = torch.device("cpu")
model_prepared = model_prepared.to(device)
model_quantized = quantize_fx.convert_fx(model_prepared)
dummy_input = dummy_input.to(device)
model_quantized.eval()
torch.onnx.export(model_quantized, dummy_input, "quantized_model.onnx",
    verbose=False,
    input_names=['input'],
    output_names=['output'],
)
```
### Versions
absl-py 2.1.0
albucore 0.0.13
albumentations 1.3.1
annotated-types 0.7.0
certifi 2024.7.4
charset-normalizer 3.3.2
coloredlogs 15.0.1
contourpy 1.1.1
cycler 0.12.1
eval_type_backport 0.2.0
filelock 3.13.1
flatbuffers 24.3.25
fonttools 4.53.1
fsspec 2024.2.0
humanfriendly 10.0
idna 3.7
imageio 2.34.2
imgaug 0.4.0
importlib_resources 6.4.0
Jinja2 3.1.3
joblib 1.4.2
jsonpatch 1.33
jsonpointer 3.0.0
kiwisolver 1.4.5
lazy_loader 0.4
Mako 1.3.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.7.5
mdurl 0.1.2
mpmath 1.3.0
munch 4.0.0
networkx 3.0
numpy 1.24.4
nvidia-cublas-cu11 11.11.3.6
nvidia-cuda-cupti-cu11 11.8.87
nvidia-cuda-nvrtc-cu11 11.8.89
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cuda-runtime-cu12 12.6.37
nvidia-cudnn-cu11 9.1.0.70
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.3.0.86
nvidia-cusolver-cu11 11.4.1.48
nvidia-cusparse-cu11 11.7.5.86
nvidia-nccl-cu11 2.20.5
nvidia-nvtx-cu11 11.8.86
nvidia-pyindex 1.0.9
onnx 1.16.2
onnxruntime 1.18.1
onnxruntime-gpu 1.18.1
onnxsim 0.4.36
opencv-python 4.10.0.84
opencv-python-headless 4.10.0.84
packaging 24.1
pillow 10.2.0
pip 24.0
platformdirs 4.2.2
pretrainedmodels 0.7.4
prettytable 3.11.0
protobuf 5.27.3
pycuda 2024.1.2
pydantic 2.8.2
pydantic_core 2.20.1
Pygments 2.18.0
pyparsing 3.1.2
python-dateutil 2.9.0.post0
pytools 2024.1.13
pytorch-quantization 2.1.3
PyWavelets 1.4.1
PyYAML 6.0.2
qudida 0.0.4
requests 2.32.3
rich 13.7.1
scikit-image 0.21.0
scikit-learn 1.3.2
scipy 1.10.1
setuptools 72.1.0
shapely 2.0.5
six 1.16.0
sphinx_glpi_theme 0.6
sympy 1.12
tensorrt 10.3.0
tensorrt-cu12 10.3.0
tensorrt-cu12-bindings 10.3.0
tensorrt-cu12-libs 10.3.0
threadpoolctl 3.5.0
tifffile 2023.7.10
tomli 2.0.1
torch 2.4.0+cu118
torch2trt 0.5.0
torchaudio 2.4.0+cu118
torchnet 0.0.4
torchvision 0.19.0+cu118
tornado 6.4.1
tqdm 4.66.5
triton 3.0.0
typing_extensions 4.12.2
urllib3 2.2.2
visdom 0.2.4
wcwidth 0.2.13
websocket-client 1.8.0
wheel 0.43.0
zipp 3.19.2
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization | low | Critical |
2,467,741,944 | godot | "vkCreateGraphicsPipelines failed with error -3" errors after OS upgrade. | ### Tested versions
4.2.2
### System information
MacOS 12.7.6, NVIDIA GeForce GTX 680 2 GB (Metal supported), Forward+
### Issue description
After upgrading from MacOS 10.15 (Catalina) to 12.7.6 (Monterey), I'm getting these errors in Godot:
> vkCreateGraphicsPipelines failed with error -3 for shader 'SceneForwardClusteredShaderRD:9'.
servers/rendering/renderer_rd/pipeline_cache_rd.cpp:61 - Condition "pipeline.is_null()" is true. Returning: RID()
This render pipeline requires (0) bytes of push constant data, supplied: (16)
No render pipeline was set before attempting to draw.
vkCreateGraphicsPipelines failed with error -3 for shader 'SceneForwardClusteredShaderRD:9'.
servers/rendering/renderer_rd/pipeline_cache_rd.cpp:61 - Condition "pipeline.is_null()" is true. Returning: RID()
This render pipeline requires (0) bytes of push constant data, supplied: (16)
No render pipeline was set before attempting to draw.
And 3D scenes are not rendered anymore.
The hardware is unchanged, but I suspect the drivers are different since the OS was upgraded.
Is there something I can try to fix this before having to downgrade to my previous setup?
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | bug,platform:macos,topic:rendering | low | Critical |
2,467,743,419 | excalidraw | Text breaks out of shape when pasting in left aligned mode | 
| text wrapping | low | Minor |
2,467,747,661 | storybook | Introduce portable stories support for Web components | When running portable stories tests with a setup file that does:
```ts
import { setProjectAnnotations } from "@storybook/web-components"
setProjectAnnotations([])
```
the `renderToCanvas` annotation is used, which is the most important bit that provides PS support for a renderer.
This issue encompasses:
- Checking how much effort it actually is to add PS support for Web Components
- Seeing how feasible are the tests (because of shadow-dom)
- Updating the storybook add postinstall script to allow installation for this renderer
- Updating docs | feature request,web-components,portable stories | low | Minor |
2,467,658,017 | pytorch | [Optim][Dynamo] Tensor improperly assigned in Adagrad optimizer in dynamo | ### 🐛 Describe the bug
While testing optimizer behavior under torch dynamo, I found a small problem in Adagrad: **state["step"]** is assigned to CPU while the other state tensors are assigned to CUDA. I tested other optimizers, and only Adagrad behaves like that. You can see the detailed analysis below in **Error logs**.
You can find the "step" state initialization in torch/optim/adagrad.py on master at line 97:
```
for group in self.param_groups:
    for p in group["params"]:
        state = self.state[p]
        state["step"] = (
            torch.zeros(
                (),
                dtype=_get_scalar_dtype(is_fused=group["fused"]),
                device=p.device,
            )
            if group["fused"]
            else torch.tensor(0.0, dtype=_get_scalar_dtype())  # LINE 97
        )
        init_value = (
            complex(initial_accumulator_value, initial_accumulator_value)
            if torch.is_complex(p)
            else initial_accumulator_value
        )
        state["sum"] = torch.full_like(
            p, init_value, memory_format=torch.preserve_format
        )
```
It looks like this in 2.3.1:
```
for group in self.param_groups:
    for p in group["params"]:
        state = self.state[p]
        state["step"] = torch.tensor(0.0, dtype=_get_scalar_dtype())  # Here
        init_value = (
            complex(initial_accumulator_value, initial_accumulator_value)
            if torch.is_complex(p)
            else initial_accumulator_value
        )
        state["sum"] = torch.full_like(
            p, init_value, memory_format=torch.preserve_format
        )
```
Here is the similar code in ASGD in PyTorch 2.3.1:
```
def __setstate__(self, state):
    super().__setstate__(state)
    for group in self.param_groups:
        group.setdefault("foreach", None)
        group.setdefault("maximize", False)
        group.setdefault("differentiable", False)
        group.setdefault("capturable", False)
        for p in group["params"]:
            p_state = self.state.get(p, [])
            if len(p_state) != 0:
                if not torch.is_tensor(p_state['step']):
                    step_val = float(p_state["step"])
                    p_state["step"] = torch.tensor(step_val, dtype=_get_scalar_dtype(), device=p.device)  # Similar code
                if not torch.is_tensor(p_state["eta"]):
                    p_state["eta"] = torch.tensor(p_state["eta"], dtype=_get_scalar_dtype(), device=p.device)
                if not torch.is_tensor(p_state["mu"]):
                    p_state["mu"] = torch.tensor(p_state["mu"], dtype=_get_scalar_dtype(), device=p.device)
```
Note that there is no device info in Adagrad. To fix it, just add the `p.device` argument:
```
p_state["step"] = torch.tensor(step_val, dtype=_get_scalar_dtype(), device=p.device)
s["step"] = torch.tensor(float(s["step"]), dtype=_get_scalar_dtype(), device=p.device)
```
You can see the analysis below; if you agree with my change, I will open a PR (from PyTorch 2.0 to master) to change the code later.
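To make the proposed fix concrete, here is a toy plain-Python illustration (no torch; `FakeTensor` is a stand-in I made up) of why passing the parameter's device matters when initializing optimizer state:

```python
# A minimal stand-in for a tensor that carries a device attribute.
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device

def init_step_buggy(p):
    # mirrors torch.tensor(0.0, dtype=...): no device arg, so it lands on CPU
    return FakeTensor()

def init_step_fixed(p):
    # mirrors the patch: torch.tensor(..., device=p.device)
    return FakeTensor(device=p.device)

param = FakeTensor(device="cuda:0")
# buggy path: state["step"] ends up on cpu; fixed path: it follows the parameter
```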
### Error logs
**LOGS of Adagrad: from the _DispatchKeySet(CPU...)_ in the last 2 lines, you can see that the scalar 'step' was assigned to CPU while the others were assigned to CUDA:**
```
V0815 10:33:35.436159 140588284983104 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].param_groups[0]['params'][0], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=True, size=[2, 2], stride=[2, 1]) # has_sparse_grad, has_complex = self._init_group(group, params_with_grad, grads, state_sums, state_steps) # optim/adagrad.py:119 in step```
...
V0815 10:33:35.441536 140588284983104 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[0]]['sum'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[2, 2], stride=[2, 1]) # has_sparse_grad, has_complex = self._init_group(group, params_with_grad, grads, state_sums, state_steps) # optim/adagrad.py:119 in step```
...
V0815 10:33:35.443946 140588284983104 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[0]]['step'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[], stride=[]) # has_sparse_grad, has_complex = self._init_group(group, params_with_grad, grads, state_sums, state_steps) # optim/adagrad.py:119 in step```
...
V0815 10:33:35.446371 140588284983104 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[8]]['step'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[], stride=[]) # has_sparse_grad, has_complex = self._init_group(group, params_with_grad, grads, state_sums, state_steps) # optim/adagrad.py:119 in step
V0815 10:33:35.446728 140588284983104 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[9]]['step'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[], stride=[]) # has_sparse_grad, has_complex = self._init_group(group, params_with_grad, grads, state_sums, state_steps) # optim/adagrad.py:119 in step
```
**LOGS of ASGD: you can see that the scalar 'step' was assigned to CUDA:**
```
V0815 11:12:47.959811 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].param_groups[0]['params'][1], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=True, size=[2, 2], stride=[2, 1]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
...
V0815 11:12:47.962426 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].param_groups[0]['params'][0].grad, Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[2, 2], stride=[2, 1]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
...
V0815 11:12:47.964913 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[0]]['ax'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[2, 2], stride=[2, 1]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
V0815 11:12:47.965153 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[0]]['mu'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[], stride=[]) # has_complex =
...
V0815 11:12:47.969633 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[0]]['eta'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[], stride=[]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
...
V0815 11:12:47.971733 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[9]]['eta'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[], stride=[]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
V0815 11:12:47.971964 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[0]]['step'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[], stride=[]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
...
V0815 11:12:47.974065 139832451274560 torch/_dynamo/guards.py:1085] [2/1] [__guards] check_tensor(L['self'].state[list(L['self'].state.keys())[9]]['step'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[], stride=[]) # has_complex = self._init_group(group, params_with_grad, grads, mus, axs, etas, state_steps) # optim/asgd.py:117 in step
```
Personally I think ASGD has the right implementation; although Adagrad raises no error now, it does not behave as expected.
### Minified repro
```
import torch
import torch.nn as nn
import torch.optim as optim
import logging

torch._logging.set_logs(dynamo=logging.DEBUG, inductor=logging.DEBUG, bytecode=True, aot_graphs=True)

# set manual seed
torch.manual_seed(seed=2024)
torch.cuda.manual_seed(seed=2024)

class SimpleLinearModel(nn.Module):
    def __init__(self, input_size, output_size):
        super(SimpleLinearModel, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.linear(x)

input_size = 10
output_size = 1
model_graph = SimpleLinearModel(input_size, output_size).cuda()
criterion_graph = nn.MSELoss().cuda()
inputs_graph = torch.randn(5, input_size).cuda()
targets_graph = torch.randn(5, output_size).cuda()
optimizer_graph = optim.Adagrad(model_graph.parameters(), lr=0.01, foreach=False)

def opt_eager():
    optimizer_graph.step()

opt_graph = torch.compile(opt_eager, backend='inductor', fullgraph=False)

for epoch in range(20):
    print(f">> Epoch {epoch+1} begin")
    # graph optimizer
    optimizer_graph.zero_grad()
    outputs_graph = model_graph(inputs_graph)
    loss_g = criterion_graph(outputs_graph, targets_graph)
    loss_g.backward()
    opt_graph()  # opt_eager()
    epoch_graph_loss = round(loss_g.item(), 4)
    print(f'>>>>> Epoch [{epoch+1}/20], graph optimizer loss: {loss_g.item():.4f}')
```
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3100.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.3.1+cu118
[pip3] triton==2.3.1
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.3.1+cu118 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @mlazos | module: optimizer,triaged,oncall: pt2,module: dynamo,release notes: optim,module: pt2 optimizer | low | Critical |
2,467,768,211 | flutter | [WEB] FlutterView not notified about aspect ratio and devicePixelRatio changes in safari | ### Steps to reproduce
1. Create and run any flutter web app (counter example in my case)
2. Open it in safari
3. Move the Safari window from the Mac's monitor to a second monitor with a bigger pixelRatio.
(This step can also be reproduced using Safari dev tools: open adaptive design and change the pixelRatio to 2x or 3x.)
4. Trigger a UI rebuild (in my case by hovering the FAB)
### Expected results
Nothing unusual happens, app looks same as on macbook monitor
### Actual results
It looks like Flutter is not notified about the pixelRatio change when the Safari window is moved to an external monitor. When the window is moved and a UI rebuild is triggered, the UI is stretched. The same code sample works fine in Chrome: logs are fired in the debug console and the UI does not get stretched.
This bug can be reproduced on Safari 17.6 (19618.3.11.11.5) (latest),
17.5 (18618.2.12.111.5, 18618) and 16.6.1 (I wasn't able to test on other versions).
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return LayoutBuilder(
      builder: (c, size) {
        print("Size: ${MediaQuery.of(context).size.width},${MediaQuery.of(context).size.height} ");
        print("aspectRatio: ${MediaQuery.of(context).size.width},${MediaQuery.of(context).size.aspectRatio} ");
        print("devicePixelRatio: ${MediaQuery.of(context).devicePixelRatio}");
        print("-----------------------------------------------------");
        return MaterialApp(
          title: 'Flutter Demo',
          theme: ThemeData(
            colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
            useMaterial3: true,
          ),
          home: const MyHomePage(title: 'Flutter Demo Home Page'),
        );
      },
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineMedium,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: const Icon(Icons.add),
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[7723db4d-7aff-0d0e-0771-a5641bddd151_custom.mp4.zip](https://github.com/user-attachments/files/16622969/7723db4d-7aff-0d0e-0771-a5641bddd151_custom.mp4.zip)

https://github.com/user-attachments/assets/2ecac2ee-e390-4acd-9d68-2c4a0487c02b
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Log] Size: 1081,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1081,3.2658610271903323 (main.dart.js, line 17526)
[Log] devicePixelRatio: 1 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
[Log] Size: 1075,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1075,3.2477341389728096 (main.dart.js, line 17526)
[Log] devicePixelRatio: 2 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
[Log] Size: 1074,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1074,3.2447129909365557 (main.dart.js, line 17526)
[Log] devicePixelRatio: 2 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
[Log] Size: 1080,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1080,3.2628398791540785 (main.dart.js, line 17526)
[Log] devicePixelRatio: 1 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
[Log] Size: 1081,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1081,3.2658610271903323 (main.dart.js, line 17526)
[Log] devicePixelRatio: 1 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
[Log] Size: 1082,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1082,3.268882175226586 (main.dart.js, line 17526)
[Log] devicePixelRatio: 1 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
[Log] Size: 1083,331 (main.dart.js, line 17526)
[Log] aspectRatio: 1083,3.27190332326284 (main.dart.js, line 17526)
[Log] devicePixelRatio: 1 (main.dart.js, line 17526)
[Log] ----------------------------------------------------- (main.dart.js, line 17526)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 11.7.10 20G1427 darwin-x64, locale uk-UA)
• Flutter version 3.24.0 on channel stable at /Users/pm/Desktop/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (2 weeks ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
flutter config --android-sdk to update to that location.
[!] Xcode - develop for iOS and macOS (Xcode 13.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 13C100
✗ Flutter requires Xcode 14 or higher.
Download the latest version or update via the Mac App Store.
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see
https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
[✓] VS Code (version 1.92.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.95.20240801
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 11.7.10 20G1427 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.119
[✓] Network resources
• All expected network resources are available.
```
</details>
| platform-web,has reproducible steps,browser: safari-macos,P2,team-web,triaged-web,found in release: 3.24 | low | Critical |
2,467,771,475 | next.js | Error: You cannot use both an required and optional catch-all route | ### Link to the code that reproduces this issue
https://github.com/imCorfitz/next-modal-issue-example
### To Reproduce
1. Set up a Next.js app using intercepted parallel routes as per the example: https://github.com/vercel/nextgram
2. Assume you are building a site fully managed from a headless CMS, and move the root `page.tsx` file to a catch-all route e.g. `app/[[...params]]/page.tsx`
3. Create a `page.tsx` file in `app/photos` folder `export default function Page() { return <div>All Photos here</div>; }` (we could also add params logic to the catch all route, but this is simpler for testing).
4. Add a link to the modal linking to `all photos` - `<Link href="/photos">All photos</Link>`
5. Test the link. Result: the modal remains open when navigating to the photos route.
6. Read and follow documentation telling us to create a `[...catchall]` route in the `@modal` directory, with a page that returns `null`
7. See error: `Failed to reload dynamic routes: Error: You cannot use both an required and optional catch-all route at the same level ("[...catchall]" and "[[...params]]" )`.
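For reference, the conflicting structure from the steps above looks roughly like this (paths illustrative):

```
app/
├── [[...params]]/page.tsx      # optional catch-all for the CMS-managed pages
├── photos/page.tsx
└── @modal/
    └── [...catchall]/page.tsx  # required catch-all suggested by the docs
```

Since `@modal` is a parallel route slot rather than a URL segment, its `[...catchall]` appears to be resolved at the same level as `[[...params]]`, which is what triggers the error.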
### Current vs. Expected behavior
I believe this is to be expected - however, an alternative approach or a solution for closing the modal on navigation is needed in that case.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 22.5.1
npm: 10.8.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
_No response_ | bug,Parallel & Intercepting Routes | low | Critical |
2,467,775,317 | react | [React 19] `use()` promise from state causes "async/await is not yet supported in Client Components" error | ## Summary
https://stackblitz.com/edit/vitejs-vite-kbccdh?file=src%2FApp.tsx
```tsx
import { use, useState } from 'react';

export default function App() {
  const [p] = useState(() => new Promise((res) => setTimeout(res, 500)));
  use(p);
  return 'hello!';
}
```
Note: I do not have a parent Suspense boundary.
### Actual behaviour
An error is thrown
> Error: async/await is not yet supported in Client Components, only Server Components. This error is often caused by accidentally adding `'use client'` to a module that was originally written for the server.
### Expected behaviour
I think this should render "hello!". If I lift the promise out of the state initialiser into the module, there is no error - React holds off rendering the app until the promise resolves.
```tsx
// This works fine, even without a parent <Suspense>
const p = new Promise((res) => setTimeout(res, 500));

export default function App() {
  use(p);
  return 'hello!';
}
```
If it's expected to throw an error in this scenario, then I think the error message should be improved: it should log the error about a component suspending without a parent Suspense boundary. I do think this is inconsistent though, and the original example should just work. | Type: Bug,React 19 | low | Critical |
2,467,781,902 | next.js | Incorrect caching | ### Link to the code that reproduces this issue
https://github.com/mastoj/more-cache-experiments
### To Reproduce
1. Build app and start
2. Open http://localhost:3000/cache/a/b/c and http://localhost:3000/revalidate in two different tabs (tab A and B)
3. Refresh tab A over and over again until the first time stamp changes, which is as expected
4. Wait 5 seconds and click "Revalidate A" in tab B
5. Go back to refreshing tab A over and over again until both timestamp changes
(you can test on the deployed version here https://more-cache-experiments.vercel.app/cache/a/b, https://more-cache-experiments.vercel.app/revalidate)
### Current vs. Expected behavior
### Current behavior:
When you click "Revalidate A", the next revalidation time for the page seems to be based only on the revalidation times of the "refreshed" fetch calls. This leads to the second timestamp being updated later than it should be, which could lead to odd behavior when using long revalidation times.
### Expected behavior:
It's a little tricky, but I think the expected behavior should be to set the page's new revalidation time to the lower of the newly calculated value and the existing one, although I'm not 100% sure that is correct.
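A sketch of that merge rule (a hypothetical model, not Next.js internals):

```javascript
// Hypothetical model of the expected merge rule: after "Revalidate A", the
// page's next revalidation deadline should be the lower of the existing
// deadline and the one computed from the freshly revalidated fetches.
function nextRevalidateDeadline(existingDeadline, freshDeadline) {
  return Math.min(existingDeadline, freshDeadline);
}

// e.g. an existing deadline at t=1000 must not be pushed back to t=4000
// just because a fetch with a longer revalidate interval was refreshed:
console.log(nextRevalidateDeadline(1000, 4000)); // 1000
```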
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
Available memory (MB): 65536
Available CPU cores: 12
Binaries:
Node: 18.17.0
npm: 10.8.2
Yarn: 1.22.21
pnpm: 9.1.0
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next start (local), Vercel (Deployed)
### Additional context
_No response_ | bug | low | Minor |
2,467,784,297 | storybook | Introduce portable stories support for HTML | When running portable stories tests with a setup file that does:
```ts
import { setProjectAnnotations } from "@storybook/html"
setProjectAnnotations([])
```
the `renderToCanvas` annotation is used, which is the most important bit that provides PS support for a renderer.
This issue encompasses:
- Checking how much effort it actually is to add PS support for HTML
- Updating the storybook add postinstall script to allow installation for this renderer
- Updating docs | feature request,html,portable stories | low | Minor |
2,467,784,508 | storybook | Introduce portable stories support for Angular | When running portable stories tests with a setup file that does:
```ts
import { setProjectAnnotations } from "@storybook/angular"
setProjectAnnotations([])
```
the `renderToCanvas` annotation is used, which is the most important bit that provides PS support for a renderer.
This issue encompasses:
- Checking how much effort it actually is to add PS support for Angular (with analogjs)
- Updating the storybook add postinstall script to allow installation for this renderer
- Updating docs | feature request,angular,portable stories | low | Major |
2,467,784,717 | storybook | Introduce portable stories support for Preact | When running portable stories tests with a setup file that does:
```ts
import { setProjectAnnotations } from "@storybook/preact"
setProjectAnnotations([])
```
the `renderToCanvas` annotation is used, which is the most important bit that provides PS support for a renderer.
This issue encompasses:
- Checking how much effort it actually is to add PS support for Preact
- Updating the storybook add postinstall script to allow installation for this renderer
- Updating docs | feature request,preact,portable stories | low | Minor |
2,467,792,308 | pytorch | torch quantize error | ### 🐛 Describe the bug
```python
import torch
t = torch.tensor(torch.inf, dtype=torch.float32)
qt = torch.quantize_per_tensor(t, 1, 133, torch.quint8)
r = torch.int_repr(qt)
print(r) # tensor(0, dtype=torch.uint8)
```
In the code above, **torch.inf** is computed by other operators. When quantization is performed at this point, the result will be 0. However, the correct result should be 255, because inf has reached the upper bound of the quantization range, not the lower bound. The result should only be 0 when the quantization input is **-torch.inf**.
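For illustration, here is a plain-Python sketch of the affine quantization rule (a simplified model, not PyTorch's actual kernel) showing the saturation behavior this report expects:

```python
import math

def quantize_affine(x, scale, zero_point, qmin=0, qmax=255):
    """Simplified affine quantization: q = clamp(round(x / scale) + zero_point, qmin, qmax)."""
    if math.isinf(x):
        # Saturate: +inf should hit the upper bound, -inf the lower bound.
        return qmax if x > 0 else qmin
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

print(quantize_affine(float("inf"), scale=1.0, zero_point=133))   # 255 (expected for quint8)
print(quantize_affine(float("-inf"), scale=1.0, zero_point=133))  # 0
```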
### Versions
torch==2.1.2
using cpu backend qnnpack
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization | low | Critical |
2,467,812,211 | tauri | [bug] pointerType always falls back to "mouse" using the pointerdown event. | ### Describe the bug
When using the pointerdown event in JavaScript, the pointerType property is always set to "mouse" instead of "pen" or "touch" respectively. The same goes for the pressure property, which is always set to 0.5, making it impossible to distinguish between mouse, pen, and touch input.
I already tested the functionality of this event in other browsers to rule out a problem with my Linux setup; the event fired just fine and the property was correct.
### Reproduction
Add the event listener "pointerdown" and log the output of the event.pointerType property.
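A minimal handler for this repro might look like the following (the helper name is mine, just for illustration); on a correctly working platform, a stylus should log `pen` with a real pressure value:

```javascript
// Classify a pointer event. In the affected WebView this always reports
// "mouse" with pressure 0.5, regardless of the actual input device.
function describePointer(event) {
  return `${event.pointerType} pressure=${event.pressure}`;
}

// Browser wiring (run inside the Tauri webview):
// document.addEventListener("pointerdown", (e) => console.log(describePointer(e)));

// What a stylus event should report on a working platform:
console.log(describePointer({ pointerType: "pen", pressure: 0.7 })); // pen pressure=0.7
```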
This bug was found on Arch Linux on the Cinnamon desktop environment under Wayland.
### Expected behavior
When using an active stylus pen, returning the "pen" property and the "touch" property for a touchscreen.
### Full `tauri info` output
```text
[⚠] Environment
- OS: Arch Linux Unknown X64
✔ webkit2gtk-4.0: 2.44.2
✔ rsvg2: 2.58.2
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06) (Arch Linux rust 1:1.80.1-1)
✔ cargo: 1.80.1 (376290515 2024-07-16)
⚠ rustup: not installed!
If you have rust installed some other way, we recommend uninstalling it
then use rustup instead. Visit https://rustup.rs/
⚠ Rust toolchain: couldn't be detected!
Maybe you don't have rustup installed? if so, Visit https://rustup.rs/
- node: 22.6.0
- yarn: 1.22.22
- npm: 10.8.2
- bun: 1.1.24
[-] Packages
- tauri [RUST]: 1.7.1
- tauri-build [RUST]: 1.5.3
- wry [RUST]: 0.24.10
- tao [RUST]: 0.16.9
- @tauri-apps/api [NPM]: 1.6.0
- @tauri-apps/cli [NPM]: 1.6.0
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: upstream,platform: Linux | low | Critical |
2,467,832,030 | go | proposal: net/http: maximum size and number of parts in ParseMultipartForm | ### Proposal Details
Please provide a way to limit the maximum size and number of parts when using `http.ParseMultipartForm`.
`http.ParseMultipartForm` is handy when the application knows that the files being uploaded are small; in that case, one does not need to go through the hassle of `http.MultipartReader`. However, there appears to be no way to cause `http.ParseMultipartForm` to reject the upload if the parts are larger than a given size (say, a megabyte), and no way to reject posts with more than a given number of parts (say, 10).
I propose the addition of the following function (name to be reconsidered):
```
func (r *Request) ParseMultipartFormLimited(maxMemory int64, maxPartSize int64, maxParts int) error
```
This is just like `ParseMultipartForm`, except that:
* if `maxPartSize` is strictly larger than 0 and any of the parts is larger than `maxPartSize` bytes, the function returns `http.ErrMessageTooLarge`;
* if `maxParts` is strictly larger than 0 and there are more than `maxParts` parts, the function returns `http.ErrMessageTooLarge`.
If the function returns `http.ErrMessageTooLarge`, then the body of the request has been closed. | Proposal | low | Critical |
2,467,842,413 | PowerToys | Command not found | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce

### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,467,843,751 | angular | @angular/forms email validator doesn't support internationalized addresses | ### Which @angular/* package(s) are the source of the bug?
forms
### Is this a regression?
No
### Description
Email validator from `@angular/forms` incorrectly validates internationalized email addresses like `vrána@vřeští.eu` or `маша@пошта.рф` as invalid.
It is described as an implementation of the WHATWG HTML specification (with some minor adjustments), but contrary to the specification, it doesn't convert the internationalized email address to punycode before validating it against the regex.
> User agents may transform the [value](https://html.spec.whatwg.org/multipage/form-control-infrastructure.html#concept-fe-value) for display and editing; in particular, user agents should convert punycode in the domain labels of the [value](https://html.spec.whatwg.org/multipage/form-control-infrastructure.html#concept-fe-value) to IDN in the display and vice versa.
https://html.spec.whatwg.org/multipage/input.html#email-state-(type=email)
While the validator is not a user agent per se, its behavior should be as close as possible to the browsers (at least that's my expectation), and all the major browsers correctly implement the behavior of converting internationalized addresses to punycode before validation.
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
_No response_
### Anything else?
_No response_ | area: forms,forms: validators | low | Critical |
2,467,856,543 | tauri | [bug] [tauri v2][iOS] Xcode project directory is outdated because you have modified your "identifier" in the Tauri configuration. Please run `tauri ios init` and try again. | ### Describe the bug
Running **npm run tauri ios init** does not update the Xcode project directory.
**npm run tauri ios dev 'iPhone 15'** keeps throwing the same error:
Error Xcode project directory is outdated because you have modified your "identifier" in the Tauri configuration. Please run `tauri ios init` and try again.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
> relative_motion@1.0.67 tauri
> tauri info
[✔] Environment
- OS: Mac OS 14.6.1 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.11.0
- npm: 10.2.4
[-] Packages
- tauri [RUST]: 2.0.0-rc.2
- tauri-build [RUST]: 2.0.0-rc.2
- wry [RUST]: 0.41.0
- tao [RUST]: 0.28.1
- @tauri-apps/api [NPM]: 2.0.0-rc.0
- @tauri-apps/cli [NPM]: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: <removed>
- frontendDist: ../src
- bundler: Webpack
[-] iOS
- Developer Teams: <Removed>
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: iOS | low | Critical |
2,467,860,798 | rust | Run merged doc-tests by default in the same process and add opt-out attribute | https://github.com/rust-lang/rust/pull/126245 merged support in Edition 2024 for consolidating doctests into as few binaries as possible, with an opt-out of a `standalone` attribute.
I propose by default doc-tests should be run in parallel in the same process.
What's the motivation for the proposed change? For example, when working on the Rust standard library, I usually run the tests for `core` and `std/alloc`; an incremental run takes ~84s on my Zen 3 5900X machine. Of that, ~66s are spent running the ~4k `core` doc tests. That's on Linux; on Windows I expect this to be even slower, due to higher process-spawning overhead.
Merged doc tests are tied to the Rust 2024 edition because they can break behavior, so there is already precedent for a new default that is better in the common case but may break existing projects that opt into the new edition, requiring some work.
I believe (and some code analysis via grep.app would support this) that the majority of doc tests can be run in parallel in the same binary, so that should be the default. For the cases that depend on some kind of singleton, logging-library initialization, etc., they can be marked with an attribute as requiring a standalone process. That would make for two possible attributes: build isolated and run isolated. If this is deemed too complex, it's imaginable that there is only a single attribute that implies both build and run isolated.
Tagging @GuillaumeGomez | T-rustdoc,C-discussion | medium | Major |
2,467,871,445 | godot | Converting from a Panel to a PanelContainer can't be undone properly | ### Tested versions
- Reproducible in 3.5.3 and 3.6 (rc1)
### System information
Windows 10
### Issue description
Undoing a conversion from Panel to PanelContainer won't put the children back in their old positions.
Converting a Panel to a PanelContainer causes the position of a child Label (presumably all children, irrespective of type) to be controlled by the PanelContainer. If you then undo the conversion, the labels won't be put back in their old positions properly, causing data loss.
### Steps to reproduce
- Create a scene
- Add a root Control node
- Add a Panel child, make it bigger so that there's some room to work with
- Add a Label child to the Panel
- Put some text in the Label so you can see it
- Select and drag the Label to the bottom right of the Panel
- Convert the Panel from a Panel to a PanelContainer
- Press CTRL + Z (undo)
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Critical |
2,467,873,134 | godot | Android version freezes | ### Tested versions
4.2.2 Stable android
### System information
android 13
### Issue description
Editor freezes
https://youtu.be/9LvKUnFpb_U
### Steps to reproduce
1. The editor displays the window that says you don't have any projects and asks if you want to open the asset library.
2. Press cancel.
3. Click on "New" to create a new project.
4. Click "Browse" to select a folder in which to create the new project.
5. Click on "Create Folder".
6. Tap the screen in an empty area, next to the window that asks you to enter the folder name.
7. The window to enter the folder name disappears and you will not be able to do anything now; no touch input will register.
8. See the video.
### Minimal reproduction project (MRP)
... | bug,platform:android,topic:editor | low | Minor |
2,467,887,437 | godot | Shapecast issue | ### Tested versions
Tested this on:
Godot v4.2.2.stable.mono
Godot v4.3.stable.mono
Both have this issue.
### System information
Godot v4.2.2.stable.mono - Linux Mint 21 (Vanessa) - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (nvidia; 535.161.07) - AMD Ryzen 5 1600 Six-Core Processor (12 Threads)
### Issue description
Capsule shape size needs to be a minimum of 2.7f to detect a standard 1 m cube.
### Steps to reproduce
Start the provided MRP and press run. Change the exported floats on the caster node, `shapeWidth` and `shapeHight`: set them to something below 2.7f and then to 2.7f or above, and compare the results.
### Minimal reproduction project (MRP)
[Raycast isolation MRP.zip](https://github.com/user-attachments/files/16624657/Raycast.isolation.MRP.zip)
| topic:physics,needs testing,topic:3d | low | Minor |
2,467,894,906 | material-ui | [material-ui][Modal] Bring back transition component in test that fails with React 19 | While upgrading to React 19 ([PR](https://github.com/mui/material-ui/pull/42824)), a `Modal` component test started to fail ([failure example](https://app.circleci.com/pipelines/github/mui/material-ui/135645/workflows/fddf4f65-f8dc-488b-ac66-5ce83d40c941/jobs/731226?invite=true#step-106-44)). The `Modal` is not unmounted. The component works as expected when tested manually, but the combination of `Modal` + `Fade` (and probably any other transition component) makes the test fail.
https://github.com/mui/material-ui/blob/d00b50e76cc7f81f93307d151ab72a2ea6407b4a/packages/mui-material/src/Modal/Modal.test.js#L590-L612
The [issue](https://github.com/mui/material-ui/issues/12831) linked in the comment above the test points to a [PR](https://github.com/mui/material-ui/pull/16694) where a fix was implemented for a bug with the backdrop staying open. The fix consisted on:
```diff
-const [exited, setExited] = React.useState(!open);
+const [exited, setExited] = React.useState(true);
```
This fix was lost in translation once `Modal` was migrated to use `useModal` from Base UI. Last year, a user [reported the bug](https://github.com/mui/material-ui/issues/12831#issuecomment-1441222517) again.
Re-applying this fix solves the failing test, but breaks one `Drawer` component (which uses `Modal` under the hood) use case: when the `Drawer` is initially open, it won't run the exit animation the first time it is closed.
Note: Using `true` as the initial state is the approach used [across the codebase](https://github.com/search?q=repo%3Amui%2Fmaterial-ui%20%22const%20%5Bexited%2C%20setExited%5D%20%3D%20React.useState(true)%3B%22&type=code), with `Modal` being the only component using `!open` as the initial state.
We decided to remove the `Fade` component as child in the test to make it pass (see https://github.com/mui/material-ui/pull/42824), but we want to investigate further and bring it back.
**Search keywords**: modal, react 19, transition, test | test,component: modal,package: material-ui | low | Critical |
2,467,898,283 | pytorch | Support for variable number of dimensions in functorch.dim dims() and autonaming induced inconsistencies | ### 🚀 The feature, motivation and pitch
I propose expanding the API of functorch.dim.dims() to accept a list of dimension names as an argument.
### Describe current behavior
As of now, the syntax (dim_1, dim_2, ... dim_n) = dims(n) requires the user to specify in advance the number of dimensions and their names. A call of the type dimensions = dims(n) results in a tuple dimensions = (d1, d2, ... dn) with the names d1, ... dn fixed internally without any user choice. The code snippet below illustrates the issue. cc torchdim contributors: @zdevito @ae99
```
import torch
from functorch.dim import dims
# current behavior leads to confusing output.
batch_dims = dims(sizes = [10,5])
event_dims = dims(sizes = [3,2])
print(batch_dims) # produces (d0, d1)
print(event_dims) # produces (d0, d1)
print([dim.size for dim in batch_dims]) # produces [10,5]
print([dim.size for dim in event_dims]) # produces [3,2]
# This is confusing, as there now exist two dimensions named d0 with different properties.
# The issue becomes even clearer, when using the dimensions to define tensors
# with first class dims:
some_dims = dims(2)
unrelated_dims = dims(2)
A = torch.randn([2,2])
A_fc_1 = A[some_dims]
A_fc_2 = A[unrelated_dims]
print(A_fc_1) # produces tensor([[ ...]]) with dims=(d0,d1), sizes=(2,2)
print(A_fc_2) # produces tensor([[ ...]]) with dims=(d0,d1), sizes=(2,2)
# this makes it look like the same dimensions are indexing A_fc_1 and A_fc_2
# even though this is not the case, and I am unsure about the behavior of e.g. the
# einsum-style ops.
```
It seems as if there is currently no possibility to use the dims() function to create a variable number of dimensions with user-specified names.
### Describe desired behavior
```
# The following is envisioned after resolving the issue
batch_dims = dims(sizes = [10,5], names = ['bd_1', 'bd_2'])
event_dims = dims(sizes = [3,2], names = ['ed_1', 'ed_2'])
print(batch_dims) # produces (bd_1, bd_2)
print(event_dims) # produces (ed_1, ed_2)
# This would allow for a construction that produces a variable number of batch_dims
# depending on user input.
bd_sizes = [2,3,4]
bd_names = ['bd_1', 'bd_2', 'bd_3']
batch_dims = dims(sizes = bd_sizes, names = bd_names)
# Where in the above, it is to be understood that bd_sizes and bd_names reflect
# the input of some user that specifies the dimensionality of a problem (e.g. if
# some offset should be applied to some 1d, 2d, or 3d tensor).
```
### Describe benefit of proposed changes
The current behavior is undesirable when the number of dimensions is not known beforehand, as there is no method for naming individual dimensions when instantiating a variable number of dimensions at once. With the proposed API changes, it would be possible to instantiate and distinguish a variable number of dimensions. This would make it possible to create multiple dimensions at once without hardcoding their names and multiplicity.
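To make the requested semantics concrete, here is a minimal pure-Python sketch, independent of torch. `NamedDim` and `make_dims` are hypothetical illustrations of the proposed `dims(sizes=..., names=...)` behavior, not part of the actual functorch.dim API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NamedDim:
    """Stand-in for a first-class dimension carrying a user-chosen name."""
    name: str
    size: int

    def __repr__(self):
        # Mirror functorch.dim's printing style: dims render as bare names.
        return self.name


def make_dims(sizes, names=None):
    """Create a tuple of dims; names default to d0, d1, ... as today."""
    if names is None:
        names = [f"d{i}" for i in range(len(sizes))]
    if len(names) != len(sizes):
        raise ValueError("sizes and names must have the same length")
    return tuple(NamedDim(n, s) for n, s in zip(names, sizes))


batch_dims = make_dims([10, 5], names=["bd_1", "bd_2"])
event_dims = make_dims([3, 2], names=["ed_1", "ed_2"])
print(batch_dims)  # (bd_1, bd_2)
print(event_dims)  # (ed_1, ed_2)
# Unlike the current behavior, the two tuples are now distinguishable by
# name, so dims with different sizes no longer share the same label d0.
```

With names carried by the dims themselves, a variable number of batch dimensions can be built from user input without any hardcoded identifiers.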
### Alternatives
The Named Tensors API does not seem to be maintained anymore. Creating variable-length tuples of dimensions with named entries does not seem possible with the current API of the functorch.dim.dims() function (apart from iteratively executing generated strings of code).
### Additional context
I work on probabilistic programming and would like to create uniquely identifiable or at least named batch/event dimensions by passing lists of names and dimsizes to dims().
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 | triaged,module: functorch,module: first class dims | low | Minor |
2,467,947,255 | godot | Godot does not recognise versioning of editor_layout.cfg (i.e. running Godot v3 alongside v4) | ### Tested versions
v4.2.2.stable.official [15073afe3]
### System information
Godot v4.2.2.stable - Windows 10.0.19044 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 31.0.15.2756) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description
Godot 3 and Godot 4 use the same editor_layouts.cfg file. The problem is that Godot 4 types are being used that aren't supported in Godot 3, causing an error message.
Perhaps Godot's use of user application data (e.g. `appdata/roaming/godot`) should be versioned going forward (e.g. `appdata/roaming/godot4`)?
### Steps to reproduce
Running Godot 4 and adding a docked UI via EditorPlugin results in my `C:\Users\lernie\AppData\Roaming\Godot\editor_layouts.cfg` being saved with this content:
```
[lernie-1]
dock_filesystem_split=0
dock_filesystem_display_mode=0
dock_filesystem_file_list_display_mode=1
dock_split_3=-102
dock_split_4=125
dock_hsplit_1=0
dock_hsplit_2=470
dock_hsplit_3=-963
dock_hsplit_4=341
dock_1_selected_tab_idx=0
dock_2_selected_tab_idx=0
dock_3_selected_tab_idx=0
dock_4_selected_tab_idx=0
dock_5="Scene,History"
dock_5_selected_tab_idx=0
dock_6="FileSystem,Node"
dock_6_selected_tab_idx=1
dock_7="Inspector"
dock_7_selected_tab_idx=0
dock_8="Import,Control"
dock_8_selected_tab_idx=1
dock_floating={}
dock_filesystem_file_sort=0
dock_filesystem_selected_paths=PackedStringArray("res://addons/main_screen_lang/main_panel.tscn")
dock_filesystem_uncollapsed_paths=PackedStringArray("res://", "res://scenes/", "res://addons/", "res://addons/main_screen_lang/")
```
When I open Godot 3, it complains:
```
ERROR: ConfigFile parse error at C:/Users/lernie/AppData/Roaming/Godot/editor_layouts.cfg:25: Unexpected identifier: 'PackedStringArray'..
```
### Minimal reproduction project (MRP)
n/a
(The problem can be reproduced by including a PackedStringArray type as a value in `editor_layouts.cfg`. Godot 3 will raise the error.) | bug,topic:editor | low | Critical |
2,467,956,780 | kubernetes | static pod stuck in "Waiting for volumes to unmount for pod" for a longtime on single node by chance | ### What happened?
1. When updating the static pod (kube-apiserver) YAML, the static pod may be stuck for 20 minutes to 2 hours; some logs are shown below:
```
Aug 15 20:18:20 node1 hyperkube[1974327]: I0815 20:18:20.604518 1974327 actual_state_of_world.go:973] "Pod mounted volumes" uniquePodName=9d45620a-ae62-4ee3-bdb7-139998904a99 mountedVolume=[{MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-positions InnerVolumeSpecName:positions OuterVolumeSpecName:positions PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc006092940 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7a70 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/projected/9d45620a-ae62-4ee3-bdb7-139998904a99-kube-api-access-wd5vj InnerVolumeSpecName:kube-api-access-wd5vj OuterVolumeSpecName:kube-api-access-wd5vj PluginName:kubernetes.io/projected PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005345100 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b48 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-test-apiserver InnerVolumeSpecName:var-log-test-apiserver OuterVolumeSpecName:var-log-test-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc003817600 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ab8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-kube-apiserver InnerVolumeSpecName:var-log-kube-apiserver OuterVolumeSpecName:var-log-kube-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005f320c0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ae8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 
VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-oauth-apiserver InnerVolumeSpecName:var-log-oauth-apiserver OuterVolumeSpecName:var-log-oauth-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc00ae8edc0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b18 DeviceMountPath: SELinuxMountContext:}}]
Aug 15 20:18:29 node1 hyperkube[1974327]: I0815 20:18:29.606947 1974327 file.go:202] "Reading config file" path="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:29 node1 hyperkube[1974327]: I0815 20:18:29.607559 1974327 common.go:69] "Generated UID" pod="test-kube-apiserver/kube-apiserver" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:29 node1 hyperkube[1974327]: I0815 20:18:29.607570 1974327 common.go:73] "Generated pod name" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:29 node1 hyperkube[1974327]: I0815 20:18:29.607581 1974327 common.go:78] "Set namespace for pod" pod="test-kube-apiserver/kube-apiserver-node1" source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.381574 1974327 volume_manager.go:469] "Some volumes still mounted for pod" pod="test-kube-apiserver/kube-apiserver-node1" mountedVolumes=[audit-dir cert-dir resource-dir]
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.381589 1974327 kubelet.go:1976] "SyncTerminatedPod exit" pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7
Aug 15 20:18:31 node1 hyperkube[1974327]: E0815 20:18:31.381600 1974327 pod_workers.go:1256] "Error syncing pod, skipping" err="mounted volumes=[audit-dir cert-dir resource-dir]: timed out waiting for the condition" pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.381628 1974327 pod_workers.go:1293] "Processing pod event done" pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7 updateType="terminated"
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.381641 1974327 pod_workers.go:1188] "Processing pod event" pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7 updateType="terminated"
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.508902 1974327 kubelet.go:2261] "SyncLoop (SYNC) pods" total=5 pods=[test-kube-apiserver/kube-apiserver-node1 test-doko/doko-ingress-proxy-tp5d5 test-cnv/hci-compute-fileserver-5874bc85df-gpph2 test-vnet-operator/vnet-operator-controller-manager-6cdcbdb49-bcrkq test-cluster-node-tuning-operator/cluster-node-tuning-operator-5bbb5fd999-r9nkg]
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.508930 1974327 pod_workers.go:931] "Notifying pod of pending update" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 workType="sync"
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.509044 1974327 pod_workers.go:1138] "Pod cannot start yet" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.517993 1974327 kubelet.go:1965] "SyncTerminatedPod enter" pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.518002 1974327 kubelet_pods.go:1605] "Generating pod status" pod="test-kube-apiserver/kube-apiserver-node1"
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.518048 1974327 kubelet_pods.go:1615] "Got phase for pod" pod="test-kube-apiserver/kube-apiserver-node1" oldPhase=Pending phase=Pending
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.518100 1974327 status_manager.go:532] "updateStatusInternal" version=1 pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7 containers="(kube-apiserver state=waiting previous=<none>) (kube-apiserver-cert-regeneration-controller state=waiting previous=<none>) (kube-apiserver-cert-syncer state=waiting previous=<none>) (kube-apiserver-check-endpoints state=waiting previous=<none>) (kube-apiserver-insecure-readyz state=waiting previous=<none>) (setup state=waiting previous=<none>)"
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.518170 1974327 status_manager.go:552] "Status Manager: adding pod with new status to podStatusChannel" pod="test-kube-apiserver/kube-apiserver-node1" podUID=bb8823c6e574ba7c1215633f4bf0f7d7 statusVersion=1 status={Phase:Pending Conditions:[{Type:Initialized Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 20:18:31 +0800 CST Reason:ContainersNotInitialized Message:containers with incomplete status: [setup]} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 20:18:31 +0800 CST Reason:ContainersNotReady Message:containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 20:18:31 +0800 CST Reason:ContainersNotReady Message:containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 19:57:00 +0800 CST Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.253.166.72 PodIP:10.253.166.72 PodIPs:[{IP:10.253.166.72}] StartTime:2024-08-15 19:57:00 +0800 CST InitContainerStatuses:[{Name:setup State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:image.test.io/test-ceaedge/ceaedge@sha256:b161fe4e21adfa95e7620c778536dd656a9437482d89fa44941f05c0e101fe28 ImageID: ContainerID: Started:<nil>}] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} 
LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:image.test.io/test-ceaedge/ceaedge@sha256:b161fe4e21adfa95e7620c778536dd656a9437482d89fa44941f05c0e101fe28 ImageID: ContainerID: Started:0xc011eba16d} {Name:kube-apiserver-cert-regeneration-controller State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:image.test.io/test-ceaedge/ceaedge@sha256:382da2faa65c152f4d930f42f6d729fe219d337b243584f8ae13788829730024 ImageID: ContainerID: Started:0xc011eba16e} {Name:kube-apiserver-cert-syncer State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:image.test.io/test-ceaedge/ceaedge@sha256:382da2faa65c152f4d930f42f6d729fe219d337b243584f8ae13788829730024 ImageID: ContainerID: Started:0xc011eba16f} {Name:kube-apiserver-check-endpoints State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:image.test.io/test-ceaedge/ceaedge@sha256:382da2faa65c152f4d930f42f6d729fe219d337b243584f8ae13788829730024 ImageID: ContainerID: Started:0xc011eba1a0} {Name:kube-apiserver-insecure-readyz State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:image.test.io/test-ceaedge/ceaedge@sha256:382da2faa65c152f4d930f42f6d729fe219d337b243584f8ae13788829730024 ImageID: ContainerID: Started:0xc011eba1a1}] QOSClass:Burstable EphemeralContainerStatuses:[]}
Aug 15 20:18:31 node1 hyperkube[1974327]: I0815 20:18:31.518183 1974327 volume_manager.go:448] "Waiting for volumes to unmount for pod" pod="test-kube-apiserver/kube-apiserver-node1"
Aug 15 20:18:35 node1 hyperkube[1974327]: I0815 20:18:35.509137 1974327 actual_state_of_world.go:973] "Pod mounted volumes" uniquePodName=9d45620a-ae62-4ee3-bdb7-139998904a99 mountedVolume=[{MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-positions InnerVolumeSpecName:positions OuterVolumeSpecName:positions PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc006092940 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7a70 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/projected/9d45620a-ae62-4ee3-bdb7-139998904a99-kube-api-access-wd5vj InnerVolumeSpecName:kube-api-access-wd5vj OuterVolumeSpecName:kube-api-access-wd5vj PluginName:kubernetes.io/projected PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005345100 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b48 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-test-apiserver InnerVolumeSpecName:var-log-test-apiserver OuterVolumeSpecName:var-log-test-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc003817600 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ab8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-kube-apiserver InnerVolumeSpecName:var-log-kube-apiserver OuterVolumeSpecName:var-log-kube-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005f320c0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ae8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 
VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-oauth-apiserver InnerVolumeSpecName:var-log-oauth-apiserver OuterVolumeSpecName:var-log-oauth-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc00ae8edc0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b18 DeviceMountPath: SELinuxMountContext:}}]
Aug 15 20:18:35 node1 hyperkube[1974327]: I0815 20:18:35.509810 1974327 actual_state_of_world.go:973] "Pod mounted volumes" uniquePodName=9d45620a-ae62-4ee3-bdb7-139998904a99 mountedVolume=[{MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-test-apiserver InnerVolumeSpecName:var-log-test-apiserver OuterVolumeSpecName:var-log-test-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc003817600 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ab8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-kube-apiserver InnerVolumeSpecName:var-log-kube-apiserver OuterVolumeSpecName:var-log-kube-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005f320c0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ae8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-oauth-apiserver InnerVolumeSpecName:var-log-oauth-apiserver OuterVolumeSpecName:var-log-oauth-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc00ae8edc0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b18 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-positions InnerVolumeSpecName:positions OuterVolumeSpecName:positions PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc006092940 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7a70 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 
VolumeName:kubernetes.io/projected/9d45620a-ae62-4ee3-bdb7-139998904a99-kube-api-access-wd5vj InnerVolumeSpecName:kube-api-access-wd5vj OuterVolumeSpecName:kube-api-access-wd5vj PluginName:kubernetes.io/projected PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005345100 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b48 DeviceMountPath: SELinuxMountContext:}}]
Aug 15 20:18:46 node1 hyperkube[1974327]: I0815 20:18:46.508306 1974327 kubelet.go:2261] "SyncLoop (SYNC) pods" total=5 pods=[test-cnv/vic-image-registry-5c64975fdd-jww6l test-cnv/yum-repo-87d95477-4jfn7 test-kube-apiserver/kube-apiserver-node1 default/grafana-6598f98dd-hkzmf test-cluster-node-tuning-operator/cluster-node-tuning-operator-5bbb5fd999-r9nkg]
Aug 15 20:18:46 node1 hyperkube[1974327]: I0815 20:18:46.508354 1974327 pod_workers.go:931] "Notifying pod of pending update" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 workType="sync"
Aug 15 20:18:46 node1 hyperkube[1974327]: I0815 20:18:46.508440 1974327 pod_workers.go:1138] "Pod cannot start yet" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1
Aug 15 20:18:49 node1 hyperkube[1974327]: I0815 20:18:49.607098 1974327 file.go:202] "Reading config file" path="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:49 node1 hyperkube[1974327]: I0815 20:18:49.607697 1974327 common.go:69] "Generated UID" pod="test-kube-apiserver/kube-apiserver" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:49 node1 hyperkube[1974327]: I0815 20:18:49.607708 1974327 common.go:73] "Generated pod name" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:18:49 node1 hyperkube[1974327]: I0815 20:18:49.607717 1974327 common.go:78] "Set namespace for pod" pod="test-kube-apiserver/kube-apiserver-node1" source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:01 node1 hyperkube[1974327]: I0815 20:19:01.508509 1974327 kubelet.go:2261] "SyncLoop (SYNC) pods" total=7 pods=[test-controller-manager/vm-scheduler-848d87568f-xtdw7 test-kube-controller-manager/kube-controller-manager-node1 test-doko/doko-agent-6hkd8 default/hp-volume-kpbnp test-kube-apiserver/kube-apiserver-node1 test-logging/logging-operator-548564d9d9-rcncd test-monitoring/alertmanager-main-0]
Aug 15 20:19:01 node1 hyperkube[1974327]: I0815 20:19:01.508578 1974327 pod_workers.go:931] "Notifying pod of pending update" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 workType="sync"
Aug 15 20:19:01 node1 hyperkube[1974327]: I0815 20:19:01.508643 1974327 pod_workers.go:1138] "Pod cannot start yet" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1
Aug 15 20:19:09 node1 hyperkube[1974327]: I0815 20:19:09.607309 1974327 file.go:202] "Reading config file" path="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:09 node1 hyperkube[1974327]: I0815 20:19:09.607920 1974327 common.go:69] "Generated UID" pod="test-kube-apiserver/kube-apiserver" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:09 node1 hyperkube[1974327]: I0815 20:19:09.607932 1974327 common.go:73] "Generated pod name" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:09 node1 hyperkube[1974327]: I0815 20:19:09.607944 1974327 common.go:78] "Set namespace for pod" pod="test-kube-apiserver/kube-apiserver-node1" source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:12 node1 hyperkube[1974327]: I0815 20:19:12.508790 1974327 kubelet.go:2261] "SyncLoop (SYNC) pods" total=2 pods=[test-kube-apiserver/kube-apiserver-node1 test-vnet-operator/vnet-operator-controller-manager-6cdcbdb49-bcrkq]
Aug 15 20:19:12 node1 hyperkube[1974327]: I0815 20:19:12.508808 1974327 pod_workers.go:931] "Notifying pod of pending update" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 workType="sync"
Aug 15 20:19:12 node1 hyperkube[1974327]: I0815 20:19:12.508896 1974327 pod_workers.go:1138] "Pod cannot start yet" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1
Aug 15 20:19:27 node1 hyperkube[1974327]: I0815 20:19:27.508926 1974327 kubelet.go:2261] "SyncLoop (SYNC) pods" total=6 pods=[test-cnv/hci-compute-fileserver-5874bc85df-gpph2 test-vnet-operator/vnet-operator-controller-manager-6cdcbdb49-bcrkq test-base-image-registry-operator/base-image-registry-69ff9bd484-64527 test-node-label-operator/node-label-operator-6cd767b464-q4dz6 test-kube-controller-manager/kube-controller-manager-node1 test-kube-apiserver/kube-apiserver-node1]
Aug 15 20:19:27 node1 hyperkube[1974327]: I0815 20:19:27.509009 1974327 pod_workers.go:931] "Notifying pod of pending update" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 workType="sync"
Aug 15 20:19:27 node1 hyperkube[1974327]: I0815 20:19:27.509099 1974327 pod_workers.go:1138] "Pod cannot start yet" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1
Aug 15 20:19:29 node1 hyperkube[1974327]: I0815 20:19:29.607409 1974327 file.go:202] "Reading config file" path="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:29 node1 hyperkube[1974327]: I0815 20:19:29.607998 1974327 common.go:69] "Generated UID" pod="test-kube-apiserver/kube-apiserver" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:29 node1 hyperkube[1974327]: I0815 20:19:29.608009 1974327 common.go:73] "Generated pod name" pod="test-kube-apiserver/kube-apiserver-node1" podUID=d6c287e864be7616ad8256abb9e925b1 source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:29 node1 hyperkube[1974327]: I0815 20:19:29.608017 1974327 common.go:78] "Set namespace for pod" pod="test-kube-apiserver/kube-apiserver-node1" source="/etc/kubernetes/manifests/kube-apiserver-pod.yaml"
Aug 15 20:19:40 node1 hyperkube[1974327]: I0815 20:19:40.511401 1974327 actual_state_of_world.go:973] "Pod mounted volumes" uniquePodName=9d45620a-ae62-4ee3-bdb7-139998904a99 mountedVolume=[{MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-test-apiserver InnerVolumeSpecName:var-log-test-apiserver OuterVolumeSpecName:var-log-test-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc003817600 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ab8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-kube-apiserver InnerVolumeSpecName:var-log-kube-apiserver OuterVolumeSpecName:var-log-kube-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005f320c0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ae8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-oauth-apiserver InnerVolumeSpecName:var-log-oauth-apiserver OuterVolumeSpecName:var-log-oauth-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc00ae8edc0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b18 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-positions InnerVolumeSpecName:positions OuterVolumeSpecName:positions PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc006092940 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7a70 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 
VolumeName:kubernetes.io/projected/9d45620a-ae62-4ee3-bdb7-139998904a99-kube-api-access-wd5vj InnerVolumeSpecName:kube-api-access-wd5vj OuterVolumeSpecName:kube-api-access-wd5vj PluginName:kubernetes.io/projected PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005345100 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b48 DeviceMountPath: SELinuxMountContext:}}]
Aug 15 20:19:40 node1 hyperkube[1974327]: I0815 20:19:40.512098 1974327 actual_state_of_world.go:973] "Pod mounted volumes" uniquePodName=9d45620a-ae62-4ee3-bdb7-139998904a99 mountedVolume=[{MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-test-apiserver InnerVolumeSpecName:var-log-test-apiserver OuterVolumeSpecName:var-log-test-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc003817600 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ab8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-kube-apiserver InnerVolumeSpecName:var-log-kube-apiserver OuterVolumeSpecName:var-log-kube-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005f320c0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7ae8 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-var-log-oauth-apiserver InnerVolumeSpecName:var-log-oauth-apiserver OuterVolumeSpecName:var-log-oauth-apiserver PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc00ae8edc0 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b18 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 VolumeName:kubernetes.io/host-path/9d45620a-ae62-4ee3-bdb7-139998904a99-positions InnerVolumeSpecName:positions OuterVolumeSpecName:positions PluginName:kubernetes.io/host-path PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc006092940 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7a70 DeviceMountPath: SELinuxMountContext:}} {MountedVolume:{PodName:9d45620a-ae62-4ee3-bdb7-139998904a99 
VolumeName:kubernetes.io/projected/9d45620a-ae62-4ee3-bdb7-139998904a99-kube-api-access-wd5vj InnerVolumeSpecName:kube-api-access-wd5vj OuterVolumeSpecName:kube-api-access-wd5vj PluginName:kubernetes.io/projected PodUID:9d45620a-ae62-4ee3-bdb7-139998904a99 Mounter:0xc005345100 BlockVolumeMapper:<nil> VolumeGidValue: VolumeSpec:0xc0032f7b48 DeviceMountPath: SELinuxMountContext:}}]
```
2. Restarting kubelet will start the static pod quickly.
3. Related PRs and issues:
- https://github.com/kubernetes/kubernetes/pull/113145
- https://github.com/kubernetes/kubernetes/issues/117745
- https://github.com/kubernetes/kubernetes/pull/117751
- https://github.com/kubernetes/kubernetes/pull/116995
### What did you expect to happen?
The static pod should start promptly after its YAML is edited.
### How can we reproduce it (as minimally and precisely as possible)?
This happens by chance; it can currently be reproduced on K8s 1.25 and 1.29 on a single node.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
k8s 1.25
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,sig/node,triage/accepted | medium | Critical |