| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,565,201,435 | go | net: TestUDPIPVersionReadMsg failures | ```
#!watchflakes
default <- pkg == "net" && test == "TestUDPIPVersionReadMsg"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8735065407922958673)):
=== RUN TestUDPIPVersionReadMsg
udpsock_test.go:636: write udp4 127.0.0.1:53084->127.0.0.1:53084: sendto: no buffer space available
--- FAIL: TestUDPIPVersionReadMsg (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,565,203,263 | pytorch | [torch2.4][Distributed Checkpoint] new flatten logic is error-prone | ### 🐛 Describe the bug
Hi team,
the new [state dict flatten logic](https://github.com/pytorch/pytorch/commit/6f1e3a6bf73327a351dc8a8c08635bd727b3134f) introduced in torch 2.4 can cause undesired key mismatches.
Let's say that before training any batch, `state_dict()` has an item like this:
`state_dict["my_key"] = []`
After training a few batches, I store some information and it becomes:
`state_dict["my_key"] = [{"key_1":1}, {"key_2":2}]`
With distributed checkpointing, when saving, the state dict keys will be flattened as:
`my_key.0.key_1`
`my_key.1.key_2`
But when loading before training any batch, since `state_dict["my_key"] = []`, after flattening the key is
`my_key`
so it causes a mismatch here: https://github.com/pytorch/pytorch/blob/v2.4.0/torch/distributed/checkpoint/default_planner.py#L316.
What the user wants in the above case is simply to set `[{"key_1":1}, {"key_2":2}]` as a whole into `state_dict["my_key"]`.
So this basically requires, recursively:
1. if the state dict **value** is a map, its keys must match between the save and load state dicts
2. if the state dict **value** is a list, its number of elements must match
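The mismatch described above can be reproduced with a minimal sketch of the flatten behavior (a hypothetical re-implementation for illustration only, not the actual DCP internals):

```python
def flatten(obj, prefix=""):
    """Recursively flatten dicts/lists into dotted keys, the way the
    torch 2.4 DCP flattening described above behaves (illustrative only)."""
    out = {}
    if isinstance(obj, dict) and obj:
        for key, value in obj.items():
            out.update(flatten(value, f"{prefix}.{key}" if prefix else str(key)))
    elif isinstance(obj, list) and obj:
        for index, value in enumerate(obj):
            out.update(flatten(value, f"{prefix}.{index}"))
    else:
        # Leaves and *empty* containers keep the bare key; this asymmetry
        # is what breaks loading before the first batch.
        out[prefix] = obj
    return out

# Saved after training: keys are my_key.0.key_1 and my_key.1.key_2
print(sorted(flatten({"my_key": [{"key_1": 1}, {"key_2": 2}]})))
# Loaded before training: state_dict["my_key"] = [] flattens to just my_key
print(sorted(flatten({"my_key": []})))
```

The saved and loaded key sets differ, so the planner's key comparison fails exactly as described.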
### Versions
NA
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn | oncall: distributed,module: distributed_checkpoint,oncall: distributed checkpointing | low | Critical |
2,565,210,546 | PowerToys | Workspaces - close existing apps | ### Description of the new feature / enhancement
The function is superb! It can open multiple applications without any issue. But it would be great to add a close function for all applications that were opened using Workspaces.
### Scenario when this would be used?
To simplify closing all apps that were opened previously.
### Supporting information
_No response_ | Needs-Triage,Product-Workspaces | low | Minor |
2,565,290,957 | tauri | [feat] Remove duplicated packages and implement things ourselves | ### Describe the problem
Right now, a Tauri project contains 400+ to 700+ packages, not counting the frontend packages, which push this number even higher. See the dependency graph below of a simple `"Hello, World!"` Tauri 2 project:

Some of Tauri's dependencies, like [dirs](https://docs.rs/dirs/latest/dirs/), provide simple things that we could just implement ourselves.
The amount of packages Tauri relies on makes it vulnerable to possible supply chain attacks, and now that we have a stable release for Tauri 2, I think it's important to address that.
### Describe the solution you'd like
* Remove duplicated packages;
* Implement things ourselves;
* Consolidate dependencies.
### Alternatives considered
_No response_
### Additional context
https://www.memorysafety.org/blog/reducing-dependencies-in-sudo/ | type: feature request,priority: 3 low | low | Minor |
2,565,340,019 | tensorflow | Request to bring back GPU compatibility checks for TFLite `model_analyzer` | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf_nightly == 2.19.0.dev20241003
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
While using the nightly version I discovered that the GPU compatibility checks were deprecated for the [model_analyzer](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/python/analyzer_wrapper/model_analyzer.cc#L443) tool in TF Lite, following [this PR](https://github.com/tensorflow/tensorflow/pull/74830).
Currently the code for checking GPU compatibility is deprecated, but the output still prints "Your model is compatible with GPU delegate" (because there are essentially no checks). IMO this is confusing. I would suggest changing the output to "Skipping GPU compatibility check as it is deprecated", or just deprecating the `gpu_compatibility` boolean flag.
The previous logic was handy for my use case to expose non-compatible operators beforehand and apply the required tunings manually on the `.tflite` graph, so I'm curious why it was deprecated and what the plans are moving forward. Thanks!
### Standalone code to reproduce the issue
```shell
Attached PR regarding deprecation: https://github.com/tensorflow/tensorflow/pull/74830
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:feature,comp:lite | low | Critical |
2,565,388,356 | godot | godot4.4 dev3 HttpClient, ERROR: BUG: Unreferenced static string to 0: servers. | ### Tested versions
godot4.4 dev3
### System information
Android
### Issue description
C# CODE
```csharp
public override async void _Ready()
{
    var h1 = new System.Net.Http.HttpClient();
    var t1 = await h1.GetAsync("https://www.xxxxxx.com/l01_01_md5.txt");
}
```
```
2024-10-04 12:05:12.155 561-949 BufferQueueProducer surfaceflinger I [SurfaceView - com.example.demo1/com.godot.game.GodotApp#0](this:0xb400007e97625128,id:35,api:1,p:6970,c:561) queueBuffer: fps=0.69 dur=1451.93 max=1451.93 min=1451.93
2024-10-04 12:05:12.431 6970-7075 godot com.example.demo1 E ERROR: BUG: Unreferenced static string to 0: servers
2024-10-04 12:05:12.431 6970-7075 godot com.example.demo1 E at: unref (core/string/string_name.cpp:142)
2024-10-04 12:05:12.432 6970-7075 godot com.example.demo1 E ERROR: BUG: Unreferenced static string to 0: ShaderCompilation
2024-10-04 12:05:12.432 6970-7075 godot com.example.demo1 E at: unref (core/string/string_name.cpp:142)
2024-10-04 12:05:12.439 6970-7037 libc com.example.demo1 A FORTIFY: pthread_mutex_lock called on a destroyed mutex (0x766f930f80)
2024-10-04 12:05:12.439 6970-7075 godot com.example.demo1 E ERROR: BUG: Unreferenced static string to 0: current_animation_changed
2024-10-04 12:05:12.439 6970-7075 godot com.example.demo1 E at: unref (core/string/string_name.cpp:142)
2024-10-04 12:05:12.540 906-1049 InputDispatcher system_server W channel 'a250195 com.example.demo1/com.godot.game.GodotApp (server)' ~ Consumer closed input channel or an error occurred. events=0x9
2024-10-04 12:05:12.540 906-1049 InputDispatcher system_server E channel 'a250195 com.example.demo1/com.godot.game.GodotApp (server)' ~ Channel is unrecoverably broken and will be disposed!
2024-10-04 12:05:12.541 561-949 BufferQueueProducer surfaceflinger I [com.example.demo1/com.godot.game.GodotApp#0](id:23100000022,api:1,p:6970,c:561) disconnect(): api=1
2024-10-04 12:05:12.541 561-949 BufferQueueProducer surfaceflinger I [SurfaceView - com.example.demo1/com.godot.game.GodotApp#0](id:23100000023,api:1,p:6970,c:561) disconnect(): api=1
2024-10-04 12:05:12.543 906-2666 ActivityManager system_server I Process com.example.demo1 (pid 6970) has died: fg TOP
2024-10-04 12:05:12.543 906-2747 WindowManager system_server I WIN DEATH: Window{a250195 u0 com.example.demo1/com.godot.game.GodotApp}
```
### Steps to reproduce
```csharp
public override async void _Ready()
{
    var h1 = new System.Net.Http.HttpClient();
    var t1 = await h1.GetAsync("https://www.xxxxxx.com/l01_01_md5.txt");
}
```
### Minimal reproduction project (MRP)
```csharp
var h1 = new System.Net.Http.HttpClient();
var t1 = await h1.GetAsync("https://www.xxxxxx.com/l01_01_md5.txt");
``` | bug,platform:android,needs testing,topic:network,topic:dotnet,crash | low | Critical |
2,565,590,901 | angular | Dynamically-created component not removed when zoneless is combined with animations | ### Is this a regression?
- [x] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
Not entirely sure, but around 18.2.x
### Description

The overlay content has not been removed.
### Reproduction
StackBlitz link:
Steps to reproduce:
1. git clone [https://github.com/keatkeat87/ng-mat-overlay-detach-issue.git](https://github.com/keatkeat87/ng-mat-overlay-detach-issue.git)
2. ng serve
3. click open and then close.

If ZoneChangeDetection is used, there is no problem.
```
export const appConfig: ApplicationConfig = {
  providers: [provideZoneChangeDetection({ eventCoalescing: true }), provideAnimations()] // works
  // providers: [provideExperimentalZonelessChangeDetection(), provideAnimations()] // does not work
};
```
Only ZonelessChangeDetection has the problem.
If we manually call detectChanges(), then it works.
```
export class ModalContainerComponent {
  private readonly overlayRef = inject(OverlayRef);
  private cdk = inject(ChangeDetectorRef);

  close() {
    this.overlayRef.detach();
    this.cdk.detectChanges(); // with a manual detectChanges it works
    // without detectChanges it does not work
  }
}
```
### Expected Behavior
The overlay content should be removed.
### Actual Behavior
The overlay content has not been removed.
### Environment
Angular CLI: 18.2.7
Node: 20.11.1
Package Manager: yarn 1.22.19
OS: win32 x64
Angular: 18.2.7
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.7
@angular-devkit/build-angular 18.2.7
@angular-devkit/core 18.2.7
@angular-devkit/schematics 18.2.7
@angular/cdk 18.2.6
@angular/material 18.2.6
@schematics/angular 18.2.7
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10 | area: animations,hotlist: components team,area: core,P2,bug,core: zoneless | low | Critical |
2,565,689,336 | material-ui | [docs] Box `component` removal migration help | ### Related page
https://mui.com/material-ui/migration/upgrade-to-v6/#breaking-changes-affecting-types
### Kind of issue
Unclear explanations
### Issue description
What to do if I have:
```
const StyledBox = styled(Box)<{ $customProp: string }>`
color: white;
`;
```
?
### Context
Example of a real 2-y/o piece of code that I'm trying to migrate from v5 to v6:
```tsx
const FeatureDot = styled(Box)<{ $direction: "row" | "row-reverse" }>`
position: absolute;
${({ theme, $direction }) =>
$direction === "row"
? css`
left: ${theme.spacing(dotHorizontalOffset.xs)};
${theme.breakpoints.up("sm")} {
left: ${theme.spacing(dotHorizontalOffset.sm)};
}
${theme.breakpoints.up("lg")} {
left: ${theme.spacing(dotHorizontalOffset.lg)};
}
`
: css`
right: ${theme.spacing(dotHorizontalOffset.xs)};
${theme.breakpoints.up("sm")} {
right: ${theme.spacing(dotHorizontalOffset.sm)};
}
${theme.breakpoints.up("lg")} {
right: ${theme.spacing(dotHorizontalOffset.lg)};
}
`};
top: calc(50% - ${({ theme }) => theme.spacing(dotSize / 2)});
height: ${({ theme }) => theme.spacing(dotSize)};
width: ${({ theme }) => theme.spacing(dotSize)};
border-radius: ${({ theme }) => theme.spacing(99)};
transition: ${({ theme }) =>
theme.transitions.create("background-color", {
easing: theme.transitions.easing.easeOut,
duration: theme.transitions.duration.shortest
})};
`;
```
That is later used as
```tsx
<FeatureDot
bgcolor={dotReached ? progressColor : "grey.500"}
$direction={direction}
component="span"
ref={dotRef}
/>
```
So I have the combination of
a) custom prop
b) system prop (will be sx)
c) component.
If I use `styled("span")`, then I lose the system props (or `sx`).
If I use `styled(Box) as typeof Box`, then I lose the type for the custom prop.
What do I do?
**Search keywords**: Box component styled custom props | docs,component: Box,support: docs-feedback | low | Minor |
2,565,736,764 | ui | [bug]: Deployment build error for input and use-toast | ### Describe the bug
Works fine in the dev environment but fails to build on Vercel.
I tried deploying on Vercel using Next.js 14 and shadcn/ui, but I encountered an error during the build. Should I disable these two rules?

### Affected component/components
input and toast
### How to reproduce
pnpm run build
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11, pnpm
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,565,777,710 | PowerToys | Enhance a sentence with AI | ### Description of the new feature / enhancement
I have a suggestion: enhance the writing of a sentence with AI in any writing box (Word, browser, Excel, PowerPoint, ...). I suggest the combination Control + C + C.
### Scenario when this would be used?
To enhance writing in any writing program/app.
### Supporting information
_No response_ | Idea-New PowerToy | low | Minor |
2,565,791,492 | godot | The speed and loop properties did not take effect when converting the sprite image to SpriteFrames object and saving it | ### Tested versions
Godot_v4.3-stable_win64
### System information
window10
### Issue description
The speed and loop properties do not take effect when converting sprite images to a SpriteFrames object and saving it.
### Steps to reproduce
```gdscript
func generate_animations(model_dictionary, save_path, animation_name):
    var groupDic: Dictionary = {}
    for filename in model_dictionary:
        var sp = filename.split("_")
        groupDic[sp[1]] = 1
    var sprite_frames = SpriteFrames.new()
    sprite_frames.remove_animation("default")
    for direction in groupDic.keys():
        var animation_direction = animation_name + "_" + direction
        var texture_arr = file_helper.get_atlas_texture_by_prefix(animation_direction, model_dictionary)
        sprite_frames.add_animation(animation_direction)
        if texture_arr.size() > 0:
            sprite_frames.set_animation_speed(animation_direction, texture_arr.size() / 3)
        else:
            print("No frames found for animation direction: " + animation_direction)
        var loop: bool = false
        if animation_name == "stand" || animation_name == "walk":
            loop = true
        sprite_frames.set_animation_loop(animation_direction, loop)
        for texture in texture_arr:
            sprite_frames.add_frame(animation_direction, texture)
    # save
    var result = ResourceSaver.save(sprite_frames, save_path + "/" + animation_name + ".tres", ResourceSaver.FLAG_REPLACE_SUBRESOURCE_PATHS)
    print(str(result))
    # debug
    for anim_name in sprite_frames.get_animation_names():
        print("Animation: " + anim_name + ", Speed: " + str(sprite_frames.get_animation_speed(anim_name)) + ", Loop: " + str(sprite_frames.get_animation_loop(anim_name)))
```
### Minimal reproduction project (MRP)
The `generate_animations` function above (see *Steps to reproduce*) is the minimal reproduction script.
_Bugsquad edit:_ Fix codeblock formatting. | bug,needs testing,topic:animation | low | Critical |
2,565,820,022 | ollama | mixtral:8x22b model does not work with system prompt only | ### What is the issue?
The `mixtral:8x22b-instruct` model does not work correctly when only the system prompt is provided. In such cases, an empty prompt is sent, leading to irrelevant output.
This behavior may be related to the internal handling of prompts or recent changes made in the system prompt handling, as referenced in #4228.
mixtral:8x22b-instruct template: https://ollama.com/library/mixtral:8x22b-instruct/blobs/138b3322e0da
Mixtral 8x22B template in docs: https://github.com/ollama/ollama/blob/main/docs/template.md#mistral
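The empty prompt is consistent with a Mistral-style template folding the system text into the first user turn. A minimal Python sketch of that behavior (a hypothetical re-implementation for illustration; the real template is a Go text/template rendered by Ollama):

```python
def render_mistral(messages):
    """Render a chat with a Mistral-style template: system text is folded
    into the first user turn. Illustrative only, not Ollama's engine."""
    system = "\n\n".join(m["content"] for m in messages if m["role"] == "system")
    users = [m["content"] for m in messages if m["role"] == "user"]
    if not users:
        # No user turn: nothing gets wrapped in [INST]...[/INST],
        # so the rendered prompt comes out empty.
        return ""
    body = f"{system}\n\n{users[0]}" if system else users[0]
    return f"[INST] {body}[/INST]"

print(repr(render_mistral([{"role": "system", "content": "Hi"}])))
print(repr(render_mistral([{"role": "system", "content": "Hi"},
                           {"role": "user", "content": "Capital of Japan?"}])))
```

With a system message only, the sketch renders an empty string, matching the `prompt=""` seen in the Ollama debug log.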
#### Steps to Reproduce
1. Input the following system prompt only into the `mixtral:8x22b-instruct` model.
```bash
curl http://localhost:11434/api/chat -d '{
"model": "mixtral:8x22b-instruct-v0.1-q4_K_M",
"stream": false,
"messages": [
{
"role": "system",
"content": "Hello, I am Ollama. I am here to help you with your questions. What would you like to know?\n\nWhat is the capital of Japan?"
}
]
}'
```
2. The Ollama log shows that the `prompt` field is empty.
`prompt=\"\"\r\n`
```bash
{"log":"time=2024-10-04T08:03:17.462Z level=DEBUG source=routes.go:1417 msg=\"chat request\" images=0 prompt=\"\"\r\n","stream":"stdout","time":"2024-10-04T08:03:17.462471779Z"}
```
3. The output is unrelated to the input content.
```bash
{"model":"mixtral:8x22b-instruct-v0.1-q4_K_M","created_at":"2024-10-04T08:03:40.224484198Z","message":{"role":"assistant","content":",\n.\nA new study by the University of Maryland and Johns Hopkins Medicine has found ..."}
```
#### Expected Behavior
The `mixtral:8x22b-instruct` model should work with system prompt only, similar to other models like `gemma2:27b`.
#### Actual Results
mixtral:8x22b-instruct, system prompt only: NG
<details><summary>Results</summary>
<p>
input:
```bash
curl http://localhost:11434/api/chat -d '{
"model": "mixtral:8x22b-instruct-v0.1-q4_K_M",
"stream": false,
"messages": [
{
"role": "system",
"content": "Hello, I am Ollama. I am here to help you with your questions. What would you like to know?\n\nWhat is the capital of Japan?"
}
]
}'
```
Ollama log:
```bash
{"log":"time=2024-10-04T08:03:17.462Z level=DEBUG source=routes.go:1417 msg=\"chat request\" images=0 prompt=\"\"\r\n","stream":"stdout","time":"2024-10-04T08:03:17.462471779Z"}
```
Output:
```bash
{"model":"mixtral:8x22b-instruct-v0.1-q4_K_M","created_at":"2024-10-04T08:03:40.224484198Z","message":{"role":"assistant","content":",\n.\nA new study by the University of Maryland and Johns Hopkins Medicine has found ..."}
```
This result is not related to the input.
</p>
</details>
mixtral:8x22b-instruct, system prompt + user prompt: OK
<details><summary>Results</summary>
<p>
Input
```bash
curl http://localhost:11434/api/chat -d '{
"model": "mixtral:8x22b-instruct-v0.1-q4_K_M",
"stream": false,
"messages": [
{
"role": "system",
"content": "Hello, I am Ollama. I am here to help you with your questions. What would you like to know?"
},
{
"role": "user",
"content": "What is the capital of Japan?"
}
]
}'
```
Ollama log:
```bash
{"log":"time=2024-10-04T08:08:04.464Z level=DEBUG source=routes.go:1417 msg=\"chat request\" images=0 prompt=\"[INST] Hello, I am Ollama. I am here to help you with your questions. What would you like to know?\\n\\nWhat is the capital of Japan?[/INST]\"\r\n","stream":"stdout","time":"2024-10-04T08:08:04.464656724Z"}
```
Output:
```bash
{"model":"mixtral:8x22b-instruct-v0.1-q4_K_M","created_at":"2024-10-04T08:08:06.219305367Z","message":{"role":"assistant","content":" The capital of Japan is Tokyo. It's also the country's largest city and one of the world's most populous metropolitan areas."},"done_reason":"stop","done":true,"total_duration":1853131385,"load_duration":11283091,"prompt_eval_count":38,"prompt_eval_duration":348092000,"eval_count":32,"eval_duration":1361906000}
```
</p>
</details>
Other model example `gemma2:27b`, system prompt only: OK
<details><summary>Results</summary>
<p>
Input
```bash
curl http://localhost:11434/api/chat -d '{
"model": "gemma2:27b-instruct-q4_K_M",
"stream": false,
"messages": [
{
"role": "system",
"content": "Hello, I am Ollama. I am here to help you with your questions. What would you like to know?\n\nWhat is the capital of Japan?"
}
]
}'
```
Ollama log:
```bash
{"log":"time=2024-10-04T08:12:01.403Z level=DEBUG source=routes.go:1417 msg=\"chat request\" images=0 prompt=\"\u003cstart_of_turn\u003euser\\nHello, I am Ollama. I am here to help you with your questions. What would you like to know?\\n\\nWhat is the capital of Japan? \u003cend_of_turn\u003e\\n\u003cstart_of_turn\u003emodel\\n\"\r\n","stream":"stdout","time":"2024-10-04T08:12:01.40376108Z"}
```
Output:
```bash
{"model":"gemma2:27b-instruct-q4_K_M","created_at":"2024-10-04T08:10:22.460284079Z","message":{"role":"assistant","content":"The capital of Japan is **Tokyo**. ๐ฏ \n"},"done_reason":"stop","done":true,"total_duration":6504495197,"load_duration":6057933332,"prompt_eval_count":42,"prompt_eval_duration":66685000,"eval_count":13,"eval_duration":376322000}
```
</p>
</details>
#### Potentially Related Issues
- https://github.com/ollama/ollama/issues/5547
- https://github.com/ollama/ollama/issues/6176
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12 | bug | low | Critical |
2,565,821,163 | go | net/http: TimeoutHandler prevents use of ResponseController | ### Go version
go version go1.23.1 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/marius/Library/Caches/go-build'
GOENV='/Users/marius/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/marius/workspace/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/marius/workspace/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.1/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.1/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/marius/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/9c/rl48j6r51g37vvhq4vdywz280000gn/T/go-build1640827140=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I have an `http.Handler` that uses `http.NewResponseController` on the response writer to control the response behavior. If my handler is wrapped in an `http.TimeoutHandler`, calls to `ResponseController.SetReadDeadline` and `ResponseController.EnableFullDuplex` return an error saying `feature not supported`.
https://go.dev/play/p/Bar7glioldQ contains an example for reproduction. Unfortunately, running the code in the playground often fails with an i/o timeout, but locally this problem does not appear.
### What did you see happen?
Any interaction with the response controller functions returns an error saying `feature not supported`. Running the linked code locally produces the following output:
```
2024/10/04 10:35:12 feature not supported
Hello, client
```
The reason seems to be that `TimeoutHandler` wraps the response writer in its own `timeoutWriter` (https://cs.opensource.google/go/go/+/refs/tags/go1.23.2:src/net/http/server.go;l=3658), which neither implements the methods used by `ResponseController` (https://pkg.go.dev/net/http#NewResponseController) nor provides an `Unwrap` method returning the original response writer. The easiest solution might be to add an `Unwrap` method to `timeoutWriter`.
I also checked if other handler wrappers in `net/http` interfere with ResponseController. `AllowQuerySemicolons`, `StripPrefix`, and `MaxBytesHandler` don't cause issues as they don't wrap the response writer.
### What did you expect to see?
`ResponseController` should be usable together with `TimeoutHandler`. Running the linked code should produce the following output:
```
Hello, client
``` | NeedsInvestigation | low | Critical |
2,565,829,803 | ollama | "CUDA error: an illegal memory access was encountered" during image processing via minicpm-v:latest model | ### What is the issue?
The crash happens while processing a _png_ image via the minicpm-v:latest (1862d7d5fee50b69f6e3007ec999145ab38f17688251495f87669eb81e9dd97c) model. It occurs only on one specific _png_ image; other images are processed without any issues.
The same image was processed without issues via llava:latest (8dd30f6b0cb19f555f2c7a7ebda861449ea2cc76bf1f44e262931f45fc81d081).
Example of request:
POST: http://OLLAMA_IP:11434/api/generate
Body:
```
{
"model": "minicpm-v:latest",
"prompt": "Extract all visible text to markdown blocks.",
"images": [
"iVBORw0KGgoAAAANSUhEUgAAA2oAAACUCAYAAADiW9r/AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAB2tSURBVHhe7d1Na9xWuMDxp5e7KBS6VBICl5JNuxmXhHZxoYULDnjo5kKXpba/Q0GfQtDvELt02WUZgw0X2l1DQj2bdlO6CYm1LBS66z3P0TnS0cvMHI3ksez8f+BkRiNppPMmPTpHmnf+NQQAAAAAMBn/4f4HAAAAAEwEgRoAAAAATAyBGgAAAABMDIEaAAAAAEwMgRoAAAAATAyBGgAAAABMDIEaAAAAAEwMgRoAAAAATAyBGgAAAABMDIEaAAAAAEwMgRoAAAAATAyBGgAAAABMzDv/Gu71KJanqSwepJI+TdwUL5eLLJNF7t4mc0nTfWnOBQAAAABvu1F71PLzTE4u3ZsaF6TdP5LM/J9lqcxlIdnp0n0OAAAAAPBGCtQ0EEslO/PdZQ2XC1nkicwPZm5CIvvHc0nM9IsViwAAAADA22qEQM0PaZzJUXYkPhQL5fmVic2eyCwc55jM5EmSy4tLIjUAAAAACI0QqCWyn+pwxu4gTV29NsHY/aTzfrT8tQniAAAAAAAlnvoIAAAAABNDoAYAAAAAE0OgBgAAAAATs5NA7d6DRORNLl2PDUke3HOvAAAAAABqJ4FakphgLH8hyzBSy5fyIk/kyR4/eQ0AAAAAod0MfdybyzzJZfHswvWq5XLxbCG5mb5PnAYAAAAANTu6R00f4Z/KXBaSmf/TNDOv5pIernqgPwAAAAC8vd7513CvAQAAAAATwFMfAQAAAGBiCNQAAAAAYGII1AAAAABgYgjUAAAAAGBiCNQAAAAAYGII1AAAAABgYgjUAAAAAGBiCNQAAAAAYGII1AAAAABgYgjUAAAAAGBi3vnXcK+3c3ki6elSJJlLmu5LYifmcpFlssjtm8Zn05GfZ5KdmY2c6Patotv9TI4lfepS+xr3Y8i6m9tp5ReSPRM5vrH0XspJemL+FZkdZnK0V0wdYnmaysuP/brGXz+67Dadr6OOddYPjKaZvlpPTy7Ni70jyQ5ndhr6Gzsd6+3nFPLpLrXh1blYcpCaunDl9i2ReZrKvlYNfw63zt7/yOzy/+5ImnS7De1x8zh05epKkbcd263nW9nClIIgv/30Gz0P2+QOnketyosI19Kjlp8/q4I0dT+ZaGG4ffQgZivqxHVupx4QbEG9K/Qg6E4qgB5uSz2+rUjf24D2E9PwVrUXd+487O673qGPekUsy7h6iQmZyZGWSfN3F68KAsB10ivcHNdvi0SS+8Wre8mKy+X+PM3+HZkjZKHMZ5vX/+Wm4iYlyb3iRWznR7JvgjLNw349OJiWkYc+zmRpCkWtN81odV2u7AKsuunDZfRqR3jVrdnN2z0saXPXafdywXSna/nmNlXr6N6HVftcX8+6LtFqvZVi/tllsB/HIs/KqyUd6yu3w9kwvKR/GnVv52efifz8c21ikI9VXlmrviuZmYPIUpa6Gj9Pc3+69rmmXS5q+3jwSrJyGMi6dTW22dIg8LG8DNb/+NcgfzvSOj7/vfb39iufyq9Dv28urzK/Pv/94Xd0bFPPMuRtqsfl53smny9NPutEv+7Gd84Oj0ROq3Ru1zOnsW1rv2ONsIwcPX4hJ2XZ1zyvTm6q9QfrLLdJ09JW0M563Mr3YLmZ2b/lpS5Uzbu6DnoddXHFdtXXX5WJmUmnpUknVa6/NVQq3P6wfqXy8Cz4/lY617dPy8P8ddZOP6Nedlakl7W6nUzOgrz5+GWwD/U8VJvTtq25TLMdC8vGkZxU+9OYT9X31+iYp5UPrfSN2I9mfVmbtoVmGQ/rxvD2U+v19eVTXB6EZXhNG76hfm7Ow3YadG1/fT2b86epSCNxy/n60U7LQn3fq22JTBNn7TaX6Wa2IX0oi7L8uW2qlclV2xloluGO7VndZq1uL3R7N9X
plZrf11huuzpjuH0Vd+z061lZ7oMyqustzxUDK4dNqog2Zm05r31/eK6x4Xtr5a061quu5frWkXL+zvOARr3syvO1x0CnNo/5/PCJvDit8mLd9jXdzMNETJQ/t4Uqlxe2cXPypbywb2fy2H3eNTQiNwf/eiKNRzOwWZDttPNqWqtgqnzhKptptB8XObD8tdrG/PKF2RsjeSIz+3HXvuWm0epYdyzdhrDBsusLCpwWnNrnhp12UZ+2QUwaRdOK3DhY2f1oTlOmfNggTekVpWYDbek+Z3KxxaZUeeg10q+nVjkxaV2l0Tb53z6wq+b3rC+foeb+mfcmiKinfWOercpQz3rsG04jeXDPzNjO52XQcJf6bFvzO2KZdKyCNNWdJ+PK3Umguid6YXxzHew6ATFqZdBrr9/zQZq2aw/1yrym58ZyVFieNr6/8d3Nz7U8tMv+NbSTrX0weRiUkc1p29Y6oVO2Pe4qe0GAoBrzxdRf/b5WPjT2a+N+TLz9vI58Km3IA6+VF2a5mPrTWk7V0iemLR+n7CdPTZqYZYoTw0T2U1NWNwU/a6xPkz7brPkZlj99r9vWnNZRhzwtI5vafH3frCvm001ls1edDnTWTV2u85ymZ51xPWQ+UCnydv3FiW3FtjGtfG3tk2rvV3d739Y81uty25W3Dl3nGs18auZdTHlqzWM+t0HadkYO1IpGID1wR3qNUFcUotnHRTORv1yWG++DmeRgbhuR6l43jUC1Aps/E5XbtZuEiMqIPkwmLew69SpO/fvys4XLhKW8bMxT7u+bvNj+vSduG1+6ZUxD/rLYy9mBi8wvF8W+abTuv8tdqViedTUERdr6tNSrCl3d2XplpZZO5faaAn1WbE05j1l+rjOZgljsd4SNadS9nf/7v+Z/fyXG7bM2NkuzjE0GO1+xvmLZpSw6Drjltuu63rwq0qlMQ7c/ZmrtAkAPtfVbPv2adP/99/nlGge/IG99epTlvXf+G3kuV/ZFlfZ+vdVFgc3ls8bV0XJ/zXfkzWnmW81kY7sy1L8eV/NpGSkvcnTtU6nvttW/o49y/UEd6yqr3eLqcYvPE3fFeWM75S96dZXB10UpqgnX7yZZQd3SbfTlrJ0GvowEWvXSfLcv/+U+dK0rcC3tZFAOy3L/orgIFHUMaLt6XWxJ2Y75ffHrrVnz/VH115Q3dwLZ+j6TXjbIitmPqbef69Jpy3yqrMuDQFf9Cc5ZSrX6E5GHMW35NseIXViXJj232ZffMn20EWlNe+XSqimuzV/fZq1uL/rVaceUy2fNulmWcxN0tk7wg+3aWGeGs4Gd/x6XR93Hv4g2JqachzqOMWFnxkpd5c3X8cF1pH4eEHM+GnMMLPerY9u38Z/u/93be2wbtKUt9PumYvhgJpEne7rbVXCTHBxXB1rbG7ewJ3c2MfZ8lg9XnRDq1a7Uvqq4TEiKQmmZStm+ImmYeZ4kWoCKgjy73+wpDDLSRuuL4rXnGia/y9FMoSh6KvV1ItpPUG5b2Vtpvvs0ldbeFTtXvFkjLo2Kd5tVDZLtXTkrXnvFCWWwsnD/1P2H5lNzgPBpqJXCVLp993Fv4fp9+XRvt1EG5ca9B+ZVcPKzVf6XeerT3jWQ9kMvonwG/AWTcH+7pllblaEt6vHePDixDpcvLuDY10/N67PgClbfbat9Rw+1OlbtQ6usjsocTA6q9Imrg3rl1dWEziuAofr6Q2EZVvbgZF/plcyOHrtAtWwxysAf+K0gSOhKT+9a2skw730b4t5u2775+u3bMT3Ql+nfFH6/eT1PzEmAWbEGR/tPI+qvmVf326SwzP1JVpjfRn4esR8Tbz+vI59Ka/PATTfWteGVZv2JOUfY3JZfS9kfwXjHNX+uZ17pxW17ktw1bYXINr9PmxXqVaedslyaunBcBkAmGDTlY6FtsL14P7N5b41dZ8YS0cZElfNSWEeK8+Bl0M6vE5a3mTkHSExwplun2Tu4joTtgEn5mPPRzeWpWk/3tvd
3g7+jVg1vtFfvysjYDw2sNG+CtRVo53J59ab4X7taU9O4rn5yTn34Y1l5bUWM4Q40O9R5lb03n0Yj6boqEzINx7G/gqNsZS3ypt1Lc5usyn/TMJZX3VRxkNf9bQ4FWF8+PTecraZrWpxNZWjcenxPHvZYfJzybTRu4r6ZtmgdXwersrE+SOvHDoex640/4emS59350S89d91Orm7fkqfHZe+Qsgd6m06NYTRGfaht9bCHQkT99UHu1tx+3Nn2c/NxaH0eDBXTBse05evs/hxhuOY214dZF7qmbce3+du2WX3qdEvzYR/2QsMtEtXG9DnX2FbjfMRd4Igzch1x56NbHwN7bXvdDQZqpqkKhj9euMg4eTxrFejiykjFd0lfG72yqFe3Gn+269IHlIbv+iy7ewPh8MeF296ypyJUdgeHfxFDobZWdfXW/moHjQjr0mgLZTdy+Bdx064fo23/gn240aEhffTN/3D+sstdDyKNoQDGuvI5zHZlaLt6vOok6kpedS4+UvlepXHx4NrbonXW1MH83A//qtJjyNCLQjUcpiyHQRkcw8r03HU72bt9K4ZQteerhsx49YsGpjyFQUVM/e1z0rdhP259+7nlcWhtHgwV2wZvasu9XZf9Mex0m9e1+UParPg63dK8yDz44sqOxbQxOznXaFx0MecQnZf4Ri5vq89HY8rTiovIq7Y9wo0GamUPU24y3F7BC7pZza77Xqn87Fl1c3Me3NvQDHxcV6d9WZ6oxCuDq3J7DB02FFxFKa8EmwNE0WVdDc2qscN49MXSdfFWwx5Vue2msPt9KyP1DTerqt49BHY4pr7ITUHzKVNdxYu9ghqTRqHO7SzzKRgKWp4YVFdpNl1ZbKWXVhx/shGUhV1oBiGbbJX/zXTWoQhlI1FcPYoun9vYqgxtUY8bfC9LeALTqt8jle+NwnK/bh+CA7Uf996ldz02YupgGfCUQzuq4RhbKw805sTIDWOphqD107kPQXp6u24n+7ZvhXabpQd6f2LX2oZgX6qTnWLIV1T9La/MhieM9bIedSy7xe3ndvkUWJMHQ0XlYURbPrTs34SdbnNMm9+zzarqas867YTl8llZN826/PZFj6ragXV1PKKNudZzjUB40ag8lrqRd+OWt4jz0ajyVF1c7tz2LdzcPWqWCcxMBL4sI9R6Ia7uQzEV0SRWbQSqOaiUVwW1cthEKArS1mxw5e6baYx5Lu+Psb9joRnWMSa2wRYi/9S0ZgUNxsU39y0c17qSbejNvKbxmLtJ64XjpItlS2VFixCTRqFgO4/cJLNkkU96YuDH7bbSMwzau5XloysvdtwgFmOa9cpeXG5slf/lMh3l3DVcxe+sxJXP/rYrQ7H1eJVq+XX1e0D51vl1OTNfzKOX2/dDVA18Wedj0z+sHyPWwVyDWzukvJEWQ7iDt4agrXzsq3MfzAFOEz88mo3YTj52k9bq275ZVdnrurehPJkoddUDF1BH1d/quNn6vrKsR+zHJNvPdHOeqq3yKbQmDwaKaoMj2nKblxvLvi6vbWORdtfTY9XD0PraS0yb37PNCtvjXnXaCcple7nmcNfNNOiwT54ML6KMJjgPa617cxuTXF7nuUagY/3lyLuRy1t5H1nrO/35aFx5Wr2e7dxsj5pRXoEw2oW/uFmxeQJjn8gSFixTOdJGQdPfXtimWHedMNmbSH3QYAp1rXtXT+78U2yaTwMKDnbtfTONjKkk4RhotemEbdv9sroqpG5/xMlpaGMaGZ3b2XVSrnnX6jrWfI/Zz+75WuXj2pg8PG5ue6xt8r97GZuvPg/7lM9tbFWGIuvxSu18XlW+xijfa5n1HYXp29y2Zvqbb54ftsvIoHpsbKqDOqSt9rktI247yqfR9mX2tVZX9eTQb0f/JwXq76yFZdm+bw1z3X07GdO+tXSVvVr6BMy8rbzxyzbLz4r6W3t6m9co65v34za3n1vmk7cuD4aKysOItnzFPJvK/s3a8TZ35VutHsS
[base64-encoded PNG image data truncated]"
],
"stream": false,
"keep_alive": 0
}
```
Ollama log:
```
ollama | [GIN] 2024/10/04 - 08:41:03 | 200 | 31.22349636s | 192.168.100.20 | POST "/api/generate"
ollama | time=2024-10-04T08:41:23.009Z level=WARN source=sched.go:137 msg="multimodal models don't support parallel requests yet"
ollama | time=2024-10-04T08:41:23.168Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-5d1301f5-9e77-83c1-9e2f-eff9c34008d5 library=cuda total="7.4 GiB" available="1.2 GiB"
ollama | time=2024-10-04T08:41:23.808Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0 gpu=GPU-5d1301f5-9e77-83c1-9e2f-eff9c34008d5 parallel=1 available=7857242112 required="5.8 GiB"
ollama | time=2024-10-04T08:41:23.808Z level=INFO source=server.go:103 msg="system memory" total="15.6 GiB" free="3.1 GiB" free_swap="0 B"
ollama | time=2024-10-04T08:41:23.810Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[7.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="112.0 MiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="3.5 GiB" memory.weights.repeating="3.1 GiB" memory.weights.nonrepeating="425.3 MiB" memory.graph.full="303.2 MiB" memory.graph.partial="728.5 MiB"
ollama | time=2024-10-04T08:41:23.812Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --mmproj /root/.ollama/models/blobs/sha256-f8a805e9e62085805c69c427287acefc284932eb4abfe6e1b1ce431d27e2f4e0 --no-mmap --parallel 1 --port 33077"
ollama | time=2024-10-04T08:41:23.813Z level=INFO source=sched.go:449 msg="loaded runners" count=1
ollama | time=2024-10-04T08:41:23.813Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
ollama | time=2024-10-04T08:41:23.813Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
ollama | INFO [main] build info | build=10 commit="3f6ec33" tid="140630251470848" timestamp=1728031283
ollama | INFO [main] system info | n_threads=4 n_threads_batch=4 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140630251470848" timestamp=1728031283 total_threads=8
ollama | INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="33077" tid="140630251470848" timestamp=1728031283
ollama | ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ollama | ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ollama | ggml_cuda_init: found 1 CUDA devices:
ollama | Device 0: Tesla P4, compute capability 6.1, VMM: yes
ollama | time=2024-10-04T08:41:24.065Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
ollama | llama_model_loader: loaded meta data with 22 key-value pairs and 339 tensors from /root/.ollama/models/blobs/sha256-262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0 (version GGUF V3 (latest))
ollama | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama | llama_model_loader: - kv 0: general.architecture str = qwen2
ollama | llama_model_loader: - kv 1: general.name str = model
ollama | llama_model_loader: - kv 2: qwen2.block_count u32 = 28
ollama | llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
ollama | llama_model_loader: - kv 4: qwen2.embedding_length u32 = 3584
ollama | llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 18944
ollama | llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 28
ollama | llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 4
ollama | llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
ollama | llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
ollama | llama_model_loader: - kv 10: general.file_type u32 = 2
ollama | llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
ollama | llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
ollama | llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,151666] = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama | llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,151666] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama | llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["ฤ ฤ ", "ฤ ฤ ฤ ฤ ", "i n", "ฤ t",...
ollama | llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151644
ollama | llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 151645
ollama | llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 128244
ollama | llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
ollama | llama_model_loader: - kv 20: tokenizer.chat_template str = {% for message in messages %}{% if lo...
ollama | llama_model_loader: - kv 21: general.quantization_version u32 = 2
ollama | llama_model_loader: - type f32: 141 tensors
ollama | llama_model_loader: - type q4_0: 197 tensors
ollama | llama_model_loader: - type q6_K: 1 tensors
ollama | llm_load_vocab: special tokens cache size = 25
ollama | llm_load_vocab: token to piece cache size = 0.9309 MB
ollama | llm_load_print_meta: format = GGUF V3 (latest)
ollama | llm_load_print_meta: arch = qwen2
ollama | llm_load_print_meta: vocab type = BPE
ollama | llm_load_print_meta: n_vocab = 151666
ollama | llm_load_print_meta: n_merges = 151387
ollama | llm_load_print_meta: vocab_only = 0
ollama | llm_load_print_meta: n_ctx_train = 32768
ollama | llm_load_print_meta: n_embd = 3584
ollama | llm_load_print_meta: n_layer = 28
ollama | llm_load_print_meta: n_head = 28
ollama | llm_load_print_meta: n_head_kv = 4
ollama | llm_load_print_meta: n_rot = 128
ollama | llm_load_print_meta: n_swa = 0
ollama | llm_load_print_meta: n_embd_head_k = 128
ollama | llm_load_print_meta: n_embd_head_v = 128
ollama | llm_load_print_meta: n_gqa = 7
ollama | llm_load_print_meta: n_embd_k_gqa = 512
ollama | llm_load_print_meta: n_embd_v_gqa = 512
ollama | llm_load_print_meta: f_norm_eps = 0.0e+00
ollama | llm_load_print_meta: f_norm_rms_eps = 1.0e-06
ollama | llm_load_print_meta: f_clamp_kqv = 0.0e+00
ollama | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama | llm_load_print_meta: f_logit_scale = 0.0e+00
ollama | llm_load_print_meta: n_ff = 18944
ollama | llm_load_print_meta: n_expert = 0
ollama | llm_load_print_meta: n_expert_used = 0
ollama | llm_load_print_meta: causal attn = 1
ollama | llm_load_print_meta: pooling type = 0
ollama | llm_load_print_meta: rope type = 2
ollama | llm_load_print_meta: rope scaling = linear
ollama | llm_load_print_meta: freq_base_train = 1000000.0
ollama | llm_load_print_meta: freq_scale_train = 1
ollama | llm_load_print_meta: n_ctx_orig_yarn = 32768
ollama | llm_load_print_meta: rope_finetuned = unknown
ollama | llm_load_print_meta: ssm_d_conv = 0
ollama | llm_load_print_meta: ssm_d_inner = 0
ollama | llm_load_print_meta: ssm_d_state = 0
ollama | llm_load_print_meta: ssm_dt_rank = 0
ollama | llm_load_print_meta: ssm_dt_b_c_rms = 0
ollama | llm_load_print_meta: model type = ?B
ollama | llm_load_print_meta: model ftype = Q4_0
ollama | llm_load_print_meta: model params = 7.61 B
ollama | llm_load_print_meta: model size = 4.12 GiB (4.65 BPW)
ollama | llm_load_print_meta: general.name = model
ollama | llm_load_print_meta: BOS token = 151644 '<|im_start|>'
ollama | llm_load_print_meta: EOS token = 151645 '<|im_end|>'
ollama | llm_load_print_meta: UNK token = 128244 '<unk>'
ollama | llm_load_print_meta: PAD token = 0 '!'
ollama | llm_load_print_meta: LF token = 148848 'รฤฌ'
ollama | llm_load_print_meta: EOT token = 151645 '<|im_end|>'
ollama | llm_load_print_meta: max token length = 256
ollama | llm_load_tensors: ggml ctx size = 0.30 MiB
ollama | llm_load_tensors: offloading 28 repeating layers to GPU
ollama | llm_load_tensors: offloading non-repeating layers to GPU
ollama | llm_load_tensors: offloaded 29/29 layers to GPU
ollama | llm_load_tensors: CUDA_Host buffer size = 291.59 MiB
ollama | llm_load_tensors: CUDA0 buffer size = 3926.95 MiB
ollama | llama_new_context_with_model: n_ctx = 2048
ollama | llama_new_context_with_model: n_batch = 512
ollama | llama_new_context_with_model: n_ubatch = 512
ollama | llama_new_context_with_model: flash_attn = 0
ollama | llama_new_context_with_model: freq_base = 1000000.0
ollama | llama_new_context_with_model: freq_scale = 1
ollama | llama_kv_cache_init: CUDA0 KV buffer size = 112.00 MiB
ollama | llama_new_context_with_model: KV self size = 112.00 MiB, K (f16): 56.00 MiB, V (f16): 56.00 MiB
ollama | llama_new_context_with_model: CUDA_Host output buffer size = 0.59 MiB
ollama | llama_new_context_with_model: CUDA0 compute buffer size = 303.22 MiB
ollama | llama_new_context_with_model: CUDA_Host compute buffer size = 11.01 MiB
ollama | llama_new_context_with_model: graph nodes = 986
ollama | llama_new_context_with_model: graph splits = 2
ollama | INFO [main] model loaded | tid="140630251470848" timestamp=1728031297
ollama | time=2024-10-04T08:41:37.378Z level=INFO source=server.go:626 msg="llama runner started in 13.57 seconds"
ollama | ggml_cuda_compute_forward: SCALE failed
ollama | CUDA error: an illegal memory access was encountered
ollama | current device: 0, in function ggml_cuda_compute_forward at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2326
ollama | err
ollama | /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
ollama | [GIN] 2024/10/04 - 08:41:54 | 500 | 31.825048562s | 192.168.100.20 | POST "/api/generate"
ollama | time=2024-10-04T08:41:59.776Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.001895153 model=/root/.ollama/models/blobs/sha256-262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0
ollama | time=2024-10-04T08:42:00.027Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.252338468 model=/root/.ollama/models/blobs/sha256-262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0
ollama | time=2024-10-04T08:42:00.276Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.501732338 model=/root/.ollama/models/blobs/sha256-262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0
```
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12 | bug | low | Critical |
2,565,852,081 | pytorch | NJTs with length doesn't support H2D transfer | ### ๐ Describe the bug
Repro:
```python
import torch

x = torch.nested.nested_tensor_from_jagged(
    torch.arange(10),
    offsets=torch.tensor([0, 4, 8, 10]),
    lengths=torch.tensor([4, 4, 2]),
)
x.to("cuda")
```
results in
```
File "/home/vmoens/.conda/envs/tdmpc2/lib/python3.10/site-packages/torch/nested/_internal/ops.py", line 583, in to_copy_default
_tensor_symint_registry[new_offsets] = _tensor_symint_registry[inp._offsets]
File "/home/vmoens/.conda/envs/tdmpc2/lib/python3.10/site-packages/torch/utils/weak.py", line 171, in __getitem__
return self.data[self.ref_type(key)] # CHANGED
KeyError: <weakref at 0x7f715290dc10; to 'Tensor' at 0x7f715c327880>
```
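The `KeyError` above comes from looking up a freshly created tensor in a registry keyed by weak references, where only the original tensor was ever registered. A minimal stdlib-only sketch of that failure pattern (illustrative names, not PyTorch internals):

```python
import weakref


class Payload:
    """Stand-in for a tensor used as a weak registry key."""


registry = weakref.WeakKeyDictionary()

original = Payload()
registry[original] = 42  # only the original object is registered

copy = Payload()  # a "copied" object (like the new offsets) is never registered
try:
    registry[copy]
    missing_key = False
except KeyError:
    missing_key = True

print(missing_key)  # prints: True
```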
This works if `lengths` is not provided.
### Versions
nightly
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: nestedtensor | low | Critical |
2,565,863,658 | storybook | [Bug]: Inline HTML <script> tags get removed from DOM in rendered story | ### Describe the bug
I'm creating a new UI component library for an existing web app in Storybook/HTML.
Component HTML contains inline `<script>` tags, which are getting stripped out of the story DOM:

These inline scripts need to be detectable via `document.querySelector()` inside global script, which are loaded inside `preview-head.html`.
They're returning `null`.
Is there a reason why inline scripts get removed from the DOM?
### Reproduction link
https://github.com/basher/sb-test
### Reproduction steps
1. Load the story from reproduction link.
2. Scripts executing via `preview-head.html` cannot find inline script tags inside story DOM.
### System
Storybook Environment Info:
System:
OS: Windows 10 10.0.19045
CPU: (4) x64 Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
Binaries:
Node: 20.11.0 - C:\Program Files\nodejs\node.EXE
npm: 10.2.4 - C:\Program Files\nodejs\npm.CMD <----- active
Browsers:
Chrome: 129.0.6668.90
Edge: Chromium (127.0.2651.74)
npmPackages:
@storybook/html: ^8.2.3 => 8.3.5
@storybook/html-vite: ^8.2.3 => 8.3.5
storybook: ^8.2.3 => 8.3.5
### Additional context
_No response_ | bug,html | low | Critical |
2,565,875,094 | godot | Fog aerial perspective does not blend seamlessly into the sky | ### Tested versions
`v4.3.stable.mono.official` [77dcf97d8], also on `v4.3.stable.arch_linux`.
### System information
Godot v4.3.stable.mono - Arch Linux #1 SMP PREEMPT_DYNAMIC Wed, 04 Sep 2024 15:16:37 +0000 - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (amdgpu) - 13th Gen Intel(R) Core(TM) i7-13700K (24 Threads)
### Issue description
The [documentation](https://docs.godotengine.org/en/stable/classes/class_environment.html#class-environment-property-fog-aerial-perspective) for `Environment::fog_aerial_perspective` says:
> If set above 0.0 (exclusive), blends between the fog's color and the color of the background [Sky](https://docs.godotengine.org/en/stable/classes/class_sky.html#class-sky).
So if this is set to 1.0, I would expect the fog to be the same colour as the sky. In places where fog density approaches 100%, I would expect to see the sky colour, as if the object has become transparent.
However, that's not what is happening:

This screenshot was made with a `PanoramaSky` consisting of blue at the top and red at the bottom. Fog density is set to 0.3, and aerial perspective is set to 1.0. The spheres have a green albedo. However, in the distance, the spheres do not become red/blue, but rather an intermediate magenta colour.
From a cursory glance at the code, it seems that the shader is reading from the radiance cube map to do its thing. Could it be that it's reading from the wrong mip level, causing too much blending between the colours? That's just a wild guess though.
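If the documented behaviour held, an aerial-perspective value of 1.0 would interpolate the fog colour all the way to the sampled sky colour. A hypothetical sketch of the expected blend (plain Python, not Godot's actual shader code):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))


fog_color = (0.5, 0.5, 0.5)
sky_red = (1.0, 0.0, 0.0)   # sky colour sampled below the horizon
sky_blue = (0.0, 0.0, 1.0)  # sky colour sampled above the horizon

# With fog_aerial_perspective = 1.0, a fully fogged pixel should take on the
# sky colour exactly -- pure red or pure blue, not an averaged magenta.
assert lerp(fog_color, sky_red, 1.0) == sky_red
assert lerp(fog_color, sky_blue, 1.0) == sky_blue
```

A magenta result instead of pure red or blue would be consistent with sampling a pre-blurred mip level that averages the two halves of the panorama together.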
### Steps to reproduce
Just open the attached MRP, which consists of the above scene.
### Minimal reproduction project (MRP)
[AerialPerspectiveTest.zip](https://github.com/user-attachments/files/17255993/AerialPerspectiveTest.zip)
| enhancement,topic:rendering,documentation,topic:3d | low | Major |
2,565,886,367 | rust | Tracking issue: Attribute refactor | While working on #125418 with @m-ou-se, I've interacted quite a bit with attributes in the compiler. I've got some thoughts about the way they currently work. I'm posting this as a mix between an explanation of the status quo and why I think that's an issue, in addition to also serving as a kind of tracking issue for these changes if I've convinced you that this is a problem.
# Quick Overview
From the ground up: there are several syntaxes for macros, and one of them is attributes, which can take [several forms]. Attributes can be expanded, either as a user-defined attribute macro or as an "active" built-in attribute like `#[test]`. However, some attributes are kept around for the entire compilation lifecycle.
These [built-in attributes] are never expanded. Instead, they are kept around and serve as markers or metadata to guide the compilation process at various stages. There are currently around `100` of these.
[several forms]: https://doc.rust-lang.org/nightly/reference/attributes.html#meta-item-attribute-syntax
[built-in attributes]: https://rustc-dev-guide.rust-lang.org/attributes.html#builtininert-attributes
# The problem
<details>
<summary>While most of what is parsed is later lowered during [`rustc_ast_lowering`], attributes, for the most part, are not.</summary>
Many crates under `compiler/` depend on `rustc_ast` *just* to use `ast::Attribute`. Let's see what that means:
[`rustc_ast_lowering`]: https://github.com/rust-lang/rust/tree/master/compiler/rustc_ast_lowering
## Partial lowering and impossible states
One part of attributes actually *is* lowered, attributes of the form `#[key = "value"]` aka `MetaNameValueStr`. To be able to do that, the ast contains an enum `AttrArgsEq` that already has a variant for when eventually it is lowered:
https://github.com/rust-lang/rust/blob/11ee3a830b8537976d54805331cc626604afbb63/compiler/rustc_ast/src/ast.rs#L1697-L1700
For one part of the compilation process, the `Ast` variant is always active and `Hir` is completely unused, while later in the compiler the reverse is true. In some places people didn't realize this and provided implementations for both cases even though only one could ever occur, while in other places the dead variant is marked as unreachable, like here:
https://github.com/rust-lang/rust/blob/11ee3a830b8537976d54805331cc626604afbb63/compiler/rustc_ast/src/visit.rs#L1241
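One way to rule out such impossible states is to give each compilation phase its own type, so post-lowering code simply cannot observe the pre-lowering variant. A hypothetical sketch of that idea (the names stand in for the real AST/HIR payloads, not actual compiler types):

```rust
// Pre-lowering representation: only AST code ever holds this type.
struct AstAttrArgs(String);

// Post-lowering representation: only HIR code ever holds this type.
struct HirAttrArgs(String);

// Lowering consumes the AST form, so it cannot leak past this point.
fn lower(args: AstAttrArgs) -> HirAttrArgs {
    HirAttrArgs(args.0)
}

fn main() {
    let hir = lower(AstAttrArgs("doc = \"value\"".to_string()));
    // HIR consumers only ever see HirAttrArgs; no unreachable!() arms needed.
    println!("{}", hir.0);
}
```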
Another case of partial lowering is the tokens field:
https://github.com/rust-lang/rust/blob/11ee3a830b8537976d54805331cc626604afbb63/compiler/rustc_ast_lowering/src/lib.rs#L948
Which is later extensively defended against, making sure this really happened:
https://github.com/rust-lang/rust/blob/11ee3a830b8537976d54805331cc626604afbb63/compiler/rustc_query_system/src/ich/impls_syntax.rs#L41-L54
### Parse, don't validate.
I'm a big fan of the blog post [Parse, don't validate]. Generally, Rust's type system makes this pattern the most obvious thing to do, and it's what I teach my university students every year. However, that is exactly what we aren't doing with attributes. In [`rustc_passes/check_attr.rs`] we first validate extensively and emit various diagnostics. However, every single attribute is later parsed again where it is needed. I started making a small overview, but `100` attributes is a lot:

But basically, of the first 19 attributes I looked at, 5 are `Word` attributes and trivial, a few are parsed together, but in total I've found 11 completely distinct and custom parsing logics, not reusing any parts, spread over as many files and compiler crates.
I lied a little there, the parsing does reuse some things. For example, the attributes are turned into `MetaItem`s using common logic. However, that doesn't change the fact that attributes are effectively re-validated scattered around the compiler, and many of these places have more diagnostics of their own, that could've happened during the earlier validation. It also means that at a very late stage in the compiler, we are still dealing with parsing `TokenStream`s, something that you'd think we should abstract away a little after parsing.
An example of such custom parsing logic:
https://github.com/rust-lang/rust/blob/11ee3a830b8537976d54805331cc626604afbb63/compiler/rustc_middle/src/ty/context.rs#L1447-L1469
[Parse, don't validate]: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/
[`rustc_passes/check_attr.rs`]: https://github.com/rust-lang/rust/blob/11ee3a830b8537976d54805331cc626604afbb63/compiler/rustc_passes/src/check_attr.rs
### Flexibility
Finally, though I have fewer concrete examples of this, sticking to `ast::Attribute` throughout the compiler removes quite some flexibility. Everything has to fit into an `ast::Attribute`, or if it doesn't, you'd have to create more variants like `AttrArgsEq::Hir` to support something in the ast that shouldn't even be part of the ast, forcing you to add a myriad of exceptions in parts of the compiler where such an extra variant isn't relevant yet. Specifically, for #125418 we noticed this because we wanted to do some limited form of name resolution for a path stored in an attribute, which proved next to impossible.
</details>
# Ideas
<details>
<summary> Lower attributes during `rustc_ast_lowering`. </summary>
I've got 90% of a commit ready to do this, and it's what sparked the idea for this issue. It leads to some code duplication. I'm a little unhappy about it, because it forces a lot of changes across the entire compiler, exactly because attribute parsing now happens in so many places. However, it already means that a lot of assertions can be removed because at some part of the compiler, the fact that an `Attribute` can't have certain fields and values anymore becomes encoded in the type system. I'll open a PR for this soon, and we can discuss whether we think this is a good first step.
What also doesn't help is that `rustc_attr` currently has logic to validate attributes, but these functions are called in wildly different parts of the compiler. Some functions here validate actual `ast::Attribute`s from before lowering, while other functions validate new `hir::Attribute`s. Bugs here seem easy to make, since even though currently these are the same type, they don't always contain the same fields....
</details>
<details>
<summary>The "real solution": parse, don't validate</summary>
As I see it, what would make attributes so much nicer to work with, is if there was a place in the compiler (something like the `rustc_attr` crate, but actually good) where all attributes are turned from their ast tokeny representation into some specific attribute representation. Something like the following, based on the examples I've looked at in the table I showed a little higher up:
```rust
enum InlineKind {
Always,
Never,
Normal
}
enum Attribute {
Diagnostic {
message: Symbol,
name: Symbol,
notes: Vec<Symbol>
},
Inline(InlineKind),
Coverage(bool),
// ...
}
```
This structure contains only the information necessary to use each attribute, and all the diagnostics happen while parsing into this structure. That has the added benefit that this datastructure itself serves as great documentation as to what values an attribute allows. It's super clear here that a `#[diagnostic]` attributes contains a message, name and some notes. Currently, you'd have to make sure the written documentation for this attribute is up-to-date enough.
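A use site can then match exhaustively on the parsed form, so adding a new attribute variant forces every consumer to handle it. A hypothetical sketch of such a consumer (simplified variants and made-up mappings, not the real compiler types):

```rust
enum InlineKind {
    Always,
    Never,
    Normal,
}

enum Attribute {
    Inline(InlineKind),
    Coverage(bool),
}

// The consumer never touches tokens; the type guarantees the data is valid.
fn llvm_attr(attr: &Attribute) -> &'static str {
    match attr {
        Attribute::Inline(InlineKind::Always) => "alwaysinline",
        Attribute::Inline(InlineKind::Never) => "noinline",
        Attribute::Inline(InlineKind::Normal) => "inlinehint",
        Attribute::Coverage(true) => "instrument-coverage",
        Attribute::Coverage(false) => "",
    }
}

fn main() {
    println!("{}", llvm_attr(&Attribute::Inline(InlineKind::Always)));
}
```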
The translation from `ast::Attribute` to this new parsed attribute should, I think, happen during AST to HIR lowering.
I think the advantages of this should be pretty obvious, based on the examples I've given of the problems with the current approach. However, I could think of some potential blockers people might care about:
* some errors might now be thrown in different stages of the compiler (earlier). I'd say this can actually be an advantage, but I've not talked to enough people to know whether this is a problem anywhere
* A file with code to parse these attributes will contain code of many different features. I personally like that, but it also means that those features themselves become slightly less self-contained.
* Validity of attributes still needs to be checked on-site. (like `track_caller` not being valid on closures, given certain feature flags)
* Affects large parts of the compiler, also unstable parts, where people are actively opening merge requests and might run into conflicts.
Part two I have not worked on personally. I might, if I find enough time, but if someone feels very inspired to pick this up or lead this (or tell my why this is a dumb idea) feel free to.
</details>
---
Everything above was my original issue, that changes were needed to attributes. Everything below is tracking the progress of these changes
---
# Steps
## Already completed
- [x] remove attribute IDs from hir statistics https://github.com/rust-lang/rust/pull/132576
- [x] make clippy's attribute lints work on the ast instead of hir
- [x] https://github.com/rust-lang/rust/pull/132598
- [x] https://github.com/rust-lang/rust-clippy/pull/13657
- [x] https://github.com/rust-lang/rust-clippy/pull/13658
- [x] introduce hir attributes https://github.com/rust-lang/rust/pull/131808
- [x] split up builtins.rs into files for individual attributes. Also move types to `rustc_attr_data_structures` and rename `rustc_attr` to `rustc_attr_parsing`: https://github.com/rust-lang/rust/pull/134381
## Future
### Introduce new logic to systematically parse attributes
So far these changes might be nice, but neither actually makes attribute parsing better.
A lot of these changes are already implemented. I have made [this pr](https://github.com/jdonszelmann/rust/pull/5 ) to my own fork for me to keep track of everything, it contains a lot of the changes I want to make, but it's currently too large to review. I'm a bit further ahead, since I wanted to make 100% sure that what I'm planning works out. So, I've already experimented with converting around 15 different kinds of attributes to it, and testing the compiler on that. I didn't want to propose something that wouldn't actually work.
Below follows an explanation of some of it, but feel free to skip that if you're not interested in that right now. When I file this, I'll of course motivate it more. In the code it's already documented pretty well.
#### Define an enum `AttributeKind` that *exhaustively lists* all parsed attributes.
https://github.com/jdonszelmann/rust/blob/458f1d026c6dfe7345829326793db0f26fda6b77/compiler/rustc_attr_data_structures/src/attributes.rs#L129
It contains a variant for all attributes in the compiler, but when I make a PR for this I'll of course start with only one or two.
For a bit, [Attribute](https://github.com/jdonszelmann/rust/blob/458f1d026c6dfe7345829326793db0f26fda6b77/compiler/rustc_hir/src/hir.rs#L1004)s will be an enum. An attribute is either `Parsed`, or `Unparsed`. Unparsed attributes will basically be hir attributes like introduced in https://github.com/rust-lang/rust/pull/131808. At some point in the future, almost no attributes will be of the `Unparsed` type anymore, except custom tool attributes which we cannot parse.
#### Define "attribute groups", sets of syntactical attributes that are parsed together.
[Here's an example of that](https://github.com/jdonszelmann/rust/blob/458f1d026c6dfe7345829326793db0f26fda6b77/compiler/rustc_attr_parsing/src/attributes/stability.rs#L33). You can see the group for `#[stable(..)]`, `#[unstable(..)]` and `#[rustc_allowed_through_unstable_modules]`
These form a single group because they result in a single parsed `AttributeKind`: the attributes either directly conflict or modify each other. Attribute groups have state. They work in two phases:
1. Iterate over all relevant attributes. Which ones are relevant is defined by the group. This modifies the state of the group.
2. When all relevant attributes have passed, each group can report the result of what it saw as either an error, or a valid `AttributeKind`.
The stability group accepts `stable`, `unstable` and `rustc_allowed_through_unstable_modules`, rejects `stable` if it already saw `unstable` and the other way round, and finally creates one `AttributeKind` containing either a stability, instability, and a boolean whether it also saw `rustc_allowed_through_unstable_modules`.
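The two phases can be sketched as a small state machine: feed it the relevant attributes one by one, then ask it to finish. A hypothetical, heavily simplified version (not the real rustc API):

```rust
#[derive(Default)]
struct StabilityGroup {
    stable: bool,
    unstable: bool,
}

impl StabilityGroup {
    // Phase 1: called once per relevant attribute, mutating group state.
    fn accept(&mut self, name: &str) {
        match name {
            "stable" => self.stable = true,
            "unstable" => self.unstable = true,
            _ => {}
        }
    }

    // Phase 2: called after all attributes were seen; returns the parsed
    // result, or None where the real implementation would emit an error.
    fn finish(self) -> Option<&'static str> {
        match (self.stable, self.unstable) {
            (true, true) => None, // conflicting: #[stable] + #[unstable]
            (true, false) => Some("Stability::Stable"),
            (false, true) => Some("Stability::Unstable"),
            (false, false) => None,
        }
    }
}

fn main() {
    let mut group = StabilityGroup::default();
    for attr in ["stable", "rustc_allowed_through_unstable_modules"] {
        group.accept(attr);
    }
    println!("{:?}", group.finish()); // prints: Some("Stability::Stable")
}
```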
#### Define certain shortcuts for common kinds of attribute groups.
For example, an attribute that can only appear a single time, or are only a single word (no parameters), or where only either the first or the last one is accepted and the rest should warn. I've already [crudely done this for attributes that can only appear a single time](https://github.com/jdonszelmann/rust/blob/458f1d026c6dfe7345829326793db0f26fda6b77/compiler/rustc_attr_parsing/src/attributes/transparency.rs#L13) but I want it to match the same names as [AttributeDuplicates](https://github.com/jdonszelmann/rust/blob/c45e831d8fcd33d656047ba97d263c4b91a00735/compiler/rustc_feature/src/builtin_attrs.rs#L105), since that's effectively what these simplifications are. That way, these policies for when an attribute is duplicate becomes a property of how we parse it, making it impossible to do it wrong. I recently found a bug with duplicate `#[inline]` where it warns that the unused attribute is the last one, while actually it's the first one that's unused. That will become impossible.
#### Define a macro to match on attributes on nodes
You can see an example of that [here](https://github.com/jdonszelmann/rust/blob/458f1d026c6dfe7345829326793db0f26fda6b77/compiler/rustc_passes/src/stability.rs#L121)
---
Note that the attribute parsers will usually run as part of ast lowering, but in select cases it is desired to run them early.
I did consider this, and this is indeed possible using the approach described here.
### Create a new attribute parser
Currently, attributes are not really parsed, I think it can more accurately be described as being decomposed. `rustc_ast::attr` has all kinds of methods to convert between somewhat abstract types like `AttrItem`, `Attribute`, `MetaItem`, `MetaItemKind`, etc, representing parts of attributes but mostly containing tokens. When attributes are parsed, they are broken up and often still stored as smaller instances of these types, while what we really want to do is get the information out and throw away the tokens when parsing.
I've created a new parser that is better suited to do this, making it harder to make errors. Even though it's already ready, I'll introduce this separately so we can talk about its benefits separately.
### introduce `rustc_attr_validation`
At this point, not much has changed as to validation. Next to `rustc_attr_parsing` and `rustc_attr_data_structures`, I intend to create `rustc_attr_validation`. This will represent all the logic *after* ast lowering, for when a `tcx` is available and we can run queries. Some of this currently happens in `rustc_passes/check_attr.rs`. However, even the fact that we will be able to exhaustively match on an enum of attributes will make mistakes harder. I intend to make more changes, such as forcing new attributes to list what kinds of targets they're valid on.
### Document these changes
Of course, I'll already have documentation on all the previous changes in code. However, I intend to write a post on the dev guide as well to make sure that, in the future, people know how to use the infrastructure for attributes.
### Port all attributes to this system
At this point, with all infrastructure in place, I expect a few PRs porting all attributes to be parsed in `rustc_attr_parsing`. I might ask others to help here, which is now possible when things are documented in the devguide.
### Also introduce some parsed attributes in the AST
This is an idea of @oli-obk. It might be good to also make a smaller enum of parsed attributes in `ast::Attribute`, especially for attributes that can be discarded when lowering, or which we need, or need to validate, earlier on. When we validate them while parsing, we can make fewer mistakes. These variants can then also contain fields that aren't just tokens, to support, for example, resolving names as with the `defines` attribute.
# Related issues
I intend to solve these systematically, as in, by rewriting how attributes are handled these should not be issues anymore.
- https://github.com/rust-lang/rust/issues/133791
- https://github.com/rust-lang/rust/issues/132464
- https://github.com/rust-lang/rust/issues/132391
- https://github.com/rust-lang/rust/issues/131787
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"jdonszelmann"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-attributes,T-compiler,C-discussion,A-ast | low | Critical |
2,565,927,279 | storybook | [Bug]: common CLI commands are missing from --help output | ### Describe the bug
The `dev`, `build` and `init` commands (at least) are missing from `--help` output.
```
> ./node_modules/.bin/storybook -V
8.3.3
```
```
> ./node_modules/.bin/storybook --help
Usage: cli <command> [options]
Options:
-V, --version output the version number
-h, --help display help for command
Commands:
add [options] <addon> Add an addon to your Storybook
remove [options] <addon> Remove an addon from your Storybook
upgrade [options] Upgrade your Storybook packages to v8.3.3
info [options] Prints debugging information about the local environment
migrate [options] [migration] Run a Storybook codemod migration on your source files
sandbox|repro [options] [filterValue] Create a sandbox from a set of possible templates
link [options] <repo-url-or-directory> Pull down a repro from a URL (or a local directory), link it, and run storybook
automigrate [options] [fixId] Check storybook for incompatibilities or migrations and apply fixes
doctor [options] Check Storybook for known problems and provide suggestions or fixes
help [command] display help for command
```
### Reproduction link
storybook --help
### Reproduction steps
1. Run `storybook --help`
### System
```
> npx storybook info
Storybook Environment Info:
System:
OS: macOS 14.6.1
CPU: (8) x64 Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz
Shell: 5.1.8 - /usr/local/bin/bash
Binaries:
Node: 20.11.1 - ~/.asdf/installs/nodejs/20.11.1/bin/node
npm: 10.2.4 - ~/.asdf/plugins/nodejs/shims/npm <----- active
Browsers:
Chrome: 129.0.6668.90
Edge: 129.0.2792.79
Safari: 17.6
npmPackages:
@storybook/addon-actions: ^8.3.3 => 8.3.3
@storybook/addon-controls: ^8.3.3 => 8.3.3
@storybook/addon-docs: ^8.3.3 => 8.3.3
@storybook/addon-essentials: ^8.3.3 => 8.3.3
@storybook/addon-interactions: ^8.3.3 => 8.3.3
@storybook/addon-links: ^8.3.3 => 8.3.3
@storybook/addon-onboarding: ^8.3.3 => 8.3.3
@storybook/addon-webpack5-compiler-swc: ^1.0.5 => 1.0.5
@storybook/blocks: ^8.3.3 => 8.3.3
@storybook/preview-api: ^8.3.3 => 8.3.3
@storybook/react: ^8.3.3 => 8.3.3
@storybook/react-webpack5: ^8.3.3 => 8.3.3
@storybook/test: ^8.3.3 => 8.3.3
@storybook/test-runner: ^0.19.1 => 0.19.1
chromatic: ^11.0.0 => 11.10.4
eslint-plugin-storybook: ^0.8.0 => 0.8.0
storybook: ^8.3.3 => 8.3.3
storybook-addon-apollo-client: ^7.3.0 => 7.3.0
```
### Additional context
_No response_ | bug,cli | low | Critical |
2,565,983,910 | PowerToys | Request to add a network speed display feature | ### Description of the new feature / enhancement
I hope the status bar could optionally show the upload and download speed of a selected network adapter.
### Scenario when this would be used?
When downloading files inside some applications, the download progress can appear stuck; being able to check the network adapter's download speed in the status bar would help determine whether the download task is actually still running.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,565,995,556 | godot | 4.x crash on mobile renderer on MacOS 15 (intel CPU, Radeon Pro 455) | ### Tested versions
- Reproducible in 4.3 (dev2, dev3), 4.2, and 4.1
### System information
MacOS 15, Intel CPU, AMD Radeon Pro 455
### Issue description
After creating "mobile" project, GoDot crashes when opening it. Compatibility mode works.
<details>
<summary>Report</summary>
```
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: Godot [6624]
Path: /Applications/Godot.app/Contents/MacOS/Godot
Identifier: org.godotengine.godot
Version: 4.3 (4.3)
Code Type: X86-64 (Native)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2024-10-02 20:56:16.5984 -0700
OS Version: macOS 15.0 (24A335)
Report Version: 12
Bridge OS Version: 3.0 (14Y910)
Anonymous UUID: D92644F9-11C0-552A-8951-FFBC1964841A
Sleep/Wake UUID: 6165F556-7975-4FA7-931A-DE03D255B921
Time Awake Since Boot: 110000 seconds
Time Since Wake: 1811 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 6 Abort trap: 6
Terminating Process: Godot [6624]
Kernel Triage:
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff81ba88b52 __pthread_kill + 10
1 libsystem_pthread.dylib 0x7ff81bac2f85 pthread_kill + 262
2 libsystem_c.dylib 0x7ff81b9e3b19 abort + 126
3 libsystem_c.dylib 0x7ff81b9e2ddc __assert_rtn + 314
4 Metal 0x7ff826cf1dd2 MTLReportFailure.cold.1 + 41
5 Metal 0x7ff826ccba8a MTLReportFailure + 513
6 Metal 0x7ff826b4cda9 -[_MTLCommandEncoder dealloc] + 123
7 AMDMTLBronzeDriver 0x3bcc72229 -[BronzeMtlRenderCmdEncoder dealloc] + 157
8 Godot 0x109a08dd1 0x109559000 + 4914641
9 Godot 0x109a05197 0x109559000 + 4899223
10 Godot 0x109a73f56 0x109559000 + 5353302
11 Godot 0x109a72607 0x109559000 + 5346823
12 Godot 0x109a7087e 0x109559000 + 5339262
13 Godot 0x1099ba38a vkQueueSubmit + 74
14 Godot 0x10ad104c9 RenderingDeviceDriverVulkan::command_queue_execute_and_present(RenderingDeviceDriver::CommandQueueID, VectorView<RenderingDeviceDriver::SemaphoreID>, VectorView<RenderingDeviceDriver::CommandBufferID>, VectorView<RenderingDeviceDriver::SemaphoreID>, RenderingDeviceDriver::FenceID, VectorView<RenderingDeviceDriver::SwapChainID>) + 1737
15 Godot 0x10d2343c9 RenderingDevice::_execute_frame(bool) + 617
16 Godot 0x10d212921 RenderingDevice::_flush_and_stall_for_all_frames() + 161
17 Godot 0x10d229fcd RenderingDevice::screen_prepare_for_drawing(int) + 845
18 Godot 0x10d3a6240 RendererCompositorRD::blit_render_targets_to_screen(int, BlitToScreen const*, int) + 48
19 Godot 0x10d204e6e RendererViewport::draw_viewports(bool) + 5086
20 Godot 0x10d2d85b1 RenderingServerDefault::_draw(bool, double) + 321
21 Godot 0x109ea9f95 Main::iteration() + 1285
22 Godot 0x109e3ae1a OS_MacOS::run() + 154
23 Godot 0x109e69ef3 main + 387
24 dyld 0x7ff81b7352cd start + 1805
Thread 1:
0 libsystem_pthread.dylib 0x7ff81babebcc start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff81babebcc start_wqthread + 0
Thread 3:
0 libsystem_pthread.dylib 0x7ff81babebcc start_wqthread + 0
Thread 4:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10db90f1b _IP_ResolverPrivate::_thread_function(void*) + 171
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 5:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 6:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 7:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 8:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 9:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 10:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 11:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 12:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10dff923e WorkerThreadPool::_thread_function(void*) + 270
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 13:: com.apple.NSEventThread
0 libsystem_kernel.dylib 0x7ff81ba81e0e mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x7ff81ba90622 mach_msg2_internal + 84
2 libsystem_kernel.dylib 0x7ff81ba88f16 mach_msg_overwrite + 649
3 libsystem_kernel.dylib 0x7ff81ba820ff mach_msg + 19
4 CoreFoundation 0x7ff81bba8c48 __CFRunLoopServiceMachPort + 143
5 CoreFoundation 0x7ff81bba76cd __CFRunLoopRun + 1393
6 CoreFoundation 0x7ff81bba6b6c CFRunLoopRunSpecific + 536
7 AppKit 0x7ff81f630d6d _NSEventThread + 127
8 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
9 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 14:: caulk.messenger.shared:17
0 libsystem_kernel.dylib 0x7ff81ba81d8a semaphore_wait_trap + 10
1 caulk 0x7ff826da5cf1 caulk::semaphore::timed_wait(double) + 151
2 caulk 0x7ff826da5c1c caulk::concurrent::details::worker_thread::run() + 30
3 caulk 0x7ff826da595e void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*>>>(void*) + 41
4 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
5 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 15:: caulk.messenger.shared:high
0 libsystem_kernel.dylib 0x7ff81ba81d8a semaphore_wait_trap + 10
1 caulk 0x7ff826da5cf1 caulk::semaphore::timed_wait(double) + 151
2 caulk 0x7ff826da5c1c caulk::concurrent::details::worker_thread::run() + 30
3 caulk 0x7ff826da595e void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*>>>(void*) + 41
4 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
5 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 16:: caulk::deferred_logger
0 libsystem_kernel.dylib 0x7ff81ba81d8a semaphore_wait_trap + 10
1 caulk 0x7ff826da5cf1 caulk::semaphore::timed_wait(double) + 151
2 caulk 0x7ff826da5c1c caulk::concurrent::details::worker_thread::run() + 30
3 caulk 0x7ff826da595e void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*>>>(void*) + 41
4 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
5 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 17:: com.apple.audio.IOThread.client
0 libsystem_kernel.dylib 0x7ff81ba81d96 semaphore_wait_signal_trap + 10
1 caulk 0x7ff826dbf315 caulk::mach::semaphore::wait_signal_or_error(caulk::mach::semaphore&) + 23
2 CoreAudio 0x7ff81e43b861 HALC_ProxyIOContext::IOWorkLoop() + 5515
3 CoreAudio 0x7ff81e439ad8 invocation function for block in HALC_ProxyIOContext::HALC_ProxyIOContext(unsigned int, unsigned int) + 148
4 CoreAudio 0x7ff81e5e7041 HALC_IOThread::Entry(void*) + 73
5 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
6 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 18:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10a9a16ac 0x109559000 + 21268140
4 Godot 0x10a9a151d 0x109559000 + 21267741
5 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
6 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 19:
0 libsystem_kernel.dylib 0x7ff81ba84876 __semwait_signal + 10
1 libsystem_c.dylib 0x7ff81b972cf1 nanosleep + 199
2 Godot 0x10ace1fcb OS_Unix::delay_usec(unsigned int) const + 59
3 Godot 0x10ae55142 EditorExportPlatformAndroid::_check_for_changes_poll_thread(void*) + 8450
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 20:
0 libsystem_kernel.dylib 0x7ff81ba84876 __semwait_signal + 10
1 libsystem_c.dylib 0x7ff81b972cf1 nanosleep + 199
2 Godot 0x10ace1fcb OS_Unix::delay_usec(unsigned int) const + 59
3 Godot 0x10aeb3dc2 EditorExportPlatformIOS::_check_for_changes_poll_thread(void*) + 11298
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 21:
0 libsystem_kernel.dylib 0x7ff81ba849aa __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81bac37a8 _pthread_cond_wait + 1193
2 libc++.1.dylib 0x7ff81ba07a04 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
3 Godot 0x10bc4925b TilesEditorUtils::_thread() + 315
4 Godot 0x10da575bb Thread::callback(unsigned long long, Thread::Settings const&, void (*)(void*), void*) + 91
5 Godot 0x10da57924 0x109559000 + 72345892
6 libsystem_pthread.dylib 0x7ff81bac3253 _pthread_start + 99
7 libsystem_pthread.dylib 0x7ff81babebef thread_start + 15
Thread 22:
0 libsystem_pthread.dylib 0x7ff81babebcc start_wqthread + 0
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000000 rbx: 0x0000000000000006 rcx: 0x00007ff7b69a3448 rdx: 0x0000000000000000
rdi: 0x0000000000000103 rsi: 0x0000000000000006 rbp: 0x00007ff7b69a3470 rsp: 0x00007ff7b69a3448
r8: 0x000000000000002e r9: 0x0000000000000086 r10: 0x0000000000000000 r11: 0x0000000000000246
r12: 0x0000000429cc5000 r13: 0xffffffffffffffff r14: 0x0000000000000103 r15: 0x0000000000000016
rip: 0x00007ff81ba88b52 rfl: 0x0000000000000246 cr2: 0x0000000000000000
Logical CPU: 0
Error Code: 0x02000148
Trap Number: 133
Binary Images:
0x109559000 - 0x110002fff org.godotengine.godot (4.3) <27f56adc-28ac-3d00-bc8a-2a457e82a447> /Applications/Godot.app/Contents/MacOS/Godot
0x111331000 - 0x11133dfff com.apple.CoreWiFi (1.0) <c356ba94-c112-3326-9011-74fc0bb4dfde> /System/Library/PrivateFrameworks/CoreWiFi.framework/Versions/A/CoreWiFi
0x1115ac000 - 0x111747fff CoreWiFiOld.dylib (*) <d5ce0007-36ac-37a1-a5d7-07ad7365e60a> /System/Library/PrivateFrameworks/CoreWiFi.framework/Versions/A/CoreWiFiOld.dylib
0x111310000 - 0x111311fff com.apple.IO80211 (1.0) <c0791260-3db4-30e8-962c-1d7a90bbe2d5> /System/Library/PrivateFrameworks/IO80211.framework/Versions/A/IO80211
0x11131b000 - 0x11131bfff com.apple.WiFiPeerToPeer (652.55.0) <6de18c87-b584-3fba-b9f2-5439ee94f3f7> /System/Library/PrivateFrameworks/WiFiPeerToPeer.framework/Versions/A/WiFiPeerToPeer
0x17aa06000 - 0x17aa59fff IO80211Old.dylib (*) <d5ce0007-f5f4-3ab8-a256-05df5202823b> /System/Library/PrivateFrameworks/IO80211.framework/Versions/A/IO80211Old.dylib
0x1ea573000 - 0x1ea59afff WiFiPeerToPeerOld.dylib (*) <d5ce0007-9684-3a09-bf78-fb3a4f8bb7b1> /System/Library/PrivateFrameworks/WiFiPeerToPeer.framework/Versions/A/WiFiPeerToPeerOld.dylib
0x11135e000 - 0x111360fff com.apple.CoreAuthentication.SharedUtils (1.0) <79df8b5d-5ed4-370d-929f-4ba9757d729d> /System/Library/Frameworks/LocalAuthentication.framework/Support/SharedUtils.framework/Versions/A/SharedUtils
0x2615dd000 - 0x26160afff SharedUtilsOld.dylib (*) <d5ce0007-348f-3aeb-8b50-807e45ccc880> /System/Library/Frameworks/LocalAuthentication.framework/Support/SharedUtils.framework/Versions/A/SharedUtilsOld.dylib
0x111325000 - 0x111325fff com.apple.framework.CoreWLAN (16.0) <cf40012c-37ff-356e-863f-24c150978974> /System/Library/Frameworks/CoreWLAN.framework/Versions/A/CoreWLAN
0x2cbf96000 - 0x2cc010fff CoreWLANOld.dylib (*) <d5ce0007-0117-3ddb-beb8-5886a5f8b758> /System/Library/Frameworks/CoreWLAN.framework/Versions/A/CoreWLANOld.dylib
0x34bf64000 - 0x34bf70fff libobjc-trampolines.dylib (*) <a732c7f4-a3c1-39e5-9fc3-5e1deb73a584> /usr/lib/libobjc-trampolines.dylib
0x34cd01000 - 0x34d240fff com.apple.driver.AppleIntelSKLGraphicsMTLDriver (18.8.4) <d5cf0007-2d61-3ea5-8a1d-d9a27a05d402> /System/Library/Extensions/AppleIntelSKLGraphicsMTLDriver.bundle/Contents/MacOS/AppleIntelSKLGraphicsMTLDriver
0x34bf45000 - 0x34bf47fff impostor.dylib (*) <fa8fefc5-c515-3c33-9cb5-10763f8e4fe9> /System/Library/Extensions/AppleIntelSKLGraphicsMTLDriver.bundle/Contents/MacOS/impostor.dylib
0x3bcbd3000 - 0x3bce2efff com.apple.AMDMTLBronzeDriver (4.8.101) <d5ce0007-6797-39a6-a03b-96e9e9b5c574> /System/Library/Extensions/AMDMTLBronzeDriver.bundle/Contents/MacOS/AMDMTLBronzeDriver
0x34bf52000 - 0x34bf54fff impostor.dylib (*) <d35343c1-da83-3c5d-a69f-bc8d1afb8cad> /System/Library/Extensions/AMDMTLBronzeDriver.bundle/Contents/MacOS/impostor.dylib
0x42123f000 - 0x421246fff com.apple.GameController.KeyboardAndMouseSupport (*) <49b98cfe-2ce0-3257-a2da-e55bd3c7010b> /System/Library/Frameworks/GameController.framework/Versions/A/Resources/KeyboardAndMouseSupport.bundle/Contents/MacOS/KeyboardAndMouseSupport
0x42777d000 - 0x4278c1fff com.apple.audio.units.Components (1.14) <4a933320-cc7d-3903-9407-a83122184e76> /System/Library/Components/CoreAudio.component/Contents/MacOS/CoreAudio
0x4274d8000 - 0x4274dcfff com.apple.audio.AppleHDAHALPlugIn (600.2) <f4bce0db-6511-31df-8726-442eeeffb04c> /System/Library/Extensions/AppleHDA.kext/Contents/PlugIns/AppleHDAHALPlugIn.bundle/Contents/MacOS/AppleHDAHALPlugIn
0x429dbe000 - 0x429e00fff com.apple.cmio.DAL.VDC-4 (810.0) <1ac83e82-d960-399f-bc3a-0c5d57b63e60> /System/Library/Frameworks/CoreMediaIO.framework/Versions/A/Resources/VDC.plugin/Contents/MacOS/VDC
0x7ff81ba81000 - 0x7ff81babcfff libsystem_kernel.dylib (*) <a0aee5ca-4298-3070-82f9-ea72229f36e5> /usr/lib/system/libsystem_kernel.dylib
0x7ff81babd000 - 0x7ff81bac8fff libsystem_pthread.dylib (*) <c0db9cf9-86ec-31d4-a557-2c07945fd8f2> /usr/lib/system/libsystem_pthread.dylib
0x7ff81b963000 - 0x7ff81b9ebff7 libsystem_c.dylib (*) <2d4e63ef-e31c-3cc1-94ec-2b7e28b9782f> /usr/lib/system/libsystem_c.dylib
0x7ff826b29000 - 0x7ff826da3ff7 com.apple.Metal (366.22) <e40222b9-271d-34e3-890e-8c8c555a81a9> /System/Library/Frameworks/Metal.framework/Versions/A/Metal
0x7ff81b72f000 - 0x7ff81b7bb32f dyld (*) <e6056c94-fc2d-3517-b1e1-46d8eb58a10e> /usr/lib/dyld
0x0 - 0xffffffffffffffff ??? (*) <00000000-0000-0000-0000-000000000000> ???
0x7ff81b9ec000 - 0x7ff81ba68ffb libc++.1.dylib (*) <e35e82f9-4037-35da-99f0-4d09be1d9721> /usr/lib/libc++.1.dylib
0x7ff81bb2c000 - 0x7ff81bfcbff2 com.apple.CoreFoundation (6.9) <a7324227-eb88-3393-8efe-10a9f3d28064> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7ff81f49b000 - 0x7ff820935ffd com.apple.AppKit (6.9) <55408426-52c7-3b83-9097-0a12aa2620e1> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
0x7ff826da4000 - 0x7ff826dc7fff com.apple.audio.caulk (1.0) <09fe8d44-b9ca-35bf-ab5b-ddb42f62fee0> /System/Library/PrivateFrameworks/caulk.framework/Versions/A/caulk
0x7ff81e254000 - 0x7ff81e992fff com.apple.audio.CoreAudio (5.0) <d3a27106-6ae2-354d-9402-56d6e05f8589> /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=903.9M resident=0K(0%) swapped_out_or_unallocated=903.9M(100%)
Writable regions: Total=14.7G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=14.7G(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Accelerate framework 128K 1
Activity Tracing 256K 1
CG image 168K 9
ColorSync 232K 26
CoreAnimation 332K 39
CoreGraphics 12K 2
CoreGraphics (reserved) 8K 1 reserved VM address space (unallocated)
CoreUI image data 1360K 10
Foundation 16K 1
Kernel Alloc Once 8K 1
MALLOC 2.6G 748
MALLOC guard page 48K 12
STACK GUARD 56.1M 23
Stack 19.2M 24
VM_ALLOCATE 96.1M 20
VM_ALLOCATE (reserved) 11.9G 18 reserved VM address space (unallocated)
__CTF 824 1
__DATA 21.8M 625
__DATA_CONST 59.0M 622
__DATA_DIRTY 1476K 211
__FONT_DATA 2352 1
__LINKEDIT 199.3M 22
__OBJC_RO 76.1M 1
__OBJC_RW 2354K 2
__TEXT 704.6M 646
__TPRO_CONST 272K 2
dsce.got 124K 1
dyld private memory 1408K 3
mapped file 197.8M 45
owned unmapped memory 424K 1
shared memory 1408K 22
=========== ======= =======
TOTAL 16.0G 3141
TOTAL, minus reserved VM space 4.1G 3141
-----------
Full Report
-----------
{"app_name":"Godot","timestamp":"2024-10-02 20:56:33.00 -0700","app_version":"4.3","slice_uuid":"27f56adc-28ac-3d00-bc8a-2a457e82a447","build_version":"4.3","platform":1,"bundleID":"org.godotengine.godot","share_with_app_devs":1,"is_first_party":0,"bug_type":"309","os_version":"macOS 15.0 (24A335)","roots_installed":0,"name":"Godot","incident_id":"F976F85F-975D-4541-9CFF-B5F31324A1A4"}
{
"uptime" : 110000,
"procRole" : "Background",
"version" : 2,
"userID" : 501,
"deployVersion" : 210,
"modelCode" : "MacBookPro13,3",
"coalitionID" : 21741,
"osVersion" : {
"train" : "macOS 15.0",
"build" : "24A335",
"releaseType" : "User"
},
"captureTime" : "2024-10-02 20:56:16.5984 -0700",
"codeSigningMonitor" : 0,
"incident" : "F976F85F-975D-4541-9CFF-B5F31324A1A4",
"pid" : 6624,
"cpuType" : "X86-64",
"roots_installed" : 0,
"bug_type" : "309",
"procLaunch" : "2024-10-02 20:55:49.4387 -0700",
"procStartAbsTime" : 111254245675013,
"procExitAbsTime" : 111281315835661,
"procName" : "Godot",
"procPath" : "\/Applications\/Godot.app\/Contents\/MacOS\/Godot",
"bundleInfo" : {"CFBundleShortVersionString":"4.3","CFBundleVersion":"4.3","CFBundleIdentifier":"org.godotengine.godot"},
"storeInfo" : {"deviceIdentifierForVendor":"60FDEC9C-C65E-54E7-ABD0-6BDF8A56D667","thirdParty":true},
"parentProc" : "launchd",
"parentPid" : 1,
"coalitionName" : "org.godotengine.godot",
"crashReporterKey" : "D92644F9-11C0-552A-8951-FFBC1964841A",
"codeSigningID" : "org.godotengine.godot",
"codeSigningTeamID" : "6K46PWY5DM",
"codeSigningFlags" : 570490881,
"codeSigningValidationCategory" : 6,
"codeSigningTrustLevel" : 4294967295,
"bootSessionUUID" : "49C3DABC-5405-4E3E-8FFB-1F47FB941041",
"wakeTime" : 1811,
"bridgeVersion" : {"build":"14Y910","train":"3.0"},
"sleepWakeUUID" : "6165F556-7975-4FA7-931A-DE03D255B921",
"sip" : "enabled",
"exception" : {"codes":"0x0000000000000000, 0x0000000000000000","rawCodes":[0,0],"type":"EXC_CRASH","signal":"SIGABRT"},
"termination" : {"flags":0,"code":6,"namespace":"SIGNAL","indicator":"Abort trap: 6","byProc":"Godot","byPid":6624},
"ktriageinfo" : "VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter\nVM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter\nVM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter\nVM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter\nVM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter\n",
"extMods" : {"caller":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"system":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"targeted":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"warnings":0},
"faultingThread" : 0,
"usedImages" : [
{
"source" : "P",
"arch" : "x86_64",
"base" : 4451569664,
"CFBundleShortVersionString" : "4.3",
"CFBundleIdentifier" : "org.godotengine.godot",
"size" : 111845376,
"uuid" : "27f56adc-28ac-3d00-bc8a-2a457e82a447",
"path" : "\/Applications\/Godot.app\/Contents\/MacOS\/Godot",
"name" : "Godot",
"CFBundleVersion" : "4.3"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4583526400,
"CFBundleShortVersionString" : "1.0",
"CFBundleIdentifier" : "com.apple.CoreWiFi",
"size" : 53248,
"uuid" : "c356ba94-c112-3326-9011-74fc0bb4dfde",
"path" : "\/System\/Library\/PrivateFrameworks\/CoreWiFi.framework\/Versions\/A\/CoreWiFi",
"name" : "CoreWiFi",
"CFBundleVersion" : "802.64"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4586127360,
"size" : 1687552,
"uuid" : "d5ce0007-36ac-37a1-a5d7-07ad7365e60a",
"path" : "\/System\/Library\/PrivateFrameworks\/CoreWiFi.framework\/Versions\/A\/CoreWiFiOld.dylib",
"name" : "CoreWiFiOld.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4583391232,
"CFBundleShortVersionString" : "1.0",
"CFBundleIdentifier" : "com.apple.IO80211",
"size" : 8192,
"uuid" : "c0791260-3db4-30e8-962c-1d7a90bbe2d5",
"path" : "\/System\/Library\/PrivateFrameworks\/IO80211.framework\/Versions\/A\/IO80211",
"name" : "IO80211",
"CFBundleVersion" : "1"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4583436288,
"CFBundleShortVersionString" : "652.55.0",
"CFBundleIdentifier" : "com.apple.WiFiPeerToPeer",
"size" : 4096,
"uuid" : "6de18c87-b584-3fba-b9f2-5439ee94f3f7",
"path" : "\/System\/Library\/PrivateFrameworks\/WiFiPeerToPeer.framework\/Versions\/A\/WiFiPeerToPeer",
"name" : "WiFiPeerToPeer",
"CFBundleVersion" : "652.55"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 6352297984,
"size" : 344064,
"uuid" : "d5ce0007-f5f4-3ab8-a256-05df5202823b",
"path" : "\/System\/Library\/PrivateFrameworks\/IO80211.framework\/Versions\/A\/IO80211Old.dylib",
"name" : "IO80211Old.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 8226549760,
"size" : 163840,
"uuid" : "d5ce0007-9684-3a09-bf78-fb3a4f8bb7b1",
"path" : "\/System\/Library\/PrivateFrameworks\/WiFiPeerToPeer.framework\/Versions\/A\/WiFiPeerToPeerOld.dylib",
"name" : "WiFiPeerToPeerOld.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4583710720,
"CFBundleShortVersionString" : "1.0",
"CFBundleIdentifier" : "com.apple.CoreAuthentication.SharedUtils",
"size" : 12288,
"uuid" : "79df8b5d-5ed4-370d-929f-4ba9757d729d",
"path" : "\/System\/Library\/Frameworks\/LocalAuthentication.framework\/Support\/SharedUtils.framework\/Versions\/A\/SharedUtils",
"name" : "SharedUtils",
"CFBundleVersion" : "1656.0.99"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 10223472640,
"size" : 188416,
"uuid" : "d5ce0007-348f-3aeb-8b50-807e45ccc880",
"path" : "\/System\/Library\/Frameworks\/LocalAuthentication.framework\/Support\/SharedUtils.framework\/Versions\/A\/SharedUtilsOld.dylib",
"name" : "SharedUtilsOld.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4583477248,
"CFBundleShortVersionString" : "16.0",
"CFBundleIdentifier" : "com.apple.framework.CoreWLAN",
"size" : 4096,
"uuid" : "cf40012c-37ff-356e-863f-24c150978974",
"path" : "\/System\/Library\/Frameworks\/CoreWLAN.framework\/Versions\/A\/CoreWLAN",
"name" : "CoreWLAN",
"CFBundleVersion" : "1657"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 12012052480,
"size" : 503808,
"uuid" : "d5ce0007-0117-3ddb-beb8-5886a5f8b758",
"path" : "\/System\/Library\/Frameworks\/CoreWLAN.framework\/Versions\/A\/CoreWLANOld.dylib",
"name" : "CoreWLANOld.dylib"
},
{
"source" : "P",
"arch" : "x86_64h",
"base" : 14159331328,
"size" : 53248,
"uuid" : "a732c7f4-a3c1-39e5-9fc3-5e1deb73a584",
"path" : "\/usr\/lib\/libobjc-trampolines.dylib",
"name" : "libobjc-trampolines.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 14173605888,
"CFBundleShortVersionString" : "18.8.4",
"CFBundleIdentifier" : "com.apple.driver.AppleIntelSKLGraphicsMTLDriver",
"size" : 5505024,
"uuid" : "d5cf0007-2d61-3ea5-8a1d-d9a27a05d402",
"path" : "\/System\/Library\/Extensions\/AppleIntelSKLGraphicsMTLDriver.bundle\/Contents\/MacOS\/AppleIntelSKLGraphicsMTLDriver",
"name" : "AppleIntelSKLGraphicsMTLDriver",
"CFBundleVersion" : "18.0.8"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 14159204352,
"size" : 12288,
"uuid" : "fa8fefc5-c515-3c33-9cb5-10763f8e4fe9",
"path" : "\/System\/Library\/Extensions\/AppleIntelSKLGraphicsMTLDriver.bundle\/Contents\/MacOS\/impostor.dylib",
"name" : "impostor.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 16051417088,
"CFBundleShortVersionString" : "4.8.101",
"CFBundleIdentifier" : "com.apple.AMDMTLBronzeDriver",
"size" : 2473984,
"uuid" : "d5ce0007-6797-39a6-a03b-96e9e9b5c574",
"path" : "\/System\/Library\/Extensions\/AMDMTLBronzeDriver.bundle\/Contents\/MacOS\/AMDMTLBronzeDriver",
"name" : "AMDMTLBronzeDriver",
"CFBundleVersion" : "4.0.8"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 14159257600,
"size" : 12288,
"uuid" : "d35343c1-da83-3c5d-a69f-bc8d1afb8cad",
"path" : "\/System\/Library\/Extensions\/AMDMTLBronzeDriver.bundle\/Contents\/MacOS\/impostor.dylib",
"name" : "impostor.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 17735872512,
"CFBundleIdentifier" : "com.apple.GameController.KeyboardAndMouseSupport",
"size" : 32768,
"uuid" : "49b98cfe-2ce0-3257-a2da-e55bd3c7010b",
"path" : "\/System\/Library\/Frameworks\/GameController.framework\/Versions\/A\/Resources\/KeyboardAndMouseSupport.bundle\/Contents\/MacOS\/KeyboardAndMouseSupport",
"name" : "KeyboardAndMouseSupport",
"CFBundleVersion" : "12.0.37"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 17842032640,
"CFBundleShortVersionString" : "1.14",
"CFBundleIdentifier" : "com.apple.audio.units.Components",
"size" : 1331200,
"uuid" : "4a933320-cc7d-3903-9407-a83122184e76",
"path" : "\/System\/Library\/Components\/CoreAudio.component\/Contents\/MacOS\/CoreAudio",
"name" : "CoreAudio",
"CFBundleVersion" : "1.14"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 17839259648,
"CFBundleShortVersionString" : "600.2",
"CFBundleIdentifier" : "com.apple.audio.AppleHDAHALPlugIn",
"size" : 20480,
"uuid" : "f4bce0db-6511-31df-8726-442eeeffb04c",
"path" : "\/System\/Library\/Extensions\/AppleHDA.kext\/Contents\/PlugIns\/AppleHDAHALPlugIn.bundle\/Contents\/MacOS\/AppleHDAHALPlugIn",
"name" : "AppleHDAHALPlugIn",
"CFBundleVersion" : "600.2"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 17882144768,
"CFBundleShortVersionString" : "810.0",
"CFBundleIdentifier" : "com.apple.cmio.DAL.VDC-4",
"size" : 274432,
"uuid" : "1ac83e82-d960-399f-bc3a-0c5d57b63e60",
"path" : "\/System\/Library\/Frameworks\/CoreMediaIO.framework\/Versions\/A\/Resources\/VDC.plugin\/Contents\/MacOS\/VDC",
"name" : "VDC",
"CFBundleVersion" : "449"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703592615936,
"size" : 245760,
"uuid" : "a0aee5ca-4298-3070-82f9-ea72229f36e5",
"path" : "\/usr\/lib\/system\/libsystem_kernel.dylib",
"name" : "libsystem_kernel.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703592861696,
"size" : 49152,
"uuid" : "c0db9cf9-86ec-31d4-a557-2c07945fd8f2",
"path" : "\/usr\/lib\/system\/libsystem_pthread.dylib",
"name" : "libsystem_pthread.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703591444480,
"size" : 561144,
"uuid" : "2d4e63ef-e31c-3cc1-94ec-2b7e28b9782f",
"path" : "\/usr\/lib\/system\/libsystem_c.dylib",
"name" : "libsystem_c.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703777853440,
"CFBundleShortVersionString" : "366.22",
"CFBundleIdentifier" : "com.apple.Metal",
"size" : 2600952,
"uuid" : "e40222b9-271d-34e3-890e-8c8c555a81a9",
"path" : "\/System\/Library\/Frameworks\/Metal.framework\/Versions\/A\/Metal",
"name" : "Metal",
"CFBundleVersion" : "366.22"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703589134336,
"size" : 574256,
"uuid" : "e6056c94-fc2d-3517-b1e1-46d8eb58a10e",
"path" : "\/usr\/lib\/dyld",
"name" : "dyld"
},
{
"size" : 0,
"source" : "A",
"base" : 0,
"uuid" : "00000000-0000-0000-0000-000000000000"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703592005632,
"size" : 511996,
"uuid" : "e35e82f9-4037-35da-99f0-4d09be1d9721",
"path" : "\/usr\/lib\/libc++.1.dylib",
"name" : "libc++.1.dylib"
},
{
"source" : "P",
"arch" : "x86_64h",
"base" : 140703593316352,
"CFBundleShortVersionString" : "6.9",
"CFBundleIdentifier" : "com.apple.CoreFoundation",
"size" : 4849651,
"uuid" : "a7324227-eb88-3393-8efe-10a9f3d28064",
"path" : "\/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation",
"name" : "CoreFoundation",
"CFBundleVersion" : "3038.1.402"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703653539840,
"CFBundleShortVersionString" : "6.9",
"CFBundleIdentifier" : "com.apple.AppKit",
"size" : 21606398,
"uuid" : "55408426-52c7-3b83-9097-0a12aa2620e1",
"path" : "\/System\/Library\/Frameworks\/AppKit.framework\/Versions\/C\/AppKit",
"name" : "AppKit",
"CFBundleVersion" : "2566"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703780454400,
"CFBundleShortVersionString" : "1.0",
"CFBundleIdentifier" : "com.apple.audio.caulk",
"size" : 147456,
"uuid" : "09fe8d44-b9ca-35bf-ab5b-ddb42f62fee0",
"path" : "\/System\/Library\/PrivateFrameworks\/caulk.framework\/Versions\/A\/caulk",
"name" : "caulk"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703634374656,
"CFBundleShortVersionString" : "5.0",
"CFBundleIdentifier" : "com.apple.audio.CoreAudio",
"size" : 7598080,
"uuid" : "d3a27106-6ae2-354d-9402-56d6e05f8589",
"path" : "\/System\/Library\/Frameworks\/CoreAudio.framework\/Versions\/A\/CoreAudio",
"name" : "CoreAudio",
"CFBundleVersion" : "5.0"
}
],
"sharedCache" : {
"base" : 140703588384768,
"size" : 25769803776,
"uuid" : "78aaaa52-08a2-311d-a934-9c187f804833"
},
"vmSummary" : "ReadOnly portion of Libraries: Total=903.9M resident=0K(0%) swapped_out_or_unallocated=903.9M(100%)\nWritable regions: Total=14.7G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=14.7G(100%)\n\n VIRTUAL REGION \nREGION TYPE SIZE COUNT (non-coalesced) \n=========== ======= ======= \nAccelerate framework 128K 1 \nActivity Tracing 256K 1 \nCG image 168K 9 \nColorSync 232K 26 \nCoreAnimation 332K 39 \nCoreGraphics 12K 2 \nCoreGraphics (reserved) 8K 1 reserved VM address space (unallocated)\nCoreUI image data 1360K 10 \nFoundation 16K 1 \nKernel Alloc Once 8K 1 \nMALLOC 2.6G 748 \nMALLOC guard page 48K 12 \nSTACK GUARD 56.1M 23 \nStack 19.2M 24 \nVM_ALLOCATE 96.1M 20 \nVM_ALLOCATE (reserved) 11.9G 18 reserved VM address space (unallocated)\n__CTF 824 1 \n__DATA 21.8M 625 \n__DATA_CONST 59.0M 622 \n__DATA_DIRTY 1476K 211 \n__FONT_DATA 2352 1 \n__LINKEDIT 199.3M 22 \n__OBJC_RO 76.1M 1 \n__OBJC_RW 2354K 2 \n__TEXT 704.6M 646 \n__TPRO_CONST 272K 2 \ndsce.got 124K 1 \ndyld private memory 1408K 3 \nmapped file 197.8M 45 \nowned unmapped memory 424K 1 \nshared memory 1408K 22 \n=========== ======= ======= \nTOTAL 16.0G 3141 \nTOTAL, minus reserved VM space 4.1G 3141 \n",
"legacyInfo" : {
"threadTriggered" : {
"queue" : "com.apple.main-thread"
}
},
"logWritingSignature" : "636946447f4dece4e7b7da5cad4239a8443648ce",
"trialInfo" : {
"rollouts" : [
{
"rolloutId" : "654d8c0661e7447155256fcd",
"factorPackIds" : {
"SIRI_TEXT_TO_SPEECH" : "66c3e0ec54da772bbdc18016"
},
"deploymentId" : 240000169
}
],
"experiments" : [
]
}
}
```
</details>
### Steps to reproduce
Create new "mobile" project or open existing
### Minimal reproduction project (MRP)
```
; Engine configuration file.
; It's best edited using the editor UI and not directly,
; since the parameters that go here are not all obvious.
;
; Format:
; [section] ; section goes between []
; param=value ; assign values to parameters
config_version=5
[application]
config/name="mobile"
config/features=PackedStringArray("4.4", "Mobile")
config/icon="res://icon.svg"
[rendering]
renderer/rendering_method="mobile"
``` | bug,platform:macos,topic:rendering,needs testing,crash | low | Critical |
2,566,032,440 | rust | Compile time regression with the new trait solver and diesel | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code: https://github.com/diesel-rs/diesel/commit/381be195688db339fe2927e49bc818ab86754dd9 with both `RUSTFLAGS=-Znext-solver=globally` set and not set
I expected to see this happen: Code compiles similarly fast in both configurations
Instead, this happened: with `RUSTFLAGS=-Znext-solver=globally` the code takes over 10 times as long to compile. For me the compiler takes ~10s to compile diesel from scratch with the old trait solver; the same operation takes 1:53 min, over 10 times what the old trait solver needed.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (9ff5fc4ff 2024-10-03)
binary: rustc
commit-hash: 9ff5fc4ffbbe1e911527aa054e789b05ae55ffcc
commit-date: 2024-10-03
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
@rustbot label +I-compiletime +WG-trait-system-refactor | I-compiletime,T-compiler,C-bug,T-types,WG-trait-system-refactor | low | Critical |
2,566,099,996 | deno | node:http2 Http2Stream.close does not send RST_STREAM | https://github.com/denoland/deno/blob/7b509e492ed6c7ace0f3860c3f4e8e7be3452fda/ext/node/polyfills/http2.ts#L741-L744
deno 2.0.0-rc.10
```ts
let server = (await import('node:http2')).createServer(); server.on('stream', s => s.close(1)); server.listen(8888);
```
https://nodejs.org/api/http2.html#http2streamclosecode-callback
> Closes the Http2Stream instance by sending an RST_STREAM frame to the connected HTTP/2 peer.
| bug,node compat | low | Minor |
2,566,103,070 | godot | 4.4 Dev 3. Inbuilt functions that return dictionaries do not return a correct fully typed dictionary. | ### Tested versions
Reproducible in v4.4.dev3.official [f4af8201b]
### System information
Godot v4.4.dev3 - Windows 10.0.22631 - Single-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (NVIDIA; 32.0.15.6109) - 13th Gen Intel(R) Core(TM) i7-13700K (24 threads)
### Issue description
I am sure maintainers may already be aware, but currently built-in functions such as `get_datetime_dict_from_system` do not return a correctly typed dictionary; instead they return `Dictionary[Variant, Variant]`, when I believe this one in particular should be `Dictionary[String, Variant]`.
### Steps to reproduce
Try and do something like:
`var time: Dictionary[String, Variant] = Time.get_datetime_dict_from_system()`.
### Minimal reproduction project (MRP)
N/A | enhancement,discussion,topic:core,topic:gdscript,breaks compat | low | Major |
2,566,211,807 | react-native | PanResponder onMoveShouldSetPanResponder delta not reset | ### Description
The delta is not reset between gestures when `onMoveShouldSetPanResponder` returned `false`
```jsx
PanResponder.create({
onMoveShouldSetPanResponder(e, gesture) {
const {dx, dy} = gesture
return false
},
})
```
That seems to be because none of the other callbacks are called, so the inner state's initial position isn't reset.
I'm currently fixing this with
```js
function PanProvider({children}) {
const capturedPositionRef = useRef(null);
return <View onMoveShouldSetResponderCapture={({nativeEvent:{pageX, pageY}}) => {
// don't do anything, just capture the initial position
capturedPositionRef.current = {pageX, pageY}
return false
}}>
<PanContext.Provider value={capturedPositionRef}>{children}</PanContext.Provider>
</View>
}
```
```jsx
export default function usePanResponderLock(direction: 'horizontal' | 'vertical') {
// uses the provider initial position
const captureGestureRef = usePanResponder();
return useRef(
PanResponder.create({
onMoveShouldSetPanResponder({ nativeEvent: { pageX, pageY } }) {
if (captureGestureRef.current) {
const { pageX: startX, pageY: startY } = captureGestureRef.current;
const dx = Math.abs(pageX - startX);
const dy = Math.abs(pageY - startY);
const dragging = dx > 2 || dy > 2;
if (!dragging) return false;
return direction === 'horizontal' ? dx > dy : dy > dx;
}
return false;
},
onPanResponderTerminationRequest() {
return true;
},
}),
).current;
}
```
This is working well, but the delta should be fixed in RN
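For clarity, the direction-lock logic in the workaround can be modeled in plain Python (purely illustrative, not React Native code; the function name is hypothetical). The point is that deltas are measured from the gesture's own captured start point, rather than from whatever stale position the previous gesture left behind.

```python
def should_set_responder(start, current, direction, slop=2):
    """Mirror of onMoveShouldSetPanResponder in the workaround above:
    claim the gesture only once it moves past the slop, and only when
    the movement is dominantly along the locked axis."""
    dx = abs(current[0] - start[0])
    dy = abs(current[1] - start[1])
    if dx <= slop and dy <= slop:  # not dragging yet
        return False
    return dx > dy if direction == "horizontal" else dy > dx

# With a fresh start point, a vertical gesture never claims a
# horizontal responder, which is exactly what the stale delta breaks.
```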
### Steps to reproduce
Try looking for a horizontal gesture with `dx > dy`:
- from the top left corner, make a **vertical** gesture => delta's are correct, view is not a responder.
- make a new **horizontal** gesture from bottom left corner => delta y is the distance with the previous touch and delta x too => `dy > dx` even though it's a horizontal gesture
### React Native Version
0.73.8
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (12) arm64 Apple M2 Max
Memory: 277.72 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.9.0
path: ~/.nvm/versions/node/v20.9.0/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 10.1.0
path: ~/.nvm/versions/node/v20.9.0/bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.13.0
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.4
- iOS 17.4
- macOS 14.4
- tvOS 17.4
- visionOS 1.1
- watchOS 10.4
Android SDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10811636
Xcode:
version: 15.3/15E204a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 3.3.0
path: /Users/gregoirevda/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
No stacktrace error.
```
### Reproducer
https://snack.expo.dev/@gregoirevda/bad-orange-popsicle
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,API: PanResponder,Newer Patch Available,Needs: Attention | low | Critical |
2,566,242,682 | transformers | Automatic dynamic batch size selection for DataCollatorWithFlattening | ### Feature request
Add a custom (batch index) sampler to automatically determine batch size to a fixed target number of tokens.
### Motivation
I'm keen to try out DataCollatorWithFlattening but unsure about how to set batch size, since no padding will be added so the total number of tokens is dynamic.
I'm also uncertain whether fixing the total number of tokens is itself optimal. Does optimal memory allocation require accounting for the amount of attention masking that will be applied to the batch?
Is there any recommendation on how to handle this currently?
(Edit: seems like near-optimal solution for map-style datasets is provided by https://github.com/imoneoi/multipack_sampler/tree/master, which presumably just tries to ensure all batches are as full as possible given some max number of tokens. It would be nice to support similar functionality for Iterable Datasets - not optimal packing, but adjusting batch size to adapt to number of tokens in examples should be possible)
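For illustration, a minimal greedy token-budget batcher along these lines (a sketch, not part of transformers; the function and its name are hypothetical). Because it consumes lengths as a stream, the same idea would also work for iterable datasets:

```python
def token_budget_batches(lengths, max_tokens):
    """Greedily group example indices so each batch's summed length
    stays within max_tokens. With DataCollatorWithFlattening no padding
    is added, so the sum of sequence lengths is the true token count."""
    batch, total = [], 0
    for idx, n in enumerate(lengths):
        if batch and total + n > max_tokens:
            yield batch
            batch, total = [], 0
        batch.append(idx)
        total += n
    if batch:  # flush the final partial batch
        yield batch

batches = list(token_budget_batches([512, 300, 700, 128, 900], max_tokens=1024))
# batches == [[0, 1], [2, 3], [4]]
```

Unlike multipack_sampler this does no bin-packing across the whole dataset, just a rolling budget, which is the trade-off for supporting streams.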
### Your contribution
May be able to try to implement something for iterable datasets if this is possible. | Usage,Feature request | low | Major |
2,566,249,338 | go | x/text/encoding/charmap: support for code page `1125` aka `cp866u` | ### Proposal Details
๐ Hey team,
I'm encountering issues with the [charmap](https://cs.opensource.google/go/x/text/+/refs/tags/v0.18.0:encoding/charmap/) package when attempting to decode/encode using code page [cp1125](https://github.com/unicode-org/icu-data/blob/main/charset/data/ucm/glibc-CP1125-2.3.3.ucm). This encoding is missing. Given that some of our Ukrainian projects rely on CP1125, especially when interacting with legacy systems, addressing this would be greatly beneficial.
[IBM CP1125](https://public.dhe.ibm.com/software/globalization/gcoc/attachments/CP01125.txt) (aka [cp866u](https://public.dhe.ibm.com/software/globalization/gcoc/attachments/CP01125.pdf)) is Ukrainian government standard (RST 2018-91) for DOS, based on common "alternative" encoding, but different from cp866 in [0xF2-0xF9](https://en.wikipedia.org/wiki/Code_page_866#Ukrainian_and_Belarusian_variants). It is known by GNU iconv as CP1125.
- https://public.dhe.ibm.com/software/globalization/gcoc/attachments/CP01125.pdf
- https://public.dhe.ibm.com/software/globalization/gcoc/attachments/CP01125.txt
- https://github.com/unicode-org/icu/blob/main/icu4c/source/data/mappings/ibm-1125_P100-1997.ucm
- https://en.wikipedia.org/wiki/Code_page_866#Ukrainian_and_Belarusian_variants
- https://segfault.kiev.ua/cyrillic-encodings/#ruscii

https://github.com/microsoft/vscode/issues/230438
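As a cross-check, CPython's standard library has shipped a `cp1125` codec since Python 3.4, which makes the divergence from `cp866` easy to verify (Python is used here only for illustration; the request itself is about Go's x/text):

```python
def divergent_bytes(enc_a, enc_b):
    """High-bit code points that decode differently under two
    single-byte encodings (both codecs define all 256 bytes)."""
    return [
        b for b in range(0x80, 0x100)
        if bytes([b]).decode(enc_a) != bytes([b]).decode(enc_b)
    ]

print([hex(b) for b in divergent_bytes("cp866", "cp1125")])
```

On CPython this reports exactly the 0xF2-0xF9 range described above, i.e. the eight slots holding the Ukrainian letters.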
I'd be happy to provide more details or assist with testing if needed.
Thank you for considering this request.
| NeedsFix | low | Minor |
2,566,285,102 | opencv | Windows get camera cv2.CAP_PROP_FPS incorrect value for low fps. | ### System Information
Self-built OpenCV 4.10 for Windows with Python.
### Detailed description
OpenCV for Windows reports an incorrect FPS if the camera was read (grabbed) before a low FPS was set.
### Steps to reproduce
```
from datetime import datetime,timedelta
import cv2
cap = cv2.VideoCapture(0)
cap.grab() # Remove this line to get correct real FPS
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
cap.set(cv2.CAP_PROP_FPS, 2) # Set low FPS
print("Real fps: ", cap.get(cv2.CAP_PROP_FPS))
for i in range(100):
cv2.waitKey(1)
cap.grab() # Grab a frame
now = datetime.now()
print(f'{now.strftime("%M:%S")} ')
```
Typical output:
<details>
<summary>Click to expand</summary>
```
python.exe : [ WARN:0@3.110] global cap_msmf.cpp:929 CvCapture_MSMF::initStream Failed to select stream 0
AB@>:0:1 7=0::1
+ python.exe .\test.py > out.txt 2>&1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: ([ WARN:0@3.110]...select stream 0:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
[ WARN:0@3.110] global cap_msmf.cpp:929 CvCapture_MSMF::initStream Failed to select stream 0
[ WARN:0@3.114] global cap_msmf.cpp:929 CvCapture_MSMF::initStream Failed to select stream 0
[ WARN:0@3.114] global cap_msmf.cpp:929 CvCapture_MSMF::initStream Failed to select stream 0
[ WARN:0@3.117] global cap_msmf.cpp:929 CvCapture_MSMF::initStream Failed to select stream 0
[ WARN:0@3.117] global cap_msmf.cpp:929 CvCapture_MSMF::initStream Failed to select stream 0
Real fps: 7.500001875000469
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:53
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:54
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:55
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
32:56
```
</details>
So each second the camera delivers 30 frames, while 7.5 is reported.
Removing the first cap.grab() call fixes this problem.
Seems, related to this: https://github.com/opencv/opencv/issues/24000
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: videoio(camera),platform: win32 | low | Critical |
2,566,304,850 | excalidraw | BUG - Elements created within bounds of a group automatically added to group | Not exactly sure if this is a bug or a feature - but frustrating nonetheless!
Having grouped some text and a rectangle, any text created within that group's space is automatically added to the group. Is this intentional? It means either i) additional elements that should not be grouped have to be created "on the side" and then dragged onto the scene, or ii) that the grouped objects have to be locked and then later unlocked.
https://github.com/user-attachments/assets/d6252d52-cfeb-4388-a022-9ac6d5a0caeb
| bug | low | Critical |
2,566,315,707 | bitcoin | Prioritize processing of peers based on their CPU usage | ### Please describe the feature you'd like to see added.
Currently, we process messages to/from all peers in a loop where every peer is processed once (has the same weight). The list of peers is shuffled before every loop.
Considering a scenario where we spend considerably more CPU time for some peers compared to others, does it make sense to de-prioritize CPU-hungry peers? This might be happening deliberately (a CPU DoS attack) or not.
For example: if we spent 5 CPU seconds to process each one of the peers `Bad` and `Demanding` and 1 CPU second to process peers `Normal` and `Light`. Then on the next loop, we can process just `Normal` and `Light` so they now account to 2 CPU seconds each and skip `Bad` and `Demanding`. Do a few loops like this until everybody is around 5 CPU seconds and then process all of them again.
I am not sure how much sense this makes, but at least it seems worthy of brainstorming.
### Is your feature related to a problem, if so please describe it.
https://github.com/bitcoin/bitcoin/pull/30572 aims to address a problem of peers sending a lot of costly-to-validate-but-invalid transactions in an attempt to CPU DoS a node. To me it seems that whether those transactions are requested by the victim or are sent unsolicited is secondary. More importantly, willingly or not, some peers are eating the CPU and some not, so this is a broader issue.
### Describe the solution you'd like
Aim to spend approximately the same amount of CPU time for every peer. Or, within some reasonable margin, e.g. if the difference between the lightest and the heaviest peer is more than 10x, then trigger some protective mechanism.
### Describe any alternatives you've considered
Drop unsolicited transactions. IMO that would not really resolve the DoS.
### Please leave any additional context
I got this idea while reading https://delvingbitcoin.org/t/cpu-usage-of-peers/196/2 | Feature,Brainstorming | low | Major |
2,566,345,798 | yt-dlp | Support for ctc.ru | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Russia
### Example URLs
Single video: https://ctc.ru/projects/serials/molodezhka/video/1-sezon/1-serija/
Playlist: https://ctc.ru/projects/serials/molodezhka/video/1-sezon/
### Provide a description that is worded well enough to be understood
ctc.ru seems to be unsupported by yt-dlp.
The provided links involve unprotected m3u8 streams; they're accessible by inspecting the web pages.
Nevertheless, the ability to download series with the titles provided by the site, without having to write them manually, would be more convenient.
Some other series are accessible only with a subscription, but I do not have one.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://ctc.ru/projects/serials/molodezhka/video/1-sezon/1-serija/']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.09.27 from yt-dlp/yt-dlp [c6387abc1] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.1.1, ffprobe 2023-09-07-git-9c9f48e7f2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.09.27 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.09.27 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://ctc.ru/projects/serials/molodezhka/video/1-sezon/1-serija/
[generic] 1-serija: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 1-serija: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://ctc.ru/projects/serials/molodezhka/video/1-sezon/1-serija/
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1761, in __extract_info
File "yt_dlp\extractor\common.py", line 741, in extract
File "yt_dlp\extractor\generic.py", line 2526, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://ctc.ru/projects/serials/molodezhka/video/1-sezon/1-serija/
```
| site-request,triage | low | Critical |
2,566,363,003 | next.js | headers function get method only return full referrer url on page refresh and not on page visited through Link component. | ### Link to the code that reproduces this issue
https://github.com/Rajesh-Poojari-Dmart/nextIssue
### To Reproduce
## Step 1
Add the following `generateMetadata` function in a couple of dynamic page files that are linked to each other using the Link component:
```javascript
export async function generateMetadata() {
const headersList = headers();
const refererUrl = headersList.get("referer") || null;
return {
openGraph: {
url: refererUrl
},
alternates: {
canonical: refererUrl,
},
};
}
```
## Step 2
Suppose you navigate from the homepage "/" to some dynamic URL "/products/{productName}" using the Link component. The above generateMetadata function sets the Open Graph and canonical URL to "http://localhost:3000/" in the head tag of the /products/{productName} page, but when I refresh /products/{productName}, generateMetadata sets "http://localhost:3000/products/{productName}" as the Open Graph and canonical URL.
### Current vs. Expected behavior
### Current behaviour
The headers function's get method does not return the full referrer URL inside generateMetadata on pages visited through the Link component.
### Expected behaviour
The headers function should return the full referrer URL inside generateMetadata on pages visited through the Link component.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 21.6.0: Mon Dec 19 20:46:01 PST 2022; root:xnu-8020.240.18~2/RELEASE_ARM64_T8101
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 20.9.0
npm: 10.1.0
Yarn: 1.22.21
pnpm: N/A
Relevant Packages:
next: 14.2.14 // Latest available version is detected (14.2.14).
eslint-config-next: 14.2.14
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Navigation | low | Minor |
2,566,379,223 | react-native | iOS: Singleline TextInput auto grow: text gets cut while typing | ### Description
When having a single-line `<TextInput />` it will automatically grow its width by the length of text entered.
However, it can be observed that there is a slight flickering while entering text.
When examined frame by frame you can see that the text input gets cut.
This doesn't happen in a clean iOS app using UIKit and UITextView. My expectation is that, since we are using UITextView for `<TextInput />` on iOS, it should not happen in React Native either.
### Steps to reproduce
1. Open the reproducer snack
2. Run on your iPhone or simulator
3. Record your screen
4. Rapidly type text
5. You might notice a slight feel of flickering while typing
6. In the screen recording, if you step frame by frame, you may notice that while the text is entered the text input gets cut
### React Native Version
0.75.4
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0
CPU: (12) arm64 Apple M2 Pro
Memory: 147.94 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.17.0
path: ~/.nvm/versions/node/v20.17.0/bin/node
Yarn:
version: 3.6.4
path: ~/.nvm/versions/node/v20.15.1/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v20.17.0/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.14.3
path: /Users/hannomargelo/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "28"
- "30"
- "31"
- "32"
- "33"
- "33"
- "34"
Build Tools:
- 28.0.3
- 30.0.2
- 30.0.3
- 31.0.0
- 33.0.0
- 33.0.1
- 33.0.2
- 34.0.0
- 35.0.0
System Images:
- android-32 | Google APIs ARM 64 v8a
- android-33 | Wear OS 4 ARM 64 v8a
- android-33 | Google APIs ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
- android-33 | Google APIs ATD ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
- android-34 | Google APIs ATD ARM 64
Android NDK: 26.1.10909125
IDEs:
Android Studio: 2024.1 AI-241.19072.14.2412.12360217
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.7
path: /Users/hannomargelo/.jenv/shims/javac
Ruby:
version: 2.7.6
path: /Users/hannomargelo/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.4
wanted: 0.75.4
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
No relevant logs.
```
### Reproducer
https://snack.expo.dev/@hannojg/rntextinput-autogrow-issue
### Screenshots and Videos
| ❌ React Native TextInput | ✅ UIKit UITextView |
|--------|--------|
| <video src="https://github.com/user-attachments/assets/11d8eca0-a5dc-448f-adbc-4a14697f0e27" /> | <video src="https://github.com/user-attachments/assets/09e35955-2b32-4331-9a4b-340126381519" /> | | Component: TextInput,Needs: Triage :mag: | low | Major |
2,566,404,258 | tensorflow | NotImplementedError from tf.constant in trivial case | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.16.1
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.10.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Trying to make a tensor that has the same value for all items in the batch; see the following bare-minimum code.
I get `NotImplementedError: cannot convert a symbolic tf.Tensor (custom_model_5_1/strided_slice:0) to a numpy array.`
I am not trying to use numpy, this is an internal error.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
import keras
import numpy as np
class CustomModel(keras.models.Model):
def call(self, inputs):
inputs_shape = tf.shape(inputs)
return tf.constant(3.0, shape=(inputs_shape[0], 1), dtype=inputs.dtype) # NotImplementedError
#return 3.0 * tf.ones(shape=(inputs_shape[0], 1), dtype=inputs.dtype) # OK
model = CustomModel()
model.compile(run_eagerly=False, loss="mse") # OK if run_eagerly=True
model.fit(np.array([[0.0]]), np.array([[0.0]]))
```
### Relevant log output
```shell
{
"name": "NotImplementedError",
"message": "Exception encountered when calling CustomModel.call().
Cannot convert a symbolic tf.Tensor (custom_model_5_1/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.
Arguments received by CustomModel.call():
โข inputs=tf.Tensor(shape=(None, 1), dtype=float32)",
"stack": "---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[6], line 13
11 model = CustomModel()
12 model.compile(run_eagerly=False, loss=\"mse\") # OK if run_eagerly=True
---> 13 model.fit(np.array([[0.0]]), np.array([[0.0]]))
File /usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
Cell In[6], line 8, in CustomModel.call(self, inputs)
6 def call(self, inputs):
7 inputs_shape = tf.shape(inputs)
----> 8 return tf.constant(3.0, shape=(inputs_shape[0], 1), dtype=inputs.dtype)
File /usr/local/lib/python3.10/dist-packages/numpy/core/fromnumeric.py:3100, in prod(a, axis, dtype, out, keepdims, initial, where)
2979 @array_function_dispatch(_prod_dispatcher)
2980 def prod(a, axis=None, dtype=None, out=None, keepdims=np._NoValue,
2981 initial=np._NoValue, where=np._NoValue):
2982 \"\"\"
2983 Return the product of array elements over a given axis.
2984
(...)
3098 10
3099 \"\"\"
-> 3100 return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
3101 keepdims=keepdims, initial=initial, where=where)
File /usr/local/lib/python3.10/dist-packages/numpy/core/fromnumeric.py:88, in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
85 else:
86 return reduction(axis=axis, out=out, **passkwargs)
---> 88 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
NotImplementedError: Exception encountered when calling CustomModel.call().
Cannot convert a symbolic tf.Tensor (custom_model_5_1/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.
Arguments received by CustomModel.call():
โข inputs=tf.Tensor(shape=(None, 1), dtype=float32)"
}
```
| type:bug,TF 2.16 | low | Critical |
2,566,416,675 | PowerToys | Keyboard stops working on remote machine. Mouse continues to work. | ### Microsoft PowerToys version
0.85.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
When trying to control the mouse and keyboard on my remote laptop, the mouse will work but not the keyboard. Happens when I am just starting to use it for the day.
### ✔️ Expected Behavior
Keyboard and mouse should work on the remote laptop.
### ❌ Actual Behavior
Only the mouse works but not the keyboard.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,566,436,247 | excalidraw | Paste Event doesn't get invoked within Shadow Root | Thank you for your work.
The issue I'm seeing is that the paste event doesn't get recognized when I use excalidraw within a shadow root. There is probably an easy fix: I assume that if the event target here were set to the correct shadow root, everything would work well: https://github.com/excalidraw/excalidraw/blob/47ee8a00945793340bb20715b2f383a4c3da2139/packages/excalidraw/components/App.tsx#L2599
Here's a minimal repro: https://codesandbox.io/p/sandbox/empty-violet-68d58w?workspaceId=01e8ba10-da5e-408e-b38e-262278fa125d
(you'll have to use the browser dev tools because codesandbox doesn't show shadow roots in their embedded dev tools)
I'm happy to contribute a fix if you tell me to go ahead with it. Shadow root support has to be "hacked in" because styles are attached globally. What's the current strategy on this? Have you guys talked / decided on a strategy concerning general shadow root support? Will it help if I propose a fix?
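If it helps the discussion, here is a hedged sketch (not the actual App.tsx code — names are hypothetical) of resolving the effective target through the event's composed path, which is the standard way to recognize paste targets that live inside a shadow root:

```javascript
// Resolve the effective event target, piercing shadow DOM boundaries.
// composedPath() lists event targets from innermost to outermost, so its
// first entry is the real target even when it sits inside a shadow root.
function resolveTarget(event) {
  const path =
    typeof event.composedPath === "function" ? event.composedPath() : [];
  return path.length > 0 ? path[0] : event.target;
}

// Minimal stand-in event object for illustration purposes only
const fakeEvent = {
  target: "shadow-host", // what event.target reports from outside the root
  composedPath: () => ["inner-editor", "shadow-root", "shadow-host"],
};
console.log(resolveTarget(fakeEvent)); // "inner-editor"
```

A check like this at the spot linked above would keep the current behavior for regular documents while also matching targets inside a shadow root.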
Best Regards! | bug | low | Minor |
2,566,449,902 | transformers | DataCollatorWithFlattening is incompatible with non-list input ids | ### System Info
latest transformers
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import DataCollatorWithFlattening, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
collator = DataCollatorWithFlattening()

example = tokenizer("A test sentence", return_tensors="pt")
example = {k: v.flatten() for k, v in example.items()}
collator([example] * 2)
```
### Expected behavior
Collator should work with all output types supported by tokenizer. | Feature request,bug,HACKTOBERFEST-ACCEPTED | low | Minor |
2,566,454,087 | flutter | iOS: Ensure all delegates use weak pointers and eliminate manual nilling of pointers in [FlutterEngine/FlutterViewController dealloc] | Followup to migrating all iOS translation units to compilation with ARC.
https://github.com/flutter/engine/blob/205484009711c2f0d85a4c45666d187184493b41/shell/platform/darwin/ios/framework/Source/FlutterEngine.mm#L299-L321
and
https://github.com/flutter/engine/blob/d38f5e560a98f491fb1b3e24e19cc78d2b110b99/shell/platform/darwin/ios/framework/Source/FlutterViewController.mm#L979-L988 | engine,P2,c: tech-debt,team-ios,triaged-ios | low | Minor |
2,566,466,658 | TypeScript | `asserts this is T` does not narrow type of `this` | ### ๐ Search Terms
- `asserts this is T`
- `narrow`
- `assertions`
- I also checked #59707, but it is unrelated (generic types).
### ๐ Version & Regression Information
- Playground: `v5.7.0-dev.20241004`
- At least since `v5.6.2`
### โฏ Playground Link
https://www.typescriptlang.org/play/?#code/MYGwhgzhAECC0G8BQ1oHsB2ICeARApgGYCWG+AJgJIawAUAlIiqtMJhGiPgHQhoDmtAESwh9ANzMAvkmaQI+AE4AXOvQBc0eUuUxlAC2IwjcRNBkzZoedABCTVNpVrNT3dAMmT8BOaSXlbAAHfGgAYWgAXlMAHztJJEIAVwxgZWJMLSgdOmBNMI0shRUYYGhvJktk1PTM5XwIZQBGWjzwxmRUYG43NUku7kwcAhIyKhoGcWgAemnoAFFFRTRFTQAFZZCVbGgAciG8IlIKalhd6HI0BugMNGVofAAPI3u64NDdsN3uaA20LcCewOI2O4zOFyuMFu9yeL3QGA87z2tm+-lk1TSGQR9UaACZWvkOnJss5WhJmN1gUcxqdJjM5gB1FYAawgAEI0dAgA
### ๐ป Code
```ts repro
class A {
onlyDefinedInA() {
console.log("A");
}
assertA(): asserts this is A { }
}
class B {
assertA(): asserts this is A { }
}
type C = A | B;
function assertA(c: C): asserts c is A {
}
function test1(c: C) {
c.assertA();
c.onlyDefinedInA(); // Error: Property 'onlyDefinedInA' does not exist on type 'C'. Property 'onlyDefinedInA' does not exist on type 'B'.
}
function test2(c: C) {
assertA(c);
c.onlyDefinedInA(); // Works!
}
```
[Workbench Repro](https://www.typescriptlang.org/dev/bug-workbench/?#code/MYGwhgzhAECC0G8BQ1oHsB2ICeARApgGYCWG+AJgJIawAUAlIiqtMJhGiPgHQhoDmtAESwh9ANzMAvkmaQI+AE4AXOvQBc0eUuUxlAC2IwjcRNBkzZoedABCTVNpVrNT3dAMmT8BOaSXlbAAHfGgAYWgAXlMAHztJJEIAVwxgZWJMLSgdOmBNMI0shRUYYGhvJktk1PTM5XwIZQBGWjzwxmRUYG43NUku7kwcAhIyKhoGcWgAemnoAFFFRTRFTQAFZZCVbGgAciG8IlIKalhd6HI0BugMNGVofAAPI3u64NDdsN3uaA20LcCewOI2O4zOFyuMFu9yeL3QGA87z2tm+-lk1TSGQR9UaACZWvkOnJss5WhJmN1gUcxqdJjM5gB1FYAawgAEI0UA)
### ๐ Actual behavior
Type of `c` is not narrowed after the method call, producing an error.
### ๐ Expected behavior
I expect the member function to perform type narrowing.
### Additional information about the issue
_No response_ | Cursed?,Possible Improvement | low | Critical |
2,566,495,246 | transformers | Centralized Page for Domain Whitelisting (Hugging Face Models) | ### Feature request
Is it possible to create a dedicated page that lists all the domains where Hugging Face models, such as those stored on cdn-lfs.hf.co, are hosted? This page would serve as a reference for users working in corporate environments, where whitelisting of individual domains is required.
### Motivation
Many organizations, particularly those behind corporate networks, require that domains be whitelisted individually for security and access reasons. For users working in such environments, identifying and manually whitelisting the necessary domains for model access can be a cumbersome process. A centralized list of the domains Hugging Face uses to host and serve models would significantly streamline this process, ensuring seamless access to models across corporate networks without delays or security issues.
| Feature request | low | Major |
2,566,512,436 | angular | Certain hierarchies of animated components don't work | ### Which @angular/* package(s) are the source of the bug?
animations
### Is this a regression?
No
### Description
I have **MainComponent**, inside of it there are tabs (**Content1** and **Content2** components) represented via `@switch` statement.
Based on `activeTab` I render one of them.
**Mainะกomponent** is animated when opened/closed, so it has
```
host: { '[@mainComponentAnimation]': '' },
animations: [mainComponentAnimations]
```
`mainComponentAnimations` also contains animations for switching the tabs - `@contentAnimations`. These are used like this:
```
@switch (activeTab) {
@case (1) {
<app-content1 @contentAnimation></app-content1>
}
@case (2) {
<app-content2 @contentAnimation></app-content2>
}
}
```
**Content2** has its own content animations, so it has `animations: [...]` too.
**THE ISSUE:**
`:leave` animation of **Content2** (from `@contentAnimation`) is broken when both of these defined:
1. `host: { '[@mainComponentAnimation]': '' }` in **MainComponent**.
2. `animations: [...]` in **Content2**.
Meaning if you comment out any of those, it works fine.
Play around with the reproduction and see for yourself.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-hyvqbj
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.2.5
Node: 20.11.1
Package Manager: yarn 1.22.19
OS: win32 x64
Angular: 18.2.5
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, language-service, material, material-moment-adapter
... platform-browser, platform-browser-dynamic, platform-server
... router, service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.5
@angular-devkit/build-angular 18.2.5
@angular-devkit/core 18.2.5
@angular-devkit/schematics 18.2.5
@angular/fire 18.0.1
@schematics/angular 18.2.5
rxjs 7.8.1
typescript 5.5.4
webpack 5.94.0
zone.js 0.15.0
### Anything else?
_No response_ | area: animations | low | Critical |
2,566,513,776 | transformers | Major VLM tracker (standardize the API) | ### Feature request
This will track general plans on VLM and composite models so that we can align with work in TGI and other libraries. I already have some trackers so in this one I'll lay out a more bigger picture with links to respective discussions/topics
### Motivation
We already have pretty good working standards when it comes to language models, and when adding a new model a few "copy from" statements will usually do the work. We also cover most cases for LMs in our test suite. But for the wave of multimodal models we still lack any form of standardization or a uniform API. Each new model added to the library introduces something new that forces us to accept it as-is until we figure out how to handle it later.
So we need to try to standardize these models, starting with VLMs. VLMs are the most commonly added models at the moment, but we may have more audio+text or pure multimodal ones in the future. For now we start off by working on VLMs and see how things fit into the general API.
### Your contribution
The major changes we are working on and planning to work are:
- Standardization for Processors:
- We have ongoing work on uniform processor kwargs, which will help us enable pipelines for VLMs so we can have the correct automodel tag on the Hub. The work is in progress by @yonigozlan and @molbap
- In parallel, I will work on separating out video models under a new class (VideoProcessor) and handling a long deprecation cycle for the processing config files. At the end we should have a separate file/class for video processing that saves its params in its own config file. That will be tracked in https://github.com/huggingface/transformers/issues/33504, and there are discussions with Amy in the issues linked under it
- Standardization in terms of modeling code:
- One major thing was to get rid of the buggy `merge_embeds` method and cover VLMs with more generation-related tests, as we were getting many issues after every small change. Slow tests unfortunately don't cover everything and are not run every time a PR is merged. That is being tracked in https://github.com/huggingface/transformers/issues/33374
- Another major topic is setting the attention implementation for composite models (not only VLMs), which will fix red CI and add uniformity to how we work with composite models in general. After that PR we should enforce that each composite model has a separate PreTrainedConfig for each model backbone in its architecture. And each sub-config should be part of one major ModelConfig, which may hold specific attrs for the composite model only (not its sub-backbones). See https://github.com/huggingface/transformers/pull/32238
- Separate out the `get_image_features` method for all VLMs so we can have more modularity and probably make the code much cleaner. This was proposed by a community contributor, and I'll handle propagating the change to all models. See https://github.com/huggingface/transformers/pull/33696
- Standardization for chat templates:
- We can support `(tokenize=True, return_tensors="pt")` kwargs in the processor's apply_chat_template, so that the method returns already-vectorized outputs. Similar to tokenizers, the main point is to feed in a chat history and get tensor inputs ready for generation/training. The only difference is that users will have to explicitly add an image file/url or `ImageInput` so we can process it internally and turn it into `pixel_values`. Below is the general design. No work started yet; I am planning to make a PR some time in October
```python
messages = [
{
"role": "user",
"content": [
            {"type": "image", "image": {"url": "https://...."}},
{"type": "text", "text": "What do you see here?"},
]
},
{
"role": "assistant",
"content": [
{"type": "text", "text": "Stop sign [...]"},
]
},
{
"role": "user",
"content": [
            {"type": "image", "image": {"path": "my_image.png"}},
{"type": "text", "text": "What color is the cat?"},
]
},
]
```
- Standardization for tokenizers:
- We can have new special tokens added to the tokenizers if they are loaded from a VLM model repo. Currently I have a plan to add at least 3 new special tokens (image, boi and eoi), but given the wave of new models I might expand that list. I had a PR previously, but that was a very basic design (https://github.com/huggingface/transformers/pull/31967). Currently working on making `SpecialTokenMixin` more flexible so that we can simply change the class attribute `SPECIAL_TOKENS_ATTRIBUTES` and everything else will work out-of-the-box. That seems to me the easiest way to expand special tokens for multimodal cases without flooding simple language model tokenizers. | Discussion,WIP,Vision | low | Critical |
2,566,519,291 | godot | rigged models display strange scale and lose sync with skeletons when switching tabs | ### Tested versions
4.4 dev3
### System information
Godot v4.4.dev3 - Windows 10.0.22631 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 (NVIDIA; 32.0.15.6094) - 12th Gen Intel(R) Core(TM) i5-12400F (12 threads)
### Issue description
Rigged meshes with a skeleton change to an odd scale and lose sync with their skeletons when switching tabs in the editor. This is fixed when reloading the scene, but it appears to be a new issue with 4.4.
### Steps to reproduce
Set a rigged model (possibly requires an AnimationPlayer) to editable children, then switch scenes in the editor.
### Minimal reproduction project (MRP)

| bug,topic:3d | low | Minor |
2,566,530,691 | godot | Compilation failure with `lto=full` on Windows with MSVC `cl.exe`: LNK1248: image size exceeds maximum allowable size (FFFFFFFF) | ### Tested versions
master f032af74536b317b23c7fca3bc7318ced5537344
### System information
Windows 11 - Intel i5 9600k
### Issue description
Compilation randomly fails after a while:
```
[...]
Compiling scene\gui\box_container.cpp ...
Compiling scene\gui\button.cpp ...
Compiling scene\gui\center_container.cpp ...
Compiling scene\gui\check_box.cpp ...
editor\editor.windows.editor.x86_64.lib : fatal error LNK1248: image size (100764120) exceeds maximum allowable size (FFFFFFFF)
Compiling scene\gui\check_button.cpp ...
Compiling scene\gui\code_edit.cpp ...
Compiling scene\gui\color_mode.cpp ...
Compiling scene\gui\color_picker.cpp ...
Compiling scene\gui\color_rect.cpp ...
Compiling scene\gui\container.cpp ...
scons: *** [editor\editor.windows.editor.x86_64.lib] Error 1248
scons: building terminated because of errors.
[Time elapsed: 00:08:12.40]
```
### Steps to reproduce
clone godot
`scons p=windows arch=x86_64 production=yes lto=full target=editor optimize=speed deprecated=no`
### Minimal reproduction project (MRP)
- | bug,platform:windows,topic:buildsystem,confirmed,regression | low | Critical |
2,566,548,646 | PowerToys | More comfortable window resizing functionality | ### Description of the new feature / enhancement
It would be nice to have some feature(s) of NiftyWindows, which hasn't been maintained for several years.
E.g. (Extract from NiftyWindows help (https://ahkscript.github.io/NiftyWindows/features/)):
* **/RIGHT_BUTTON+DRAG/**
This is the most powerful feature of NiftyWindows. The area of
every window is tiled in a virtual 9-cell grid with three columns
and rows. The center cell is the largest one and you can grab and
move a window around by clicking and holding it with the right
mouse button. **The other eight corner cells are used to resize a
resizable window in the same manner.**
Note: This drag and resize feature is not provided on some certain
windows because this mouse behaviour may have a special handling
controlled by the application itself. You can use the forced mode
(see explanation in the introduction of 'features') to manipulate
these windows as well. Currently the following windows (and its
possible child windows) have been taken into account: ...
### Scenario when this would be used?
It is more comfortable to resize e.g. an editor window, because the mouse doesn't need to be placed exactly on the borders of the window. Instead the mouse just needs to be positioned e.g. in the first 1/3 area pane of the window, which is much larger than the small border of the window.
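The 9-cell hit-testing described above could be sketched like this (a hypothetical illustration of the behavior, not NiftyWindows source; the one-third split is an assumption):

```python
def grid_cell(x, y, width, height, edge=1 / 3):
    """Map a point inside a window to one of 9 virtual grid cells.

    Returns (col, row) with 0 = left/top, 1 = center, 2 = right/bottom.
    The center cell (1, 1) would trigger a move; any other cell would
    trigger a resize from that corner or edge.
    """
    col = 0 if x < width * edge else (2 if x >= width * (1 - edge) else 1)
    row = 0 if y < height * edge else (2 if y >= height * (1 - edge) else 1)
    return col, row

print(grid_cell(450, 300, 900, 600))  # center cell -> (1, 1): move
print(grid_cell(50, 50, 900, 600))    # top-left cell -> (0, 0): resize
```

The center cell would map to the drag/move action, while the eight outer cells would pick the resize edge or corner.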
### Supporting information
Because NiftyWindows was rather cool, e.g. autohotkey tries also to emulate the functionality of NiftyWindows:
https://www.autohotkey.com/board/topic/2460-niftywindows/ | Needs-Triage | low | Minor |
2,566,560,799 | transformers | Request for Iterative Generation in Pipeline (e.g., LLaMA model) | ### Feature request
I would like to ask if there is a way to perform iterative generation (n times) within the pipeline, specifically for models like LLMs. If this feature is not available, is there any plan to implement it in the future?
Example:
```python
pipeline = transformers.pipeline(
"text-generation",
model="meta-llama/Llama-3.1-8B-Instruct",
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
# Generate once
outputs = pipeline(
messages,
max_new_tokens=max_tokens
)
# Generate n times
outputs = pipeline(
messages,
max_new_tokens=max_tokens,
n = n
)
```
Similar to the GPT API:
```python
response = client.chat.completions.create(
model=model,
messages=messages,
max_tokens=max_tokens,
temperature=temperature,
n=n,
)
```
I am also aware that iterative generation can be done using a for loop, but I am wondering if there is a more efficient or optimized way to generate multiple iterations (n times) within the pipeline for models.
https://community.openai.com/t/how-does-n-parameter-work-in-chat-completions/288725
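For reference, the for-loop fallback could be wrapped in a small helper like the sketch below (hypothetical code, not part of transformers — `toy_generate` stands in for the real pipeline call, which in practice would typically use the existing `do_sample=True` / `num_return_sequences` generation parameters):

```python
import random

def generate_n(generate_fn, prompt, n, seed=None):
    """Emulate an OpenAI-style `n` argument by calling a single-output
    generator n times and collecting the candidates."""
    rng = random.Random(seed)
    return [generate_fn(prompt, rng) for _ in range(n)]

# Stub generator standing in for the pipeline (illustration only)
def toy_generate(prompt, rng):
    return prompt + " " + rng.choice(["alpha", "beta", "gamma"])

candidates = generate_n(toy_generate, "Hello", 3, seed=0)
print(len(candidates))  # 3
```

A native `n` argument in the pipeline could then batch such calls internally instead of looping.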
### Motivation
Build a connection between the LLM API and the transformers pipeline.
### Your contribution
Request | Feature request | low | Minor |
2,566,570,878 | ollama | VideoCore GPU support | Required to be able to run models on the Raspberry Pi's GPU. | feature request | low | Minor |
2,566,572,019 | rust | Tracking issue for release notes of #111645: Tracking Issue for `UnsafeCell::from_mut` |
This issue tracks the release notes text for #111645.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
```markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for `UnsafeCell::from_mut`](https://github.com/rust-lang/rust/issues/111645)
```
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
```markdown
```
cc @JoJoJet -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,566,572,076 | rust | Tracking issue for release notes of #111735: Tracking Issue for `BufRead::skip_until` |
This issue tracks the release notes text for #111735.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
```markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for `BufRead::skip_until`](https://github.com/rust-lang/rust/issues/111735)
```
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
```markdown
```
cc @WilliamVenner -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,566,616,739 | go | x/pkgsite: support markdown alerts in readme | ### What is the URL of the page with the issue?
https://pkg.go.dev/github.com/docker/buildx#section-readme
### What is your user agent?
All user agents
> [!NOTE]
> - This issue affects all web browsers.
> - GitHub introduced markdown alerts [in 2023](https://github.blog/changelog/2023-12-14-new-markdown-extension-alerts-provide-distinctive-styling-for-significant-content/) as an extension to [GfM](https://github.github.com/gfm/) (2019).
### Screenshot
Got unformatted alerts (e.g. `[!WARNING]`) at https://pkg.go.dev/github.com/docker/buildx#section-readme:

Got formatted alerts on GitHub at https://github.com/docker/buildx:

### What did you do?
View projects using markdown alerts like github.com/docker/buildx on pkg.go.dev.
Or use any of these GitHub markdown alerts in a README.md.
```
> [!NOTE]
> Useful information that users should know, even when skimming content.
> [!TIP]
> Helpful advice for doing things better or more easily.
> [!IMPORTANT]
> Key information users need to know to achieve their goal.
> [!WARNING]
> Urgent info that needs immediate user attention to avoid problems.
> [!CAUTION]
> Advises about risks or negative outcomes of certain actions.
```
### What did you see happen?
Markdown alerts are not formatted as alerts at pkg.go.dev.
For example, `[!WARNING]`, `[!IMPORTANT]`, and `[!NOTE]` alerts are not formatted as alerts.
### What did you expect to see?
Expected to see markdown alerts formatted as alerts. [GitHub documentation](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#alerts) lists these alerts:
> [!NOTE]
> Useful information that users should know, even when skimming content.
> [!TIP]
> Helpful advice for doing things better or more easily.
> [!IMPORTANT]
> Key information users need to know to achieve their goal.
> [!WARNING]
> Urgent info that needs immediate user attention to avoid problems.
> [!CAUTION]
> Advises about risks or negative outcomes of certain actions.
EDIT: Added note that GitHub introduced markdown alerts [in 2023](https://github.blog/changelog/2023-12-14-new-markdown-extension-alerts-provide-distinctive-styling-for-significant-content/) as an extension to [GfM](https://github.github.com/gfm/) (2019). Also clarified and reordered screenshots. | FeatureRequest,pkgsite | low | Minor |
2,566,641,556 | PowerToys | PowerToys.MonacoPreviewHandler crashes when rendering UTF-8 BOM | ### Microsoft PowerToys version
0.85.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
File Explorer: Preview Pane
### Steps to reproduce
The PowerToys.MonacoPreviewHandler.exe crashes when trying to render a UTF-8 BOM file

### ✔️ Expected Behavior
To render the file correctly
### ❌ Actual Behavior
PowerToys.MonacoPreviewHandler.exe crashes
### Other Software
_No response_ | Issue-Bug,Severity-High,Product-File Explorer | low | Critical |
2,566,653,800 | vscode | Inlay hints appearing when they should not | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: I don't think so, but it doesn't seem like an extension issue.
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.94.0
- OS Version: Windows 11 Pro
Steps to Reproduce:
1. Have a project with dart and `Dart-code/Dart-code` (https://marketplace.visualstudio.com/items?itemName=Dart-Code.dart-code)
2. Open the project:
https://github.com/user-attachments/assets/67c595d0-ecdd-45e6-ad32-e2bd6b9b8c56
On previous versions this would not appear unless holding down the keys.
---
@DanTup | info-needed,inlay-hints | medium | Critical |
2,566,703,682 | go | runtime: use .openbsd.randomdata for startupRand | OpenBSD has its version of AT_RANDOM, the .openbsd.randomdata ELF section documented at SPECS.randomdata. We should use that instead of reading from `/dev/urandom`. | help wanted,OS-OpenBSD,NeedsFix,compiler/runtime | low | Minor |
2,566,717,747 | flutter | Autofill email not working for proton pass | ### Steps to reproduce
Set up the autofill demo with email autofill hinting
### Expected results
Proton Pass can autofill email without having to copy paste from their extension
### Actual results
The email field is marked as `type="text"`, which breaks Proton Pass' input field detection. This exists in versions 3.19 through 3.24.
### Code sample
<details open><summary>Code sample</summary>
```dart
AutofillGroup(
child: Column(
children: [
TextFormField(
keyboardType: TextInputType.emailAddress,
autofillHints: [AutofillHints.email],
autocorrect: false,
),
TextFormField(
keyboardType: TextInputType.visiblePassword,
autofillHints: [AutofillHints.password],
autocorrect: false,
obscureText: true,
),
],
),
)
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="363" alt="image" src="https://github.com/user-attachments/assets/a7da73fb-2aa2-4967-89f2-69a5d9ff6e83">
<img width="342" alt="image" src="https://github.com/user-attachments/assets/64cca3ea-5dd8-4664-bbfc-cf0c20af3875">
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.22.2, on macOS 14.5 23F79 darwin-arm64, locale en-US)
• Flutter version 3.22.2 on channel stable at /Users/stoom/fvm/versions/3.22.2
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 761747bfc5 (4 months ago), 2024-06-05 22:15:13 +0200
• Engine revision edd8546116
• Dart version 3.4.3
• DevTools version 2.34.3
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
• Android SDK at /Users/stoom/Library/Android/Sdk
• Platform android-34, build-tools 33.0.2
• ANDROID_HOME = /Users/stoom/Library/Android/Sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.15.2
[✗] Chrome - develop for the web (Cannot find Chrome executable at /Application/Microsoft Edge.app)
! /Application/Microsoft Edge.app is not executable.
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,engine,platform-web,P2,team-web,triaged-web | low | Major |
2,566,724,650 | vscode | Git - VSCode Git extension doesn't ask for remote user password, when trying to clone via ssh | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.0
- OS Version: Operating System: Manjaro Linux
KDE Plasma Version: 6.1.5
KDE Frameworks Version: 6.5.0
Qt Version: 6.7.2
Kernel Version: 6.6.52-1-MANJARO (64-bit)
Graphics Platform: Wayland
I noticed that when you try to clone a repository with the Git extension via SSH and don't have public key authentication configured, VSCode doesn't ask for the remote user password.
Instead, a not very meaningful error is shown:

In the logs you can find more information:
```
> git clone ssh://zzz2324188@example.com:4711/srv/pk/git/pk2425/VOB02/zzz2324188.git /home/laurenz/checkouts_pp/test/zzz2324188 --progress
/usr/lib/code/extensions/git/dist/askpass-main.js:1
(()=>{"use strict";var e={7549:(e,s,r)=>{Object.defineProperty(s,"__esModule",{value:!0}),s.IPCClient=void 0;const t=r(8611);s.IPCClient=class{constructor(e){this.handlerName=e;const s=process.env.VSCODE_GIT_IPC_HANDLE;if(!s)throw new Error("Missing VSCODE_GIT_IPC_HANDLE");this.ipcHandlePath=s}call(e){const s={socketPath:this.ipcHandlePath,path:`/${this.handlerName}`,method:"POST"};return new Promise(((r,n)=>{const o=t.request(s,(e=>{if(200!==e.statusCode)return n(new Error(`Bad status code: ${e.statusCode}`));const s=[];e.on("data",(e=>s.push(e))),e.on("end",(()=>r(JSON.parse(Buffer.concat(s).toString("utf8")))))}));o.on("error",(e=>n(e))),o.write(JSON.stringify(e)),o.end()}))}}},9896:e=>{e.exports=require("fs")},8611:e=>{e.exports=require("http")}},s={};function r(t){var n=s[t];if(void 0!==n)return n.exports;var o=s[t]={exports:{}};return e[t](o,o.exports,r),o.exports}var t={};(()=>{var e=t;Object.defineProperty(e,"__esModule",{value:!0});const s=r(9896),n=r(7549);function o(e){console.error("Missing or invalid credentials."),console.error(e),process.exit(1)}!function(e){if(!process.env.VSCODE_GIT_ASKPASS_PIPE)return o("Missing pipe");if(!process.env.VSCODE_GIT_ASKPASS_TYPE)return o("Missing type");if("https"!==process.env.VSCODE_GIT_ASKPASS_TYPE&&"ssh"!==process.env.VSCODE_GIT_ASKPASS_TYPE)return o(`Invalid type: ${process.env.VSCODE_GIT_ASKPASS_TYPE}`);if("fetch"===process.env.VSCODE_GIT_COMMAND&&process.env.VSCODE_GIT_FETCH_SILENT)return o("Skip silent fetch commands");const r=process.env.VSCODE_GIT_ASKPASS_PIPE,t=process.env.VSCODE_GIT_ASKPASS_TYPE,i="https"===t?e[2]:e[3];let c,a,p;"https"===t&&(c=e[4].replace(/^["']+|["':]+$/g,"")),"ssh"===t&&(/passphrase/i.test(i)?a=e[6]?.replace(/^["']+|["':]+$/g,""):(c=e[6].replace(/^["']+|["':]+$/g,""),p=e[15])),new n.IPCClient("askpass").call({askpassType:t,request:i,host:c,file:a,fingerprint:p}).then((e=>{s.writeFileSync(r,e+"\n"),setTimeout((()=>process.exit(0)),0)})).catch((e=>o(e)))}(process.argv)})();var 
n=exports;for(var o in t)n[o]=t[o];t.__esModule&&Object.defineProperty(n,"__esModule",{value:!0})})();
TypeError: Cannot read properties of undefined (reading 'replace')
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1748
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1967
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1983
at Object.<anonymous> (/usr/lib/code/extensions/git/dist/askpass-main.js:1:2089)
at Module._compile (node:internal/modules/cjs/loader:1373:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1432:10)
at Module.load (node:internal/modules/cjs/loader:1215:32)
at Module._load (node:internal/modules/cjs/loader:1031:12)
at c._load (node:electron/js2c/node_init:2:13801)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:189:12)
Node.js v20.16.0
Permission denied, please try again.
/usr/lib/code/extensions/git/dist/askpass-main.js:1
[... same minified askpass-main.js source as above ...]
TypeError: Cannot read properties of undefined (reading 'replace')
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1748
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1967
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1983
at Object.<anonymous> (/usr/lib/code/extensions/git/dist/askpass-main.js:1:2089)
at Module._compile (node:internal/modules/cjs/loader:1373:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1432:10)
at Module.load (node:internal/modules/cjs/loader:1215:32)
at Module._load (node:internal/modules/cjs/loader:1031:12)
at c._load (node:electron/js2c/node_init:2:13801)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:189:12)
Node.js v20.16.0
Permission denied, please try again.
/usr/lib/code/extensions/git/dist/askpass-main.js:1
[... same minified askpass-main.js source as above ...]
TypeError: Cannot read properties of undefined (reading 'replace')
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1748
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1967
at /usr/lib/code/extensions/git/dist/askpass-main.js:1:1983
at Object.<anonymous> (/usr/lib/code/extensions/git/dist/askpass-main.js:1:2089)
at Module._compile (node:internal/modules/cjs/loader:1373:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1432:10)
at Module.load (node:internal/modules/cjs/loader:1215:32)
at Module._load (node:internal/modules/cjs/loader:1031:12)
at c._load (node:electron/js2c/node_init:2:13801)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:189:12)
Node.js v20.16.0
zzz2324188@example.com: Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
I am aware that the optimal solution would be to copy the public key to the remote and let SSH handle the authentication, or to use HTTPS and the Git credential manager (which I have installed, but apparently it doesn't open up for SSH connections).
But there can be situations where none of these workarounds is viable.
Steps to Reproduce:
1. F1
2. Git Clone
3. Enter something like this: `ssh://user@example.com:4711/path/to/your/repo.git`
Edit: The same error occurs for all other Git actions that involve the remote, like pushing, pulling, etc. | bug,git | low | Critical |
2,566,728,290 | Python | Consolidating LCS and LIS Algorithm Implementations into a Single File | ### Feature description
Hello
I would like to work on this issue and consolidate the algorithms for **Longest Common Subsequence (LCS)** and **Longest Increasing Subsequence (LIS)** into one file. Could you please assign this issue to me? I am excited to contribute and will ensure the file is well-organized with clear documentation for each method.
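For reference, a rough sketch of what the consolidated file could contain — one function per algorithm with a shared docstring style. The function names and the choice of DP-table LCS plus patience-sorting LIS are just illustrative, not a final design:

```python
from bisect import bisect_left


def longest_common_subsequence(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b, O(len(a) * len(b))."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            # Extend the diagonal on a match, otherwise carry the best neighbor.
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def longest_increasing_subsequence(nums: list) -> int:
    """Length of the longest strictly increasing subsequence, O(n log n)."""
    # tails[k] = smallest possible tail of an increasing subsequence of length k + 1
    tails: list = []
    for n in nums:
        i = bisect_left(tails, n)
        if i == len(tails):
            tails.append(n)
        else:
            tails[i] = n
    return len(tails)


if __name__ == "__main__":
    print(longest_common_subsequence("AGGTAB", "GXTXAYB"))  # 4 ("GTAB")
    print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```

I would of course align names, type hints, and doctests with the repository's existing conventions.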
Thank you! | enhancement | low | Minor |
2,566,735,021 | go | encoding/asn1: invalid DER encodings of `GeneralizedTime` when time is not UTC | ### Go version
go version go1.23.2 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/william/Library/Caches/go-build'
GOENV='/Users/william/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/william/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/william/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.2/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.2/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/william/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/k6/zk7vnkms1m73w0vrxjm7b2s40000gn/T/go-build3515925324=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
My colleague @darkamaul noticed an invalid `GeneralizedTime` encoding while tracking down some DER decoding failures in the responses produced by a Go implementation of an [RFC 3161 Time Stamp Authority].
[RFC 3161 Time Stamp Authority]: https://www.ietf.org/rfc/rfc3161.txt
### What did you see happen?
We observed DER encodings of `GeneralizedTime` objects with explicit timezone offsets, e.g.:
```
GeneralizedTime 2024-10-04 11:04:31 UTC+02:00
```
This is an invalid DER encoding of a `GeneralizedTime`, per the DER encoding rules defined in ITU-T X.690. In particular, DER requires that all `GeneralizedTime` encodings be UTC time with the `Z` designator per X.690 11.7.1:
> The encoding shall terminate with a "Z", as described in the ITU-T Rec. X.680 | ISO/IEC 8824-1 clause on GeneralizedTime.
(Ref: <https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf>, page 19)
After looking into it, we determined that the codebase was using `encoding/asn1`'s `Marshal` implementation, in particular for marshalling `time.Time` objects into `GeneralizedTime` encodings.
For example:
```go
// eContent within SignedData is TSTInfo
type tstInfo struct {
// .. snip
Time time.Time `asn1:"generalized"`
// .. snip
}
```
Permalink: <https://github.com/digitorus/timestamp/blob/220c5c2851b7435eea999de3daa773601a7ca126/rfc3161_struct.go#L57>
We then checked the underlying `Marshal` implementation and its `GeneralizedTime` helper (`appendGeneralizedTime`), and confirmed that it emits a relative offset instead of normalizing to UTC when the origin `time.Time` is not already UTC:
```go
func appendGeneralizedTime(dst []byte, t time.Time) (ret []byte, err error) {
year := t.Year()
if year < 0 || year > 9999 {
return nil, StructuralError{"cannot represent time as GeneralizedTime"}
}
dst = appendFourDigits(dst, year)
return appendTimeCommon(dst, t), nil
}
func appendTimeCommon(dst []byte, t time.Time) []byte {
_, month, day := t.Date()
dst = appendTwoDigits(dst, int(month))
dst = appendTwoDigits(dst, day)
hour, min, sec := t.Clock()
dst = appendTwoDigits(dst, hour)
dst = appendTwoDigits(dst, min)
dst = appendTwoDigits(dst, sec)
_, offset := t.Zone()
switch {
case offset/60 == 0:
return append(dst, 'Z')
case offset > 0:
dst = append(dst, '+')
case offset < 0:
dst = append(dst, '-')
}
offsetMinutes := offset / 60
if offsetMinutes < 0 {
offsetMinutes = -offsetMinutes
}
dst = appendTwoDigits(dst, offsetMinutes/60)
dst = appendTwoDigits(dst, offsetMinutes%60)
return dst
}
```
Ref: https://cs.opensource.google/go/go/+/refs/tags/go1.23.2:src/encoding/asn1/marshal.go;l=405-448
Based on the blame, this offset encoding has been present since at least 2011 and possibly earlier.
### What did you expect to see?
We expected `encoding/asn1` to produce only valid DER encodings, which in this case means producing a `GeneralizedTime` with only a `Z` timezone component, and no relative timezone offsets.
To achieve this, we _believe_ the `Marshal` implementation can be tweaked to call `UTC()` before performing encoding, which would normalize the `time.Time` into UTC form. The special-casing around relative offsets could then be removed entirely, as all encoded times would be UTC.
Similarly, we believe (but haven't concretely observed) that `encoding/asn1`'s `Unmarshal` accepts invalid DER encodings of `GeneralizedTime`s, per its format string:
```go
// parseGeneralizedTime parses the GeneralizedTime from the given byte slice
// and returns the resulting time.
func parseGeneralizedTime(bytes []byte) (ret time.Time, err error) {
const formatStr = "20060102150405.999999999Z0700"
s := string(bytes)
if ret, err = time.Parse(formatStr, s); err != nil {
return
}
if serialized := ret.Format(formatStr); serialized != s {
err = fmt.Errorf("asn1: time did not serialize back to the original value and may be invalid: given %q, but serialized as %q", s, serialized)
}
return
}
```
Ref: https://cs.opensource.google/go/go/+/refs/tags/go1.23.2:src/encoding/asn1/asn1.go;l=368-383
If our understanding of `time.Parse` is correct, this will admit multiple invalid DER encodings:
* The fractional component `.999999999` will allow trailing zeroes, which are allowed in BER but not in DER;
* The timezone component `Z0700` allows both `Z` and relative timezone offsets, when only `Z` is allowed in DER. | NeedsInvestigation | low | Critical |
2,566,779,976 | pytorch | cuDNN dBias error starting on 10/2 nightly | ### ๐ Describe the bug
We are seeing the following error during backward on torchtune's Llama 3.2 Vision models starting with the 10/2 nightly:
```
RuntimeError: cuDNN Frontend error: For cuDNN version below 9.5.0, dBias not support s_q/s_kv which aren't multiple of 64
```
A minimal repro is given by
```python
import torch
from torch import nn
class DummySDPA(nn.Module):
def __init__(self):
super().__init__()
self.q = nn.Parameter(torch.randn(2, 32, 157, 128).to(device="cuda", dtype=torch.bfloat16))
        self.k = nn.Parameter(torch.randn(2, 32, 6404, 128).to(device="cuda", dtype=torch.bfloat16))
self.v = nn.Parameter(torch.randn(2, 32, 6404, 128).to(device="cuda", dtype=torch.bfloat16))
def forward(self, mask):
return nn.functional.scaled_dot_product_attention(
self.q,
self.k,
self.v,
attn_mask=mask,
dropout_p=0.0,
is_causal=False,
)
def main():
mask = torch.randint(0, 2, (2, 1, 157, 6404)).to(device="cuda", dtype=torch.bool)
model = DummySDPA()
out = model(mask)
loss = out.sum()
loss.backward()
if __name__ == "__main__":
main()
```
This script passes on the 10/1 nightlies and fails on the 10/2 nightlies. It looks like #136920 updated the cuDNN frontend to a version that added this check ([ref](https://github.com/NVIDIA/cudnn-frontend/blame/de355c7094af70467f2b264f531ab5c5f4401c42/include/cudnn_frontend/node/scaled_dot_product_flash_attention.h#L873-L875)). It was pointed out by @pbontrager that `s_q` and `s_kv` refer to the sequence length dimensions of the query and key/value tensors. In general I'm not sure why those would need to be multiples of 64.
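As a possible user-side stopgap until the check is relaxed (or cuDNN >= 9.5.0 is used), one could pad the query and key/value sequence lengths up to the next multiple of 64 — untested on our side, and the padded positions would also need to be masked out. The rounding arithmetic itself is just (helper name is made up):

```python
def pad_to_multiple(n: int, multiple: int = 64) -> int:
    """Smallest value >= n that is a multiple of `multiple`."""
    return n + (-n) % multiple


# The sequence lengths from the repro above:
print(pad_to_multiple(157))   # -> 192
print(pad_to_multiple(6404))  # -> 6464
```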
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241002+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk7_zion_6511_gd766966f605a-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 99%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241002+cu124
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241002+cu124 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @csarofeen @ptrblck @xwang233 @eqy @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | high priority,module: cudnn,triaged,module: regression,module: sdpa | low | Critical |
2,566,824,787 | vscode | Allow grouping of `package.nls*.json` files in a custom folder | ### Description:
Currently, the `package.nls.json` and `package.nls.{locale}.json` files are expected to be located in the root of the project for VSCode to properly detect and apply translations. However, for projects with multiple locales, this can result in a large number of translation files cluttering the root directory.
### Feature Request:
I'd like to propose a feature that allows grouping all `package.nls*.json` files inside a custom folder (e.g., `packages-nls`). The `package.json` would still reference these files in the usual way, but with the option to define their location, similar to how the `l10n` folder is currently used for `bundle.l10n.json`.
### Example:
Instead of placing all translation files in the root, we could organize them as follows:
```bash
PROJECT_NAME
โโโ l10n
โ โโโ bundle.l10n.json
โโโ packages-nls
โ โโโ package.nls.json
โ โโโ package.nls.es.json
โ โโโ package.nls.de.json
โโโ src
โ โโโ extension.ts
โโโ package.json
```
And in the `package.json`, we could still use translation keys like this:
```json
{
"name": "my-extension",
"version": "0.0.1",
"main": "./out/extension.js",
"l10n": "./l10n",
"nls": "./packages-nls",
"contributes": {
"commands": [
{
"command": "my-extension.helloWorld",
"title": "%my-extension.helloWorld.title%"
}
]
}
}
```
This way, developers could keep their root directory cleaner while still providing translation files.
### Current Behavior:
If the `package.nls*.json` files are moved to a custom folder, VSCode does not recognize the keys and the extension shows `%my-extension.helloWorld.title%` | feature-request,l10n-platform | low | Minor |
2,566,842,573 | kubernetes | PVC with a non-empty selector can't have a PV dynamically provisioned | ### What happened?
This issue is spun off from #57878 - I can't specify a label selector on a PVC without first provisioning the PV with labels; I want to use dynamic provisioning and apply labels to the PV from the PVC. As I read in the above issue, there might be problems with doing this via `selector`, so I opened this issue to request a new field for setting PV metadata labels and annotations from the PVC.
### What did you expect to happen?
I expected that in the 6 years since the previous issue was opened, this would be improved.
### How can we reproduce it (as minimally and precisely as possible)?
Try to provision a volume with dynamic provisioning by applying a PVC with selector labels specified. It fails.
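For concreteness, a minimal claim that hits this looks roughly like the following (the StorageClass name and labels are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # placeholder name of a dynamic-provisioning StorageClass
  storageClassName: managed-csi
  # a non-empty selector is not supported by dynamic provisioning,
  # so no PV is provisioned and the claim stays Pending
  selector:
    matchLabels:
      environment: production
```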
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
Client Version: v1.30.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.4
```
</details>
### Cloud provider
<details>
Azure
</details>
### OS version
<details>
MacOS Sonoma 14.6.1
```console
Darwin WCG-MN64J410FL 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020 arm64
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Major |
2,566,862,020 | tauri | [bug] pkg-config can't find libsoup via nix | ### Describe the bug
Building fails:
```
pnpm tauri build
> terratestlogviewerv3@0.1.0 tauri /home/salmon/git/TerratestLogViewerV3
> tauri "build"
Running beforeBuildCommand `pnpm build`
> terratestlogviewerv3@0.1.0 build /home/salmon/git/TerratestLogViewerV3
> vite build
vite v5.4.8 building SSR bundle for production...
✓ 94 modules transformed.
vite v5.4.8 building for production...
✓ 67 modules transformed.
.svelte-kit/output/client/_app/version.json 0.03 kB │ gzip: 0.05 kB
.svelte-kit/output/client/.vite/manifest.json 2.57 kB │ gzip: 0.52 kB
.svelte-kit/output/client/_app/immutable/assets/2.Cl62EbOl.css 1.69 kB │ gzip: 0.68 kB
.svelte-kit/output/client/_app/immutable/entry/start.CBs8aynU.js 0.07 kB │ gzip: 0.08 kB
.svelte-kit/output/client/_app/immutable/nodes/0.tabcKxng.js 0.74 kB │ gzip: 0.48 kB
.svelte-kit/output/client/_app/immutable/nodes/1.0WP_iUaH.js 1.02 kB │ gzip: 0.59 kB
.svelte-kit/output/client/_app/immutable/chunks/scheduler.BvLojk_z.js 2.16 kB │ gzip: 1.02 kB
.svelte-kit/output/client/_app/immutable/nodes/2.BPurbqBj.js 2.47 kB │ gzip: 1.18 kB
.svelte-kit/output/client/_app/immutable/chunks/index.BKQmPpam.js 5.64 kB │ gzip: 2.39 kB
.svelte-kit/output/client/_app/immutable/entry/app.c75_ySdt.js 6.13 kB │ gzip: 2.52 kB
.svelte-kit/output/client/_app/immutable/chunks/entry.Bzp4lb2o.js 28.41 kB │ gzip: 11.21 kB
✓ built in 176ms
.svelte-kit/output/server/.vite/manifest.json 2.67 kB
.svelte-kit/output/server/_app/immutable/assets/_page.Cl62EbOl.css 1.69 kB
.svelte-kit/output/server/entries/pages/_layout.ts.js 0.07 kB
.svelte-kit/output/server/entries/fallbacks/layout.svelte.js 0.24 kB
.svelte-kit/output/server/internal.js 0.31 kB
.svelte-kit/output/server/entries/fallbacks/error.svelte.js 1.16 kB
.svelte-kit/output/server/chunks/ssr.js 3.49 kB
.svelte-kit/output/server/chunks/exports.js 5.94 kB
.svelte-kit/output/server/chunks/internal.js 6.07 kB
.svelte-kit/output/server/entries/pages/_page.svelte.js 8.06 kB
.svelte-kit/output/server/index.js 118.32 kB
✓ built in 874ms
Run npm run preview to preview your production build locally.
> Using @sveltejs/adapter-static
Wrote site to "build"
✔ done
Compiling openssl-sys v0.9.103
Compiling glib-sys v0.18.1
Compiling gobject-sys v0.18.0
Compiling gio-sys v0.18.1
Compiling gdk-sys v0.18.0
Compiling cairo-sys-rs v0.18.2
Compiling gdk-pixbuf-sys v0.18.0
Compiling pango-sys v0.18.0
Compiling atk-sys v0.18.0
Compiling javascriptcore-rs-sys v1.1.1
Compiling soup3-sys v0.5.0
Compiling x11-dl v2.21.0
Compiling tauri-plugin-shell v2.0.1
Compiling terratestlogviewerv3 v0.1.0 (/home/salmon/git/TerratestLogViewerV3/src-tauri)
Compiling gtk-sys v0.18.0
Compiling gdkx11-sys v0.18.0
The following warnings were emitted during compilation:
warning: soup3-sys@0.5.0:
error: failed to run custom build command for `soup3-sys v0.5.0`
Caused by:
process didn't exit successfully: `/home/salmon/git/TerratestLogViewerV3/src-tauri/target/release/build/soup3-sys-8b282c1a5ed45d66/build-script-build` (exit status: 1)
--- stdout
cargo:rerun-if-env-changed=LIBSOUP_3.0_NO_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
cargo:rerun-if-env-changed=PKG_CONFIG_PATH
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
cargo:warning=
pkg-config exited with status code 1
> PKG_CONFIG_PATH=/home/salmon/.gvm/pkgsets/go1.22.1/global/overlay/lib/pkgconfig:/nix/store/6r6rzv2v3x8mh2ki391gvc9y9954yzlg-glib-2.80.4-dev/lib/pkgconfig:/nix/store/4paq4ph6r3jaas7rqy2iprk6dgvjh9mr-libsoup-3.4.4-dev/lib/pkgconfig:/nix/store/kpz0hy3c1pcmk76rimfz04mywzrymc0v-webkitgtk-2.46.0+abi=4.1-dev/lib/pkgconfig:/nix/store/l7zwbxzhabcrszp4f9kjvax9fy18mnba-at-spi2-core-2.52.0-dev/lib/pkgconfig:/nix/store/a9af132rd3pz7nzvb7znybgcw1vjwm0r-gtk+3-3.24.43-dev/lib/pkgconfig:/nix/store/samvjksx5s1fpfjxpb9c3c19zs7nr8r3-gdk-pixbuf-2.42.12-dev/lib/pkgconfig:/nix/store/kar6nif5yf3hb0s8dlyhlgrcm2zqg5bs-cairo-1.18.0-dev/lib/pkgconfig:/nix/store/l6kp2q6011lshqrj1jrin5kvkbjsng5c-pango-1.52.2-dev/lib/pkgconfig:/nix/store/ww1bzzir84hjvjdzgv374qj1yalh90qn-harfbuzz-9.0.0-dev/lib/pkgconfig:/nix/store/yamw9igrv93n5dhmlfhpkh03v6y620y1-librsvg-2.58.2-dev/lib/pkgconfig PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 pkg-config --libs --cflags libsoup-3.0 libsoup-3.0 >= 3.0
The system library `libsoup-3.0` required by crate `soup3-sys` was not found.
The file `libsoup-3.0.pc` needs to be installed and the PKG_CONFIG_PATH environment variable must contain its parent directory.
PKG_CONFIG_PATH contains the following:
- /home/salmon/.gvm/pkgsets/go1.22.1/global/overlay/lib/pkgconfig
- /nix/store/6r6rzv2v3x8mh2ki391gvc9y9954yzlg-glib-2.80.4-dev/lib/pkgconfig
- /nix/store/4paq4ph6r3jaas7rqy2iprk6dgvjh9mr-libsoup-3.4.4-dev/lib/pkgconfig
- /nix/store/kpz0hy3c1pcmk76rimfz04mywzrymc0v-webkitgtk-2.46.0+abi=4.1-dev/lib/pkgconfig
- /nix/store/l7zwbxzhabcrszp4f9kjvax9fy18mnba-at-spi2-core-2.52.0-dev/lib/pkgconfig
- /nix/store/a9af132rd3pz7nzvb7znybgcw1vjwm0r-gtk+3-3.24.43-dev/lib/pkgconfig
- /nix/store/samvjksx5s1fpfjxpb9c3c19zs7nr8r3-gdk-pixbuf-2.42.12-dev/lib/pkgconfig
- /nix/store/kar6nif5yf3hb0s8dlyhlgrcm2zqg5bs-cairo-1.18.0-dev/lib/pkgconfig
- /nix/store/l6kp2q6011lshqrj1jrin5kvkbjsng5c-pango-1.52.2-dev/lib/pkgconfig
- /nix/store/ww1bzzir84hjvjdzgv374qj1yalh90qn-harfbuzz-9.0.0-dev/lib/pkgconfig
- /nix/store/yamw9igrv93n5dhmlfhpkh03v6y620y1-librsvg-2.58.2-dev/lib/pkgconfig
HINT: you may need to install a package such as libsoup-3.0, libsoup-3.0-dev or libsoup-3.0-devel.
warning: build failed, waiting for other jobs to finish...
failed to build app: failed to build app
Error failed to build app: failed to build app
โELIFECYCLEโ Command failed with exit code 1.
```
The build failed because `libsoup-3.0.pc` was not found. However, we can see that `/nix/store/4paq4ph6r3jaas7rqy2iprk6dgvjh9mr-libsoup-3.4.4-dev/lib/pkgconfig` is in `PKG_CONFIG_PATH`. Looking in that directory, we can see `libsoup-3.0.pc` is actually available:
```
ls -lsh /nix/store/4paq4ph6r3jaas7rqy2iprk6dgvjh9mr-libsoup-3.4.4-dev/lib/pkgconfig
total 4.0K
4.0K -r--r--r-- 2 root root 516 Dec 31 1969 libsoup-3.0.pc
```
It is also worth noting that `pnpm tauri dev` runs without errors.
This problem might be related to #11077, which seems like a similar issue but with a different dependency.
### Reproduction
Make a new project with `cargo create-tauri-app`.
Create `flake.nix`:
```nix
{
  description = "build and development environment";
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    flake-utils.url = "github:numtide/flake-utils";
    rust-overlay = {
      url = "github:oxalica/rust-overlay";
      inputs = {
        nixpkgs.follows = "nixpkgs";
        flake-utils.follows = "flake-utils";
      };
    };
  };
  outputs = {
    self,
    nixpkgs,
    flake-utils,
    rust-overlay,
    ...
  }:
    flake-utils.lib.eachDefaultSystem (system: let
      overlays = [
        (import rust-overlay)
      ];
      pkgs = import nixpkgs {inherit overlays system;};
      rustVersion = pkgs.rust-bin.fromRustupToolchainFile ./rust-toolchain;
      rust-toolchain = rustVersion.override {
        extensions = ["rust-analyzer" "rust-src"];
      };
    in
      with pkgs; rec {
        devShells.default = mkShell {
          packages = [
            alejandra
            nodejs_22
            pnpm
            rust-toolchain
            # tauri build deps
            openssl
            at-spi2-atk
            atkmm
            cairo
            gdk-pixbuf
            glib
            gobject-introspection
            gobject-introspection.dev
            gtk3
            harfbuzz
            librsvg
            libsoup_3
            pango
            webkitgtk_4_1
            webkitgtk_4_1.dev
          ];
          PKG_CONFIG_PATH = "${glib.dev}/lib/pkgconfig:${libsoup_3.dev}/lib/pkgconfig:${webkitgtk_4_1.dev}/lib/pkgconfig:${at-spi2-atk.dev}/lib/pkgconfig:${gtk3.dev}/lib/pkgconfig:${gdk-pixbuf.dev}/lib/pkgconfig:${cairo.dev}/lib/pkgconfig:${pango.dev}/lib/pkgconfig:${harfbuzz.dev}/lib/pkgconfig:${pkgs.librsvg.dev}/lib/pkgconfig";
        };
        formatter = alejandra;
      });
}
```
Run `nix develop` and `pnpm tauri build`.
### Expected behavior
The libsoup dependency is present, so I expect the build to complete without errors.
### Full `tauri info` output
```text
[โ] Environment
- OS: Ubuntu 24.4.0 x86_64 (X64)
โ webkit2gtk-4.1: 2.46.0
โ rsvg2: 2.58.2
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-unknown-linux-gnu (overridden by '/home/salmon/git/TerratestLogViewerV3/rust-toolchain')
- node: 22.8.0
- pnpm: 9.10.0
- npm: 10.8.2
[-] Packages
- tauri ๐ฆ: 2.0.1
- tauri-build ๐ฆ: 2.0.1
- wry ๐ฆ: 0.44.1
- tao ๐ฆ: 0.30.3
- @tauri-apps/api ๎: 2.0.1
- @tauri-apps/cli ๎: 2.0.1
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.0.1
- @tauri-apps/plugin-shell ๎: 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
This is built inside WSL2 running Ubuntu 24.04.1 LTS (5.15.153.1-microsoft-standard-WSL2). | type: bug,platform: Linux,status: needs triage,platform: Nix/NixOS | low | Critical |
2,566,928,677 | godot | Godot DAP "Completions Request" from the DAP spec is not implemented | ### Tested versions
Reproducible on master after https://github.com/godotengine/godot/pull/97585 was merged, because it introduced the ability to evaluate expressions in the debugger.
### System information
All OS
### Issue description
"Completions Request" from the DAP [spec](https://microsoft.github.io/debug-adapter-protocol//specification.html) is not implemented
### Steps to reproduce
Start debugging with DAP in VS Code.
Stop on a breakpoint.
In the Watch area, try to type `self.`; no completion is offered.
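For reference, the missing feature is the `completions` request from the spec: when the user types in the Watch area, the client sends a request shaped roughly like the following (values here are illustrative), and the adapter is expected to reply with a body containing a `targets` array of `CompletionItem`s.

```json
{
  "seq": 12,
  "type": "request",
  "command": "completions",
  "arguments": { "frameId": 1, "text": "self.", "column": 6 }
}
```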
### Minimal reproduction project (MRP)
Empty project with a scene and one gd script is enough. | enhancement,topic:gdscript,topic:editor | low | Critical |
2,566,947,176 | flutter | [google_maps_flutter][iOS]: Tile overlays not rendering correct color for partially transparent solid PNG tiles | ### Steps to reproduce
1. Create a `TileOverlay` in Flutter iOS
2. Supply single-colored PNG tile with a transparency as all tiles
For example, use this as the tile:

### Expected results
The tiles are rendered on the map using the correct color and transparency.
For example, I expect this to be the rendered map from the above tile:

### Actual results
The tiles are all the wrong color. They display as grey. This is the rendered map from the above tile:

### Code sample
See my [example project here](https://github.com/martyfuhry/google-maps-ios-tiles/blob/tile-ios-broken/packages/google_maps_flutter/google_maps_flutter/example/lib/tile_overlay.dart)
<details open><summary>Code sample</summary>
```dart
// Copyright 2013 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// ignore_for_file: public_member_api_docs

import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';

import 'page.dart';

class TileOverlayPage extends GoogleMapExampleAppPage {
  const TileOverlayPage({Key? key})
      : super(const Icon(Icons.map), 'Tile overlay', key: key);

  @override
  Widget build(BuildContext context) {
    return const TileOverlayBody();
  }
}

class TileOverlayBody extends StatefulWidget {
  const TileOverlayBody({super.key});

  @override
  State<TileOverlayBody> createState() => _TileOverlayBodyState();
}

class _TileOverlayBodyState extends State<TileOverlayBody> {
  final ValueNotifier<TileOverlay?> _tileOverlay =
      ValueNotifier<TileOverlay?>(null);

  @override
  Widget build(BuildContext context) {
    return ValueListenableBuilder<TileOverlay?>(
      valueListenable: _tileOverlay,
      builder: (BuildContext context, TileOverlay? overlay, Widget? child) {
        final Set<TileOverlay> overlays =
            overlay == null ? <TileOverlay>{} : <TileOverlay>{overlay};
        return Column(
          mainAxisSize: MainAxisSize.min,
          mainAxisAlignment: MainAxisAlignment.spaceEvenly,
          crossAxisAlignment: CrossAxisAlignment.stretch,
          children: <Widget>[
            Center(
              child: SizedBox(
                width: 350.0,
                height: 300.0,
                child: GoogleMap(
                  initialCameraPosition: const CameraPosition(
                    target: LatLng(59.935460, 30.325177),
                    zoom: 7.0,
                  ),
                  tileOverlays: overlays,
                ),
              ),
            ),
            TextButton.icon(
              icon: Image.asset('assets/working_transparent_tile.png', height: 50),
              onPressed: () {
                _tileOverlay.value =
                    _getOverlay('assets/working_transparent_tile.png');
              },
              label: const Text(
                'Use shaped transparent',
              ),
            ),
            TextButton.icon(
              icon: Image.asset('assets/not_working_transparent_tile.png', height: 50),
              onPressed: () {
                _tileOverlay.value =
                    _getOverlay('assets/not_working_transparent_tile.png');
              },
              label: const Text(
                'Use solid (BROKEN)',
              ),
            ),
            TextButton.icon(
              icon: Image.asset('assets/working_opaque_tile.png', height: 50),
              onPressed: () {
                _tileOverlay.value = _getOverlay(
                  'assets/working_opaque_tile.png',
                  transparency: 0.75,
                );
              },
              label: const Text(
                'Use shaped opaque with 75% transparency',
              ),
            ),
            TextButton.icon(
              icon: Image.asset('assets/not_working_opaque_tile.png', height: 50),
              onPressed: () {
                _tileOverlay.value = _getOverlay(
                  'assets/not_working_opaque_tile.png',
                  transparency: 0.75,
                );
              },
              label: const Text(
                'Use solid opaque with 75% transparency',
              ),
            ),
          ],
        );
      },
    );
  }

  TileOverlay _getOverlay(String filePath, {double transparency = 0}) {
    return TileOverlay(
      tileOverlayId: TileOverlayId('$transparency$filePath'),
      tileProvider: _ImageTileProvider(filePath),
      transparency: transparency,
    );
  }
}

class _ImageTileProvider implements TileProvider {
  const _ImageTileProvider(this.imagePath);

  final String imagePath;

  @override
  Future<Tile> getTile(int x, int y, int? zoom) async {
    final ByteData tileImage = await rootBundle.load(imagePath);
    return Tile(512, 512, tileImage.buffer.asUint8List());
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
**Shaped transparent**

**Solid (BROKEN)**

**Shaped opaque with 75% transparency**

**Solid opaque with 75% transparency**

You can see above that supplying a non-transparent image and then using `transparency: 0.75` in the `TileOverlay` gives me the correct behavior. Simply using a transparent image gives me the broken behavior above.
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.0, on macOS 14.5 23F79 darwin-x64, locale en-US)
โข Flutter version 3.24.0 on channel stable at /Users/martyfuhry/fvm/versions/3.24.0
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 80c2e84975 (9 weeks ago), 2024-07-30 23:06:49 +0700
โข Engine revision b8800d88be
โข Dart version 3.5.0
โข DevTools version 2.37.2
[!] Android toolchain - develop for Android devices (Android SDK version 30.0.3)
โข Android SDK at /Users/martyfuhry/Library/Android/sdk
โข Platform android-33, build-tools 30.0.3
โข Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
โ Could not determine java version
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.0.1)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15A507
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 3.6)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin version 49.0.1
โข Dart plugin version 192.8052
โข Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
[โ] Connected device (4 available)
โข Solanus the iPhone (2) (mobile) โข 00008030-00044D462189802E โข ios โข iOS 17.6.1 21G93
โข iPhone 15 (mobile) โข 5599814B-93C5-42B9-B2D4-BD26E74E856A โข ios โข com.apple.CoreSimulator.SimRuntime.iOS-17-0 (simulator)
โข macOS (desktop) โข macos โข darwin-x64 โข macOS 14.5 23F79 darwin-x64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 129.0.6668.90
! Error: Browsing on the local area network for Solanus the iPhone. Ensure the device is unlocked and attached with a cable or associated with the same
local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
โข All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-ios,p: maps,package,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.26 | low | Critical |
2,566,959,752 | PowerToys | Complete setting backup for restore to new devices | ### Description of the new feature / enhancement
To be able to backup settings, workspaces, and tool options for restore on a new computer.
### Scenario when this would be used?
Computer1 is configured with Workspaces and FancyZones locations. A backup is taken and restored on Computer2.
On Computer2, launching the Workspace or snapping to FancyZones should act the same as on Computer1, with no further interaction with PowerToys.
### Supporting information
Currently, backups do not bring over workspaces, which seem to be stored here by default:
c:\Users\USER\AppData\Local\Microsoft\PowerToys\Workspaces\workspaces.json
Backups do bring over FancyZones but do not select the same one that was selected when the backup was taken. | Issue-Bug,Product-Settings,Resolution-Fix Committed,Product-Workspaces | low | Minor |
2,566,973,764 | material-ui | [mui-6] CssVarsProvider createTheme palette.mode ignores existing colorSchemes | ### Summary
I would like createTheme to use existing colorScheme definitions when initialized with a mode and simply apply them to the theme, as opposed to generating a default and applying it to the theme. I'd like it to generate a default only if a suitable colorScheme has not already been provided.
### Examples
My Theme looks something like this:
```json
{
  "name": "Theme Example",
  "colorSchemes": {
    "light": {
      "palette": {
        "primary": {
          "light": "var(--ns-palette-light-primary-light)",
          "main": "var(--ns-palette-light-primary-main)",
          "dark": "var(--ns-palette-light-primary-dark)",
          "lightChannel": "var(--ns-palette-light-primary-light-channel)",
          "mainChannel": "var(--ns-palette-light-primary-main-channel)",
          "darkChannel": "var(--ns-palette-light-primary-dark-channel)",
          "contrastText": "var(--ns-palette-light-primary-contrast-text)"
        },
        ...
      }
    },
    "dark": {
      "palette": {
        "primary": {
          "light": "var(--ns-palette-dark-primary-light)",
          "main": "var(--ns-palette-dark-primary-main)",
          "dark": "var(--ns-palette-dark-primary-dark)",
          "lightChannel": "var(--ns-palette-dark-primary-light-channel)",
          "mainChannel": "var(--ns-palette-dark-primary-main-channel)",
          "darkChannel": "var(--ns-palette-dark-primary-dark-channel)",
          "contrastText": "var(--ns-palette-dark-primary-contrast-text)"
        },
        ...
      }
    },
    ...
  "cssVariables": {
    "colorSchemeSelector": "class"
  },
  "palette": {}
}
```
This is heavily trimmed, but it should give the context. And yes, the theme itself pulls in css variables for the underlying primitives from our design system, and whilst it gets somewhat indirect, it actually works very nicely.
When I change the color scheme to dark, the main theme (the one attached to the ThemeProvider) becomes:
```json
{
  "defaultColorScheme": "light",
  "name": "Dynamic Theme",
  "colorSchemeSelector": "class",
  "rootSelector": ":root",
  "vars": {
    "palette": {
      "primary": {
        "light": "var(--mui-palette-primary-light)",
        "main": "var(--mui-palette-primary-main)",
        "dark": "var(--mui-palette-primary-dark)",
        "lightChannel": "var(--mui-palette-primary-lightChannel)",
        "mainChannel": "var(--mui-palette-primary-mainChannel)",
        "darkChannel": "var(--mui-palette-primary-darkChannel)",
        "contrastText": "var(--mui-palette-primary-contrastText)",
        "contrastTextChannel": "var(--mui-palette-primary-contrastTextChannel)"
      },
      ...
    },
    ...
  },
  "palette": {
    "mode": "dark",
    "primary": {
      "light": "var(--ns-palette-dark-primary-light)",
      "main": "var(--ns-palette-dark-primary-main)",
      "dark": "var(--ns-palette-dark-primary-dark)",
      "lightChannel": "var(--ns-palette-dark-primary-light-channel)",
      "mainChannel": "var(--ns-palette-dark-primary-main-channel)",
      "darkChannel": "var(--ns-palette-dark-primary-dark-channel)",
      "contrastText": "var(--ns-palette-dark-primary-contrast-text)",
      "contrastTextChannel": "var(--ns-palette-dark-primary-contrast-text)"
    },
    ...
  "colorSchemes": {
    "light": {
      "palette": {
        "mode": "light",
        "primary": {
          "light": "var(--ns-palette-light-primary-light)",
          "main": "var(--ns-palette-light-primary-main)",
          "dark": "var(--ns-palette-light-primary-dark)",
          "lightChannel": "var(--ns-palette-light-primary-light-channel)",
          "mainChannel": "var(--ns-palette-light-primary-main-channel)",
          "darkChannel": "var(--ns-palette-light-primary-dark-channel)",
          "contrastText": "var(--ns-palette-light-primary-contrast-text)",
          "contrastTextChannel": "var(--ns-palette-light-primary-contrast-text)"
        },
        ...
      },
    "dark": {
      "palette": {
        "mode": "dark",
        "primary": {
          "light": "var(--ns-palette-dark-primary-light)",
          "main": "var(--ns-palette-dark-primary-main)",
          "dark": "var(--ns-palette-dark-primary-dark)",
          "lightChannel": "var(--ns-palette-dark-primary-light-channel)",
          "mainChannel": "var(--ns-palette-dark-primary-main-channel)",
          "darkChannel": "var(--ns-palette-dark-primary-dark-channel)",
          "contrastText": "var(--ns-palette-dark-primary-contrast-text)",
          "contrastTextChannel": "var(--ns-palette-dark-primary-contrast-text)"
        },
        ...
      },
  "cssVarPrefix": "mui"
}
```
With palette switching to dark as desired and referencing the underlying colorScheme.
If I just pass in mode = 'dark' to createTheme, I get this instead:
```json
{
  "defaultColorScheme": "dark",
  "name": "Theme Example",
  "colorSchemeSelector": "class",
  "rootSelector": ":root",
  "vars": {
    "palette": {
      "primary": {
        "light": "var(--mui-palette-primary-light, #e3f2fd)",
        "main": "var(--mui-palette-primary-main, #90caf9)",
        "dark": "var(--mui-palette-primary-dark, #42a5f5)",
        "lightChannel": "var(--mui-palette-primary-lightChannel, 227 242 253)",
        "mainChannel": "var(--mui-palette-primary-mainChannel, 144 202 249)",
        "darkChannel": "var(--mui-palette-primary-darkChannel, 66 165 245)",
        "contrastText": "var(--mui-palette-primary-contrastText, rgba(0, 0, 0, 0.87))",
        "contrastTextChannel": "var(--mui-palette-primary-contrastTextChannel, 0 0 0)"
      },
      ...
    },
  },
  "palette": {
    "mode": "dark",
    "primary": {
      "main": "#90caf9",
      "light": "#e3f2fd",
      "dark": "#42a5f5",
      "contrastText": "rgba(0, 0, 0, 0.87)",
      "mainChannel": "144 202 249",
      "lightChannel": "227 242 253",
      "darkChannel": "66 165 245",
      "contrastTextChannel": "0 0 0"
    },
    ...
  "colorSchemes": {
    "dark": {
      "palette": {
        "mode": "dark",
        "primary": {
          "main": "#90caf9",
          "light": "#e3f2fd",
          "dark": "#42a5f5",
          "contrastText": "rgba(0, 0, 0, 0.87)",
          "mainChannel": "144 202 249",
          "lightChannel": "227 242 253",
          "darkChannel": "66 165 245",
          "contrastTextChannel": "0 0 0"
        },
        ...
      },
    "light": {
      "palette": {
        "mode": "light",
        "primary": {
          "light": "var(--ns-palette-light-primary-light)",
          "main": "var(--ns-palette-light-primary-main)",
          "dark": "var(--ns-palette-light-primary-dark)",
          "lightChannel": "var(--ns-palette-light-primary-light-channel)",
          "mainChannel": "var(--ns-palette-light-primary-main-channel)",
          "darkChannel": "var(--ns-palette-light-primary-dark-channel)",
          "contrastText": "var(--ns-palette-light-primary-contrast-text)",
          "contrastTextChannel": "var(--ns-palette-light-primary-contrast-text)"
        },
        ...
      },
  "cssVarPrefix": "mui",
}
```
Whatever theme I pass in as the mode gets overwritten.
Now, I do understand that this is probably intentional, but it would be awfully useful if this overwriting of the colorScheme only happened when there wasn't already a colorScheme defined. That would make this much more flexible and, to my mind at least, more consistent.
### Motivation
I'm trying to display a theme inspector - similar to the mui 6 one, where I create a theme using cssVariables, and I can toggle between light and dark modes of the theme without changing the theme that's applied to the application.
I want some way to create a theme object that can be inspected in both light and dark modes that isn't tied to the DOM, as far as I can tell from digging into useCurrentColorScheme.js, there's no way to get a theme in a specific mode without it trying to edit the DOM. I want to be able to inspect the css variables, not apply them.
The closest is to use `createTheme` with palette.mode set to either light or dark, however that also initializes a default color scheme for that mode ignoring anything that you might already have set and then applying that scheme to the palette. This works as desired if you are using an autogenerated theme, but if you are trying to migrate from an existing theme and have a rather large set of values that you want to preserve, it ignores and overwrites them.
If I just display the main application theme, I can see it switch between light and dark mode and update the palette and it's working beautifully, I can inspect that value and get exactly the behavior that I'm after, but it's changing the application theme.
I want the same change to a theme but without tying it to a ThemeProvider or having it try and update the DOM.
**Search keywords**: CssVarProvider | status: expected behavior,customization: theme | low | Major |
2,567,076,857 | flutter | Refactor and document how to add support for more test actions to `et` | @reidbaker rightly points out in https://github.com/flutter/engine/pull/55638#pullrequestreview-2348773329 that how to encode actions so that `et` interprets them as testable executables is not documented (true, because the feature didn't exist until https://github.com/flutter/engine/pull/55638).
After https://github.com/flutter/engine/pull/55638 is merged, let's refactor a tad and document:
- Let's `s/actions = ["dart test"]/actions = ["et_test_executable"]/` or something instead
- How to add support for new actions that are interpreted as tests (i.e. for iOS, Java, what have you)
- What type of API is expected by the wrapped action (I believe today it's just exit code 0/1) | P2,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Minor |
2,567,086,772 | next.js | React Spring on development server not working on initial load | ### Link to the code that reproduces this issue
https://github.com/gdapps-studio/nextjs-minimal-reproduction-react-spring
### To Reproduce
1. `pnpm install`
2. `pnpm dev`
3. no content on initial load
### Current vs. Expected behavior
Following the reproduction steps, you would expect to see the content, but nothing is rendered on the initial load.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: 9.4.0
Relevant Packages:
next: 14.2.14 // Latest available version is detected (14.2.14).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug | low | Minor |
2,567,102,320 | deno | Panic on inconsistent exports in package.json | Version: Deno 2.0.0-rc.10
Steps to reproduce:
1. Create a dir with a `package.json` containing `{ "exports": { ".": "./a", "a": "./a" } }`
2. Run `RUST_BACKTRACE=1 deno task`
Expected result: Some reasonable error message.
Result:
```
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux x86_64
Version: 2.0.0-rc.10
Args: ["deno", "task"]
thread 'main' panicked at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/deno_package_json-0.1.2/src/lib.rs:401:7:
"exports" cannot contains some keys starting with '.' and some not.
The exports object must either be an object of package subpath keys
or an object of main entry condition name keys only.
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: deno_package_json::PackageJson::load_from_value
3: deno_package_json::PackageJson::load_from_path
4: deno_config::workspace::discovery::discover_workspace_config_files_for_single_dir::{{closure}}
5: deno_config::workspace::discovery::discover_workspace_config_files_for_single_dir
6: deno_config::workspace::WorkspaceDirectory::discover
7: deno::args::CliOptions::from_flags
8: deno::factory::CliFactory::cli_options
9: deno::tools::task::execute_script::{{closure}}
10: deno::spawn_subcommand::{{closure}}
11: <deno_unsync::tokio::task::MaskFutureAsSend<F> as core::future::future::Future>::poll
12: tokio::runtime::task::raw::poll
13: deno::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
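The invariant the panic enforces is Node's rule for `package.json` `"exports"`: keys must either all be subpaths (starting with `.`) or all be condition names. A sketch of that check in Python (illustrative only, not Deno's actual code) shows why the `package.json` above trips it; the fix is presumably for the load path to surface this as a recoverable error instead of panicking:

```python
def check_exports(exports: dict) -> None:
    """Reject "exports" maps that mix subpath keys ('.'-prefixed) with
    condition-name keys, mirroring the rule the panic message describes."""
    dotted = sum(1 for key in exports if key.startswith("."))
    if 0 < dotted < len(exports):
        raise ValueError(
            '"exports" cannot contain some keys starting with \'.\' and some not.'
        )

check_exports({".": "./a", "./b": "./b"})           # ok: all subpath keys
check_exports({"import": "./a", "require": "./a"})  # ok: all condition keys
# check_exports({".": "./a", "a": "./a"}) raises ValueError
```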
``` | bug,node compat | low | Critical |
2,567,113,614 | flutter | Monorepo needs something like "visibility check" | One "benefit" of the current `flutter/flutter` and `flutter/engine` structure is that `flutter/flutter` can't depend on artifacts from `flutter/engine` that are not explicitly exported as part of the build, and `flutter/engine` can't depend on anything from `flutter/flutter` (at least not in a normal way). Of course, that is also a _downside_, for example `flutter/engine` regularly needs to reproduce working tools and infrastructure (Skia Gold, Scenario App, etc) that have working solutions in `flutter/flutter`.
_However_, we won't unconditionally want to share code and create a giant spaghetti ball of dependencies (imagine [`flutter/engine/tools`](https://github.com/flutter/engine/tree/main/tools) and [`flutter/flutter/dev`](https://github.com/flutter/flutter/tree/master/dev) getting mixed-up together), and wouldn't want the framework to start introspecting on and getting handles of non-artifacts from the engine that aren't explicitly meant to be public.
---
We will need _something_ like a visibility or access system. Some (not conclusive) options:
## Informal
At the very least, we should document what directories are expected to import what other directories.
That way:
- We have a source of truth on the intent, and can use it in code review or escalations
- There is a reference point if we want to improve tooling
- There is a place for folks to propose changes to the rules in an orderly manner
I imagine the rules being very broad at first, for example something like:
```md
<!-- v0 -->
- `engine/` cannot use or depend on anything outside of `engine/` other than PATH binaries (e.g. `flutter`)
- code outside of the `engine/` cannot use or depend on anything inside of `engine/`
```
In the future we might expand the rules post refactoring. For example:
```md
<!-- v1 -->
- `engine/` cannot use or depend on anything outside of `engine/` other than PATH binaries (e.g. `flutter`) but _can_ depend on Dart packages listed in `internal/common`
```
## Best Effort w/ Bot tooling
The next step would be to invest in some tooling that, as a best effort, enforce the rules above.
We would go after obvious cases, that is, likely just enforce `pubspec.yaml` dependencies (assuming that lints are in place that require all dependencies to be listed in pubspec, I believe they are). In other words, the following could still happen:
```dart
import 'dart:io' as io;

void main() {
  final naughtyFile = io.File('../engine/tools/file_i_want_to_read.txt');
}
```
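A best-effort checker for the v0 rules could start as a small script that walks `pubspec.yaml` files and flags `path:` dependencies crossing the `engine/` boundary in either direction. This is a sketch, not an existing tool, and the regex-based pubspec parsing is deliberately naive:

```python
import re
from pathlib import Path

def engine_boundary_violations(repo_root: str) -> list[str]:
    """Flag pubspec.yaml path dependencies that cross the engine/ boundary."""
    root = Path(repo_root).resolve()
    engine = root / "engine"
    violations = []
    for pubspec in root.rglob("pubspec.yaml"):
        inside_engine = engine in pubspec.resolve().parents
        for match in re.finditer(r"path:\s*(\S+)", pubspec.read_text()):
            dep = (pubspec.parent / match.group(1)).resolve()
            dep_inside_engine = dep == engine or engine in dep.parents
            if inside_engine != dep_inside_engine:
                violations.append(f"{pubspec.relative_to(root)}: {match.group(1)}")
    return violations
```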
## Best Effort w/ Bot tooling and CI prohibitions
We might decide it is impossibly difficult to encode all of the patterns and paths we want, or that we find that scripts keep making it into CI that reference files or directories they shouldn't, slowing down our progress. This might be technically difficult, but we could ensure that CI processes that run engine builds _literally_ do not have filesystem access to the framework folder, (or it's put in a random directory with only the `flutter` CLI working), and vica-versa for flutter/flutter and the engine.
I haven't thought too much about this; it's definitely possible, but we'd want to exhaust other options first.
## Sandboxing
The most complete way to do this is to enforce build and test sandboxing, i.e. with _something like_:
<https://bazel.build/docs/sandboxing>
However, I'll leave that as out of scope at the moment. | team-infra,P2,triaged-infra,monorepo | low | Major |
2,567,126,807 | go | x/website: publish sitemaps | go.dev has many pages. Publish sitemaps to help search engines and crawlers discover contents.
Related: https://github.com/golang/go/issues/69600
Sitemaps: https://developers.google.com/search/docs/crawling-indexing/sitemaps/overview
go.dev endpoints: https://github.com/golang/website/blob/5f6954e6fcc9468bce96e01d8ebf374e37043a8c/cmd/golangorg/server.go#L153
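A sitemap file itself is simple: an XML `urlset` of `<url>` entries, split into multiple files behind a sitemap index once one exceeds 50,000 URLs. A minimal generator (illustrative, not the actual x/website implementation) looks like:

```python
from xml.sax.saxutils import escape

def sitemap(urls):
    """Render an XML sitemap for the given absolute URLs."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )
```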
| NeedsInvestigation,website | low | Minor |
2,567,139,665 | flutter | Proposal: `run_tests.py` becomes "TAP", delegates to `et test` to find/run tests | Today, the `flutter/engine` repository (soon to become the `engine` _folder_ in a merged repo) uses `./testing/run_tests.py` to (a) collect, (b) configure, and (c) run (and report on the results of) test executables across the repository. It will be non-trivial to replace this with another tool (i.e. `et test`).
My proposal is to move _elements_ from `run_tests.py` to `et`, but keeping `run_tests.py` as the entrypoint (i.e. for CI) until all behavior is either emulated or sufficiently completed in `et` respectively. Even once we do that, we will need a way to create lists of tests that run (i.e. TAP in google3), and this could continue to be `run_tests.py`'s area of ownership until we decide on another format.
In other words, doing something like:
```sh
./testing/run_tests.py --variant host_debug_unopt_arm64 --type engine
```
Is still _useful_, even with a fully functional `et test`, because the collection of tests that make up `--type engine` (and potential configuration they need to run, i.e. conditionally wiring up SwiftShader and such) are not tasks that are going to be (directly) supported by `et test`.
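Concretely, the delegation could look something like this inside `run_tests.py` (a hypothetical sketch: the `--type`-to-target mapping and the exact `et test` flags shown here are illustrative, not final):

```python
import subprocess

# Hypothetical mapping from legacy --type groups to `et test` target queries.
TYPE_TO_TARGETS = {
    "engine": ["//flutter/fml/...", "//flutter/display_list/..."],
}

def run_type(variant, test_type, runner=subprocess.call):
    """Delegate one legacy --type group to `et test`; returns its exit code."""
    targets = TYPE_TO_TARGETS[test_type]
    return runner(["et", "test", "--config", variant, *targets])
```

Under this split, `run_tests.py` keeps owning the `--type` grouping (the TAP-like part) while `et` owns finding, configuring, and running the executables.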
We might end up _tweaking_ `run_tests.py` to have arguments similar to `et`, however:
```sh
./testing/run_tests.py --config host_debug_unopt_arm64 --type engine
``` | engine,c: proposal,P2,team-engine,triaged-engine,e: engine-tool | low | Critical |
2,567,150,387 | flutter | `et test`: Add a flag (enum?) for variants of logging output | @johnmccutchan in https://github.com/flutter/flutter/issues/156240#issuecomment-2394401776:
> et command line flag to enable test logging to be enabled (e.g. roboelectric tests throw away all log output by default).
This also reminds me of [`--test-summary`](https://bazel.build/docs/user-manual#test-summary) and [`--test-output`](https://bazel.build/docs/user-manual#test-output). | P2,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Minor |
2,567,154,073 | flutter | `et test`: Standard place to store test artifacts (i.e. stdout/stderr logs) | @johnmccutchan in https://github.com/flutter/flutter/issues/156240#issuecomment-2394401776:
> standard place for tests to store arbitrary artifacts as part of the run (stdout and stderr logs would be written here) | P2,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Minor |
2,567,156,661 | flutter | `et test`: Run a test N number of times | @johnmccutchan in https://github.com/flutter/flutter/issues/156240#issuecomment-2394401776:
> et command to run a test in a loop | P2,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Minor |
2,567,157,333 | PowerToys | Workspaces removes Discord from saved Workspace after each Discord update | ### Microsoft PowerToys version
0.85.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
After each Discord update, Workspaces no longer finds Discord and says "application cannot be found", thus effectively removing Discord from the saved Workspace.
### โ๏ธ Expected Behavior
Discord should remain in the saved Workspace after an update.
### โ Actual Behavior
Workspaces cannot find the Discord app and does not launch it.
### Other Software
Discord

| Issue-Bug,Needs-Triage,Product-Workspaces | low | Major |
2,567,180,514 | flutter | Default Animation Style Design Document | ### Document Link
[flutter.dev/go/default-animation-style](https://flutter.dev/go/default-animation-style)
### What problem are you solving?
I'm putting forward a proposal to add a default `AnimationStyle` inherited theme that will be used by all implicitly animated widgets to reduce boilerplate and make it easier to apply consistent animation configuration across an app. | framework,a: animation,P2,design doc,team-framework,triaged-framework,:scroll: | low | Minor |
2,567,205,500 | deno | createImageBitmap from blob [v.2.0.0-rc10] | **Version**: Deno 2.0.0-rc10
The Web Canvas API in Deno for [createImageBitmap](https://docs.deno.com/api/web/~/createImageBitmap) supports blobs. Added earlier this year in https://github.com/denoland/deno/pull/21898
### Observations
Currently seeing failures against standard jpg/pngs. The same code in a non-Deno environment, e.g. a browser, works. Omission of jpg [seems intentional](https://github.com/denoland/deno/blob/2de4faa483982478e9a36ad4ab891a887b4779f1/ext/canvas/01_image.js#L235) in the Deno library, but it is worth bringing up as it's widely supported in [web APIs](https://developer.mozilla.org/en-US/docs/Web/API/Window/createImageBitmap).
```js
async function loadImage(url) {
  try {
    const response = await fetch(url);
    const blob = await response.blob();
    const imageBitmap = await createImageBitmap(blob);
    return imageBitmap;
  } catch (error) {
    console.log(error);
  }
}
```
```
const jpg = await loadImage("https://deno.com/images/artwork/deno_city.jpeg")
InvalidStateError: Unsupported type 'image/jpeg'
at ext:deno_canvas/01_image.js:236:15
at eventLoopTick (ext:core/01_core.js:175:7)
```
```
const png = await loadImage("https://deno.com/images/artwork/deno_news.png")
TypeError: Color type 'Rgb8' not supported
at ext:deno_canvas/01_image.js:241:50
at eventLoopTick (ext:core/01_core.js:175:7)
```
### Environment

| feat,web | low | Critical |
2,567,279,531 | pytorch | [CD] Docker images for Nightly. Allow rebuilding and retagging for Release | ### ๐ Describe the bug
Currently, for OSS releases, we tag images using this script when we do the branch cut:
https://github.com/pytorch/pytorch/blob/main/scripts/release/tag-docker-images.sh
Instead, we should allow the Docker conda, libtorch and manywheel builds to run on the release branch on push to these files:
https://github.com/pytorch/pytorch/blob/main/.github/workflows/build-conda-images.yml
https://github.com/pytorch/pytorch/blob/main/.github/workflows/build-libtorch-images.yml
https://github.com/pytorch/pytorch/blob/main/.github/workflows/build-manywheel-images.yml
This will allow cherry-picking changes to the release branch and rebuilding Docker images for the release.
In Release 2.5 we had to implement a workaround for this issue, but we need to avoid this in the future:
https://github.com/pytorch/pytorch/pull/137148
https://github.com/pytorch/pytorch/pull/137177
### Versions
2.6.0 | oncall: releng,triaged | low | Critical |
2,567,282,569 | pytorch | [export] Dynamic shape torch-trt models fail on torch.export.load | ### ๐ Describe the bug
A Torch-TensorRT model successfully compiles and can be saved as an ExportedProgram, but it fails to load.
Here is the full error log
```py
WARNING:py.warnings:/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch_tensorrt/dynamo/_exporter.py:370: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
engine_node = gm.graph.get_attr(engine_name)
WARNING:py.warnings:/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/graph.py:1586: UserWarning: Node _run_on_acc_0_engine target _run_on_acc_0_engine _run_on_acc_0_engine of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
W1007 15:15:17.998000 397153 site-packages/torch/fx/experimental/symbolic_shapes.py:5124] failed during evaluate_expr(s0 >= 0, hint=True, size_oblivious=False, forcing_spec=False
E1007 15:15:17.999000 397153 site-packages/torch/fx/experimental/recording.py:298] failed while running evaluate_expr(*(s0 >= 0, True), **{'fx_node': False})
Traceback (most recent call last):
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch_tensorrt/_compile.py", line 434, in load
exp_program = torch.export.load(file_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/export/__init__.py", line 473, in load
ep = deserialize(artifact, expected_opset_version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 2437, in deserialize
.deserialize(
^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 2316, in deserialize
.deserialize(
^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1906, in deserialize
self.deserialize_graph(serialized_graph_module.graph)
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1612, in deserialize_graph
meta_val = self.deserialize_tensor_meta(tensor_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1579, in deserialize_tensor_meta
torch.empty_strided(
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1339, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1983, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 176, in constructors
r = func(*args, **new_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 479, in expect_size
r = b.expect_true(file, line)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 465, in expect_true
return self.guard_bool(file, line)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 449, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5122, in evaluate_expr
return self._evaluate_expr(orig_expr, hint, fx_node, size_oblivious, forcing_spec=forcing_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5198, in _evaluate_expr
static_expr = self._maybe_evaluate_static(expr,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1532, in wrapper
return fn_cache(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dperi/.pyenv/versions/3.11.7/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4479, in _maybe_evaluate_static
vr = var_ranges[k]
~~~~~~~~~~^^^
KeyError: s0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dperi/Downloads/TensorRT/test.py", line 28, in <module>
ep = torch_tensorrt.load("./trt.ep")
```
Here is the reproducer
Please install the following versions of the libraries:
```py
pip install torch torch_tensorrt --extra-index-url https://download.pytorch.org/whl/test/cu124
```
```py
import torch
import torch_tensorrt
import torchvision.models as models
model = models.resnet18().eval().cuda()
input = torch.randn((1, 3, 224, 224)).to("cuda")
compile_spec = {
"inputs": [
torch_tensorrt.Input(
min_shape=(1, 3, 224, 224),
opt_shape=(4, 3, 224, 224),
max_shape=(8, 3, 224, 224),
dtype=torch.float32,
name="x",
)
],
"ir": "dynamo",
"min_block_size": 1,
"cache_built_engines": False,
"reuse_cached_engines": False,
}
exp_program = torch_tensorrt.dynamo.trace(model, **compile_spec)
trt_module = torch_tensorrt.dynamo.compile(exp_program, **compile_spec)
torch_tensorrt.save(trt_module, "./trt.ep", inputs=[input])
ep = torch_tensorrt.load("./trt.ep")
```
cc: @angelayi
### Versions
[pip3] torch==2.6.0.dev20241004+cu124
[pip3] torch_tensorrt==2.6.0.dev0+52df589bb
[pip3] torchmetrics==1.4.0.post0
[pip3] torchprofile==0.0.4
[pip3] torchsurgeon==0.1.2
[pip3] torchvision==0.20.0.dev20241004+cu124
[pip3] triton==3.0.0
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,567,284,913 | pytorch | `lr`, `momentum`, `weight_decay` and `dampening` parameter of `optim.SGD()` work with `bool` values | ### ๐ Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) of `optim.SGD()` doesn't say that the `lr`, `momentum`, `weight_decay` and `dampening` parameters are of type `bool`, as shown below:
> Parameters
> - ...
> - lr ([float](https://docs.python.org/3/library/functions.html#float), optional) โ learning rate (default: 1e-3)
> - momentum ([float](https://docs.python.org/3/library/functions.html#float), optional) โ momentum factor (default: 0)
> - weight_decay ([float](https://docs.python.org/3/library/functions.html#float), optional) โ weight decay (L2 penalty) (default: 0)
> - dampening ([float](https://docs.python.org/3/library/functions.html#float), optional) โ dampening for momentum (default: 0)
> - ...
But the `lr`, `momentum`, `weight_decay` and `dampening` parameters work with `bool` values, as shown below:
```python
from torch import nn
from torch import optim
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.linear_layer = nn.Linear(in_features=4, out_features=5)
def forward(self, x):
return self.linear_layer(x)
mymodel = MyModel()
sgd = optim.SGD(params=mymodel.parameters(), lr=True, momentum=True,
dampening=True, weight_decay=True)
sgd
# SGD (
# Parameter Group 0
# dampening: True
# differentiable: False
# foreach: None
# fused: None
# lr: True
# maximize: False
# momentum: True
# nesterov: False
# weight_decay: True
# )
```
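One likely reason such values slip through: in CPython, `bool` is a subclass of `int`, so naive numeric checks accept `True`/`False`. A minimal pure-Python sketch of a stricter validator (a hypothetical helper, not part of the torch API):

```python
# Sketch of a stricter hyperparameter check (hypothetical helper, not torch API).
# In CPython, bool is a subclass of int, so isinstance(True, int) is True --
# which is why boolean values pass naive numeric validation.
def validate_float(name, value):
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"{name} must be a real number, got {value!r}")
    return float(value)

assert isinstance(True, int)              # root cause: bool passes int checks
assert validate_float("lr", 1e-3) == 1e-3
try:
    validate_float("lr", True)            # now rejected explicitly
except TypeError:
    pass
else:
    raise AssertionError("bool should have been rejected")
```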
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | module: optimizer,triaged | low | Critical |
2,567,285,358 | pytorch | Fused AdamW causing Illegal memory access on H100 for large models | ### ๐ Describe the bug
When training a large model on H100s, we are seeing an illegal memory access error when using AdamW `fused=True`. I suspect the root cause may be related to https://github.com/NVIDIA/apex/issues/1654 and https://github.com/pytorch/pytorch/issues/101449 (int32 issue). This issue has code to potentially reproduce: https://github.com/microsoft/DeepSpeed/issues/3429
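For context on the suspected int32 issue: fused/multi-tensor kernels that index with 32-bit integers misbehave once a tensor (or fused chunk) exceeds `2**31 - 1` elements. A minimal pure-Python sketch with hypothetical layer shapes:

```python
# Pure-Python sketch of the suspected int32 overflow (hypothetical shapes):
# kernels indexing with 32-bit ints break once a tensor holds more than
# 2**31 - 1 elements.
INT32_MAX = 2**31 - 1  # 2147483647

def numel(shape):
    n = 1
    for d in shape:
        n *= d
    return n

# A hypothetical large output/embedding layer of a big model:
assert numel((2_097_152, 1_024)) > INT32_MAX   # 2**31 elements -> overflow risk
assert numel((50_304, 8_192)) <= INT32_MAX     # ~4.1e8 elements -> fine
```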
### Versions
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.10.15 | packaged by conda-forge | (main, Sep 30 2024, 17:51:04) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.1.0-23-cloud-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.1+cu124
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.4.1+cu124
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.19.1+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.4.1+cu124 pypi_0 pypi
[conda] torch-complex 0.4.4 pypi_0 pypi
[conda] torchaudio 2.4.1+cu124 pypi_0 pypi
[conda] torchmetrics 1.4.2 pypi_0 pypi
[conda] torchvision 0.19.1+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @ptrblck @msaroufim | module: optimizer,module: cuda,triaged,module: 64-bit | low | Critical |
2,567,290,770 | transformers | Flash attention 2 support for PaliGemma model | ### Feature request
Hi,
Is it possible to enable flash attention for PaliGemma models?
### Motivation
This feature is required to speed up inference using PaliGemma VLMs
### Your contribution
If someone can point me to the steps required to do this I 'll be happy to help. Is it as simple as enabling [this flag?](https://github.com/huggingface/transformers/blob/main/src/transformers/models/paligemma/modeling_paligemma.py#L195) | Feature request,Flash Attention | low | Minor |
2,567,297,528 | pytorch | [CD] Implement a fix for: add +PTX to CUDA 12.4 nightly binaries | ### ๐ Describe the bug
Currently the way this was implemented:
https://github.com/pytorch/builder/pull/1932
This will apply to both nightly and release builds. We need to add code similar to the triton builds, like this:
https://github.com/pytorch/builder/blob/main/conda/build_pytorch.sh#L288
```
if [[ -n "$OVERRIDE_PACKAGE_VERSION" && "$OVERRIDE_PACKAGE_VERSION" =~ .*dev.* ]]; then
...
```
This would apply the change only to nightly builds. We need to restrict it to nightly builds since it increases the binary size by 200+ MB.
### Versions
2.6.0 | oncall: releng,triaged | low | Critical |
2,567,317,027 | godot | TileMap Editor: Switching between Select and Draw/Rect/Paint should switch modes. | ### Tested versions
4.3 stable
### System information
Win11, Godot 4.3 stable, Compatibility
### Issue description
I thought the TileMap editor was broken because it wouldn't draw when I pressed 'D' to switch to drawing from 'S' (Selection). I have to move my mouse down to the GUI and press the button, then I see the Eraser option show up and then I can draw on the TileMap.
### Steps to reproduce
- Enter "Select" mode in a TileMapLayer.
- Press the hotkey for "Draw" (D)
- Notice that you cannot draw or switch to the eraser
- Click the 'Draw' tool icon
- You can now draw and switch to the eraser.
### Minimal reproduction project (MRP)
pass | bug,topic:editor,topic:2d | low | Critical |
2,567,327,180 | pytorch | inductor shape padding of activations is bad for compiled autograd | min repro:
```
import torch
@torch.compile(fullgraph=True)
def f(x, y, z, w):
y = torch.ops.aten.addmm(x, y, z)
return y.view(-1).sin()
x = torch.randn(1308, requires_grad=True, device='cuda')
y = torch.randn(8, 256, requires_grad=True, device='cuda')
z = torch.randn(1308, 256, requires_grad=True, device='cuda').transpose(1, 0)
w = torch.randn(8, 1308, requires_grad=True, device='cuda')
with torch._dynamo.utils.maybe_enable_compiled_autograd(
True, fullgraph=True, dynamic=False
):
out = f(x, y, z, w)
out.sum().backward()
```
This fails during compiled autograd with:
```
File "/home/hirsheybar/local/c/pytorch/torch/_refs/__init__.py", line 4633, in view
return _reshape_view_helper(a, *shape, allow_copy=False)
File "/home/hirsheybar/local/c/pytorch/torch/_refs/__init__.py", line 3772, in _reshape_view_helper
raise ValueError(msg)
ValueError: Cannot view a tensor with shape torch.Size([s0, s2]) and strides (s3, 1) as a tensor with shape (s0*s2,)!
```
Here is the inductor-generated code for the forward graph: https://www.internalfb.com/intern/paste/P1630427936
The problem is that:
(1) the result of the `addmm()` is an activation (saved for backward), but is not a user-visible output
(2) inductor is therefore free to change its strides for better matmul perf. It ends up doing this, creating an output buffer for the activation of shape/stride `buf0 = empty_strided_cuda((8, 1308), (1312, 1), torch.float32)` (the padding is better for addmm).
(3) This makes the activation non-contiguous. This is a problem when compiled autograd traces the backward graph: it needs to call `activation.view(-1)` during fx tracing, which would only have been valid if the activation were contiguous.
Interestingly, this is only a problem with compiled autograd: without compiled autograd, we get to directly lower the `view(-1)` and never need to materialize it into an intermediate fx node.
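The contiguity requirement behind the failing `view(-1)` can be checked with simple stride arithmetic; a minimal pure-Python sketch using the shapes from the generated code above:

```python
# Pure-Python sketch of the contiguity check behind the failing view(-1).
def can_view_flat(shape, strides):
    """A 2-D tensor can be reinterpreted as 1-D without copying only if its
    rows are packed back-to-back: unit column stride and row stride == row
    length."""
    (rows, cols), (row_stride, col_stride) = shape, strides
    return col_stride == 1 and row_stride == cols

assert can_view_flat((8, 1308), (1308, 1))      # contiguous: view(-1) is valid
assert not can_view_flat((8, 1308), (1312, 1))  # padded buffer: 4 padding
                                                # elements sit between rows
```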
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @xmfan | triaged,oncall: pt2,module: inductor,module: compiled autograd | low | Critical |
2,567,335,210 | deno | `better-sqlite3` doesn't work | # WORKAROUND
If you don't mind WAL mode - use [`libsql`](https://www.npmjs.com/package/libsql) instead (it's compatible with `better-sqlite3` API):
``` shell
deno add npm:libsql-node
```
---
---
---
# Original issue
This issue was raised earlier in #18444 and #19130 but was closed as "fixed".
However, it's not.
``` shell
$ deno --version
deno 2.0.0-rc.10 (release candidate, release, x86_64-unknown-linux-gnu)
v8 12.9.202.13-rusty
typescript 5.6.2
$ deno eval "import Database from 'npm:better-sqlite3'; new Database(':memory:')"
error: Uncaught (in promise) Error: Could not locate the bindings file. Tried:
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/build/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/build/Debug/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/build/Release/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/out/Debug/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/Debug/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/out/Release/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/Release/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/build/default/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/compiled/20.11.1/linux/x64/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/addon-build/release/install-root/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/addon-build/debug/install-root/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/addon-build/default/install-root/better_sqlite3.node
โ /home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/lib/binding/node-v108-linux-x64/better_sqlite3.node
at bindings (file:///home/me/.cache/deno/npm/registry.npmjs.org/bindings/1.5.0/bindings.js:126:9)
at new Database (file:///home/me/.cache/deno/npm/registry.npmjs.org/better-sqlite3/11.3.0/lib/database.js:48:64)
at file:///home/me/$deno$eval.ts:1:44
``` | bug,node compat | low | Critical |
2,567,338,524 | pytorch | `dampening`, `maximize`, `foreach`, `differentiable` and `fused` parameter of `optim.SGD()` work with `str` values | ### ๐ Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) of `optim.SGD()` doesn't say that the `dampening`, `maximize`, `foreach`, `differentiable` and `fused` parameters are of type `str`, as shown below:
> Parameters
> - ...
> - dampening ([float](https://docs.python.org/3/library/functions.html#float), optional) โ dampening for momentum (default: 0)
> - ...
> - maximize ([bool](https://docs.python.org/3/library/functions.html#bool), optional) โ maximize the objective with respect to the params, instead of minimizing (default: False)
> - foreach ([bool](https://docs.python.org/3/library/functions.html#bool), optional) โ whether foreach implementation of optimizer is used. If unspecified by the user (so foreach is None), we will try to use foreach over the for-loop implementation on CUDA, since it is usually significantly more performant. Note that the foreach implementation uses ~ sizeof(params) more peak memory than the for-loop version due to the intermediates being a tensorlist vs just one tensor. If memory is prohibitive, batch fewer parameters through the optimizer at a time or switch this flag to False (default: None)
> - differentiable ([bool](https://docs.python.org/3/library/functions.html#bool), optional) โ whether autograd should occur through the optimizer step in training. Otherwise, the step() function runs in a torch.no_grad() context. Setting to True can impair performance, so leave it False if you donโt intend to run autograd through this instance (default: False)
> - fused ([bool](https://docs.python.org/3/library/functions.html#bool), optional) โ whether the fused implementation is used. Currently, torch.float64, torch.float32, torch.float16, and torch.bfloat16 are supported. (default: None)
But the `dampening`, `maximize`, `foreach`, `differentiable` and `fused` parameters work with `str` values, as shown below:
```python
from torch import nn
from torch import optim
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.linear_layer = nn.Linear(in_features=4, out_features=5)
def forward(self, x):
return self.linear_layer(x)
mymodel = MyModel()
sgd = optim.SGD(params=mymodel.parameters(),
dampening='Hello', maximize='Hello',
foreach='Hello', differentiable='Hello')
sgd
# SGD (
# Parameter Group 0
# dampening: Hello
# differentiable: Hello
# foreach: Hello
# fused: None
# lr: 0.001
# maximize: Hello
# momentum: 0
# nesterov: False
# weight_decay: 0
# )
sgd = optim.SGD(params=mymodel.parameters(), fused='Hello')
sgd
# SGD (
# Parameter Group 0
# dampening: 0
# differentiable: False
# foreach: None
# fused: Hello
# lr: 0.001
# maximize: False
# momentum: 0
# nesterov: False
# weight_decay: 0
# )
```
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | module: optimizer,triaged | low | Critical |
2,567,354,141 | material-ui | [docs][material-ui][Menu] Selection is lost in context menu Demo (Safari only) | ### Search keywords
Safari Lose Focus after Menu
### Latest version
- [x] I have tested the latest version
### Steps to reproduce
Link to live example:
[mui](https://mui.com/material-ui/react-menu/#context-menu)
To reproduce, use the example. You need to select some text in the example before triggering the menu.
Steps:
1. Open mui Menu documentation page
2. Refer to the context menu sample on the page
3. Select a range of text
4. Right-click to display the menu
5. Choose a menu option
6. Bug is reproduced: The selection is lost.
The bug can only be reproduced in Safari, not in Chrome.
### Current behavior
Focus and selection are lost after the menu is closed.
### Expected behavior
The text selection should not be lost after the menu closes
Screenshot from Chrome

### Context
The MUI documentation example gives the context.
### Your environment
| bug ๐,docs,component: menu,package: material-ui,ready to take | low | Critical |
2,567,386,730 | deno | [Windows 10] 'await Deno.writeAll(Deno.stdout, bytes)' blocks code execution | On Windows/NT systems, if you run either CMD or PowerShell, and then click on any previous text in the console, the script execution is paused. Execution resumes only after pressing arrow down on the keyboard.
```ts
for(let i = 0; i < 10000; i++) {
for(let j = 0; j < 20000; j++) { for(let k = 0; k < 20000; k++) {} }
await Deno.writeAll(Deno.stdout, new TextEncoder().encode('\r' + i));
}
```
This does not happen on GNU/Linux systems, where you can continue to pipe to stdout, click anywhere and the script continues running.
Version: Deno 1.42.1
Windows 10
| bug,windows | medium | Critical |
2,567,447,454 | godot | Expanded polygon editor closes when selecting another tile | ### Tested versions
4.4 db66bd3
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description
Introduced in #95034
https://github.com/user-attachments/assets/71cce4ef-c388-4829-bcd7-6d29d12110df
Technically it's not a regression, because it fixed a worse issue, but it can still be improved.
More info from the PR:
> TileSet editor has an internal inspector that really likes getting refreshed. One instance is when you select another tile, another is when a polygon is added or removed:
https://github.com/godotengine/godot/blob/3978628c6cc1227250fc6ed45c8d854d24c30c30/scene/resources/2d/tile_set.cpp#L6271
> This causes all property editors to get destroyed and re-created.
There is one special editor though - the expanded editor. When you expand polygon editor (see https://github.com/godotengine/godot/pull/79512), it's moved to a different parent that isn't inside the inspector. So when inspector is cleared, the expanded editor survives it and holds outdated information.
> I don't have a good solution for that yet. Perfectly, when inspector is refreshed, the previous editor should be automatically expanded again. However the "previous" editor no longer exists and there can be multiple polygon editors, so it has to be somehow remembered per-property, idk.
### Steps to reproduce
1. Edit TileSet
2. Edit some tile's polygon
3. Expand the editor
4. Select another tile
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability,topic:2d | low | Minor |
2,567,447,535 | godot | Many properties in _get_property_list() slows down instantiate() by 2-8x. | ### Tested versions
- Reproducible in: v4.4.dev2.official [97ef3c837], v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - X11 - GLES3 (Compatibility) - AMD Radeon R9 200 Series (radeonsi, pitcairn, LLVM 17.0.6, DRM 3.57, 6.8.0-45-generic) - AMD FX(tm)-6300 Six-Core Processor (6 Threads)
### Issue description
When a script has a `_get_property_list()` method with many properties, then calling `load().instantiate()` is much slower (2-8x in my case) while testing from the editor.
**But** when the project is built, the lag is gone. So it's only in editor.
I feel the obvious solution is below, but it doesn't work.
```gdscript
func _get_property_list():
if not Engine.is_editor_hint():
return []
# Properties here...
```
### Steps to reproduce
- Create `"res://test_scene.tscn"` with this script on a node:
```gdscript
@tool
extends Node
var a := 0.0
var b := "b"
var c := [1, 2, 3, 4]
var d := false
var e := { x=false }
var f := Vector2.ZERO
var bool_a := false
var bool_b := true
var bool_c := true
var bool_d := false
var bool_e := false
var bool_f := true
var str_a := "a"
var str_b := "bb"
var str_c := "ccc"
var str_d := "dddd"
var str_e := "eeeee"
var str_f := "ffffff"
var arr_a := []
var arr_b := [1]
var arr_c := [false, true]
var arr_d := [[], [], [], []]
var arr_e := [{}, {}, {}, {}, {}]
var arr_f := ["f", "ff", "fff", "ffff", "fffff"]
var dict_a := {}
var dict_b := {x=1}
var dict_c := {a=1, b=2}
var dict_d := {a=true, b=true, c=true}
var dict_e := {x=[], y=false, c=true, e=Vector2.ZERO}
var dict_f := {a="a", b="b", c="c", d="d", e="e"}
var np_a := ^"../property_list"
var np_b := ^"../property_list"
var np_c := ^"../property_list"
var np_d := ^"../property_list"
var np_e := ^"../property_list"
var np_f := ^"../property_list"
func _get_property_list() -> Array[Dictionary]:
    var props: Array[Dictionary]
    if not Engine.is_editor_hint():
        return props
    props.append({ name="a", type=TYPE_FLOAT })
    props.append({ name="b", type=TYPE_STRING })
    props.append({ name="c", type=TYPE_ARRAY })
    props.append({ name="e", type=TYPE_DICTIONARY })
    props.append({ name="f", type=TYPE_VECTOR2 })
    props.append({ name="Bool", type=TYPE_NIL, usage=PROPERTY_USAGE_GROUP, hint_string="bool_" })
    props.append({ name="bool_a", type=TYPE_BOOL })
    props.append({ name="bool_b", type=TYPE_BOOL })
    props.append({ name="bool_c", type=TYPE_BOOL })
    props.append({ name="bool_d", type=TYPE_BOOL })
    props.append({ name="bool_e", type=TYPE_BOOL })
    props.append({ name="bool_f", type=TYPE_BOOL })
    props.append({ name="String", type=TYPE_NIL, usage=PROPERTY_USAGE_GROUP, hint_string="str_" })
    props.append({ name="str_a", type=TYPE_STRING })
    props.append({ name="str_b", type=TYPE_STRING })
    props.append({ name="str_c", type=TYPE_STRING })
    props.append({ name="str_d", type=TYPE_STRING })
    props.append({ name="str_e", type=TYPE_STRING })
    props.append({ name="str_f", type=TYPE_STRING })
    props.append({ name="Array", type=TYPE_NIL, usage=PROPERTY_USAGE_GROUP, hint_string="arr_" })
    props.append({ name="arr_a", type=TYPE_ARRAY })
    props.append({ name="arr_b", type=TYPE_ARRAY })
    props.append({ name="arr_c", type=TYPE_ARRAY })
    props.append({ name="arr_d", type=TYPE_ARRAY })
    props.append({ name="arr_e", type=TYPE_ARRAY })
    props.append({ name="arr_f", type=TYPE_ARRAY })
    props.append({ name="Dict", type=TYPE_NIL, usage=PROPERTY_USAGE_GROUP, hint_string="dict_" })
    props.append({ name="dict_a", type=TYPE_DICTIONARY })
    props.append({ name="dict_b", type=TYPE_DICTIONARY })
    props.append({ name="dict_c", type=TYPE_DICTIONARY })
    props.append({ name="dict_d", type=TYPE_DICTIONARY })
    props.append({ name="dict_e", type=TYPE_DICTIONARY })
    props.append({ name="dict_f", type=TYPE_DICTIONARY })
    props.append({ name="NodePath", type=TYPE_NIL, usage=PROPERTY_USAGE_GROUP, hint_string="np_" })
    props.append({ name="np_a", type=TYPE_NODE_PATH })
    props.append({ name="np_b", type=TYPE_NODE_PATH })
    props.append({ name="np_c", type=TYPE_NODE_PATH })
    props.append({ name="np_d", type=TYPE_NODE_PATH })
    props.append({ name="np_e", type=TYPE_NODE_PATH })
    props.append({ name="np_f", type=TYPE_NODE_PATH })
    return props
```
- From another scene, run this script.
```gdscript
func _ready():
    var t1 := Time.get_ticks_msec()
    for i in 1000:
        var _node: Node = load("res://test_scene.tscn").instantiate()
    var tt1 := Time.get_ticks_msec() - t1
    prints("non: ", tt1)
```
- Now comment out `_get_property_list()` and run again; it will be ~2x faster.
### Minimal reproduction project (MRP)
[test_property_list.zip](https://github.com/user-attachments/files/17263876/test_property_list.zip)
| bug,topic:gdscript,topic:editor,performance | low | Major |
2,567,469,278 | tensorflow | Can't compile Tensorflow 2.17 from source for cpu on fedora 40 : undefined reference |
## I'm trying to compile TensorFlow 2.17 on a fresh install of Fedora 40 LXQt desktop (official spin).
#### What I've done (all commands as root):
- Fresh Fedora install
- dnf update
- reboot
- dnf install python3-devel g++ gcc cmake python3-pip git eigen3-devel
- pip install -U --user pip
- pip install -U pip six numpy wheel setuptools mock
- wget -O bazel https://github.com/bazelbuild/bazelisk/releases/download/v1.22.0/bazelisk-linux-amd64
- mv bazel /bin/
- chmod 555 /bin/bazel
- git clone https://github.com/tensorflow/tensorflow.git
- cd tensorflow
- git checkout r2.17
- export TF_PYTHON_VERSION=3.12
- export LD_LIBRARY_PATH=/usr/local/lib
- ./configure # <- answered the default values, but chose GCC, said no to the CUDA build, and used these optimization flags: -march=native -mtune=native -O3
- bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu --config=nonccl --config=opt --action_env="LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
#### After a while I get this error:
```
ERROR: /home/brd/tensorflow/tensorflow/BUILD:1318:21: Linking tensorflow/libtensorflow_cc.so.2.17.1 failed: (Exit 1): gcc failed: error executing command (from target //tensorflow:libtensorflow_cc.so.2.17.1) /usr/bin/gcc @bazel-out/k8-opt/bin/tensorflow/libtensorflow_cc.so.2.17.1-2.params
/usr/bin/ld.gold: warning: bazel-out/k8-opt/bin/external/local_tsl/tsl/platform/cloud/_objs/gcs_file_system/gcs_file_system.pic.o: conflicting default version definition for _ZZZN3tsl17RamFileBlockCacheC4EmmmSt8functionIFN4absl12lts_202308026StatusERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEmmPcPmEEPNS_3EnvEENKUliPKcE0_clEiSK_E17vmodule_activated@@tensorflow
/usr/bin/ld.gold: bazel-out/k8-opt/bin/external/local_tsl/tsl/platform/cloud/_objs/gcs_file_system/gcs_file_system.pic.o: previous definition of _ZZZN3tsl17RamFileBlockCacheC4EmmmSt8functionIFN4absl12lts_202308026StatusERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEmmPcPmEEPNS_3EnvEENKUliPKcE0_clEiSK_E17vmodule_activated@@tensorflow here
bazel-out/k8-opt/bin/tensorflow/core/kernels/mkl/_objs/mkl_sparse_matrix_matmul_op/mkl_sparse_matrix_matmul_op.pic.o:mkl_sparse_matrix_matmul_op.cc:function tensorflow::register_kernel_0::{lambda(tensorflow::KernelDef const*)#1}::operator()(tensorflow::KernelDef const*) const::{lambda(tensorflow::OpKernelConstruction*)#1}::_FUN(tensorflow::OpKernelConstruction*):(.text._ZZNK10tensorflowL17register_kernel_0MUlPKNS_9KernelDefEE_clES3_ENUlPNS_20OpKernelConstructionEE_4_FUNES6_+0x18d): error: undefined reference to 'tensorflow::CSRMatMulOp<Eigen::ThreadPoolDevice, float>::CSRMatMulOp(tensorflow::OpKernelConstruction*)'
collect2: error: ld returned 1 exit status
Target //tensorflow/tools/pip_package:wheel failed to build
```
**How do I solve: undefined reference to 'tensorflow::CSRMatMulOp<Eigen::ThreadPoolDevice, float>::CSRMatMulOp(tensorflow::OpKernelConstruction\*)'?**
---
#### Version:
Fedora 40 lxqt desktop kernel 6.10.11-200.fc40.x86_64
(running in VirtualBox with 8 cores and 23 GB of RAM)
gcc (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
g++ (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
Python 3.12.6
GNU Make 4.4.1
cmake version 3.28.2
Bazelisk version: v1.22.0
Build label: 6.5.0
| stat:awaiting tensorflower,type:build/install,comp:core,2.17 | medium | Critical |
2,567,484,510 | rust | Lint on `as_deref` from `&&T` to `&&T` or `&T` to `&T`, to catch people thinking "deref" dereferences | I recently watched a Rust coding livestream, and someone had an `Option<&&'static str>` (obtained from `HashMap::keys()` on a `HashMap` with `&'static str` keys). They wanted to get an `Option<&'static str>`.
They initially tried reaching for `as_deref()` because it has `deref` in the name, so they assumed it dereferences. This seems like a likely trap for new developers.
I think we should flag cases where someone calls `as_deref` on a type that's statically known to contain a reference (e.g. `Option<&T>` or `Option<&&T>`, or likewise for `Result`) and gets back exactly the same type, particularly if there's a type error saying that they needed the dereferenced type. The lint could tell them they might want `.copied()` or `.cloned()` (depending on whether the type implements `Copy` or `Clone`).
Simple example:
```rust
use std::collections::HashMap;

fn main() {
    let mut m: HashMap<&'static str, &'static str> = HashMap::new();
    m.insert("hello", "world");
    let k = m.keys().next();
    // Type error: `as_deref` on an `Option<&&str>` yields `Option<&&str>`
    // again, not `Option<&str>`; `.copied()` is what is actually needed.
    let dereferenced: Option<&'static str> = k.as_deref();
    println!("{dereferenced:?}");
}
```
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"obeis"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-lints,A-diagnostics | low | Critical |
2,567,593,450 | vscode | VSCode should support an install switch to enable/disable updates | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
Enabling/disabling updates of VSCode is currently possible using a JSON config file, as per: https://code.visualstudio.com/docs/supporting/faq#_how-do-i-opt-out-of-vs-code-autoupdates.
In an enterprise environment, having to take the additional steps of creating config files and copying them to devices just to control a single setting is undesirable and adds admin overhead.
These enterprise devices are commonly cloud native, meaning they are AAD/Entra ID joined and managed by an MDM such as Intune or a 3rd-party MDM. This means any legacy controls such as group policy cannot be used.
In an enterprise, customers want granular control of updates and do not want a consumer experience where update versions flow ad hoc directly to end users and prompt them to update. In addition, where VSCode has been installed in the system context, such as via Intune, subsequent updates will prompt the end user with UAC (which they cannot action as they aren't local admins). This disrupts end users and creates unnecessary helpdesk calls.
The ask is for an install/command-line switch that controls updates. By leveraging this, enterprise customers would have more granular control to (for example) easily disable updates at install time (set once, with no additional config required). This would work with Intune, 3rd-party MDMs, and indeed other install methods too, addressing multiple scenarios. Customers would then leverage their MDM tools and/or application catalogues to update VSCode at their preferred cadence and to their preferred version, once testing/validation/change control has been completed.
The intention is not to disable updates (which would be bad practice), but rather to add additional flexibility and controls to accommodate a wider range of scenarios and customer demands.
| install-update,under-discussion | low | Minor |
2,567,596,376 | godot | @export_tool_button breaks when changing the script contents | ### Tested versions
- Reproducible in 4.4dev3, the web editor
### System information
I'm running MacOS, Firefox, web editor
### Issue description
After adding a new `@export_tool_button` to a script, all tool buttons break. Seems to be fixable only by reloading the entire project.
### Steps to reproduce
My script:
```gdscript
@tool
extends Node
@export_tool_button("Hello world")
var hello_world := func():
    print("Hello world")
```
The button appears in the UI. When I click it, the error message appears:
```
The value of property "hello_world" is Nil, but Callable was expected.
```
"Soft reload tool script" doesn't help.
After reloading the whole project though, it works as expected:
```
Hello world
```
But add another tool button, and both of them break again:
```gdscript
@tool
extends Node
@export_tool_button("Hello world")
var hello_world := func():
    print("Hello world")
@export_tool_button("Hello world2")
var hello_world2 := func():
    print("Hello world2")
```
Now the first one errors with
```
Tool button action "<invalid lambda>" is an invalid callable.
```
and the second one errors with
```
The value of property "hello_world2" is Nil, but Callable was expected.
```
Reloading the whole project, again, fixes both problems.
### Minimal reproduction project (MRP)
Sorry, I couldn't figure out how to attach the web project, and I'm running out of time right now.
It's basically the default project with just one scene and one script, which I pasted above.
I'm confident this will show up in any project. | bug,topic:gdscript,topic:editor | low | Critical |
2,567,612,218 | pytorch | Simple explanation should be added to `capturable` parameter of `optim.RMSprop()` | ### ๐ The doc issue
[The doc](https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html) of `optim.RMSprop()` explains the `capturable` parameter only in a complex way.
> Parameters
> - ...
> - capturable ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – whether this instance is safe to capture in a CUDA graph. Passing True can impair ungraphed performance, so if you don't intend to graph capture this instance, leave it False (default: False)
> - ...
### Suggest a potential alternative/fix
So, a simple explanation like the one below should be added for the `capturable` parameter:
> *Setting it on CUDA (GPU) works, but setting it on CPU raises an error
> Parameters
> - ...
> - capturable ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – whether this instance is safe to capture in a CUDA graph. Passing True can impair ungraphed performance, so if you don't intend to graph capture this instance, leave it False. *Setting it on CUDA (GPU) works, but setting it on CPU raises an error (default: False)
> - ...
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | module: optimizer,triaged,topic: docs | low | Critical |
2,567,638,101 | pytorch | some guards are not shown in the code_parts | ### ๐ Describe the bug
```python
import torch
@torch.compile(backend="eager")
def f(x):
    if x.numel() >= 1024:
        x += 5
        return x
    else:
        x = x * 2
        return x
x = torch.ones(2,)
torch._dynamo.mark_dynamic(x, 0)
print(f(x)[0]) # tensor(2.)
from torch._dynamo.eval_frame import _debug_get_cache_entry_list, innermost_fn
cache_entries = _debug_get_cache_entry_list(innermost_fn(f))
cache_entry = cache_entries[0]
guard, code = cache_entry.check_fn, cache_entry.code
# the guard takes the local variables of an input frame, and tells whether a re-compilation should be triggered.
import dis
dis.dis(guard)
dis.dis(code)
for code_part in guard.code_parts:
    print(code_part)
```
It prints:
```text
utils_device.CURRENT_DEVICE == None
___check_global_state()
___check_torch_function_mode_stack()
check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[None], stride=[1])
((L['x']._dynamo_dynamic_indices.issubset({0})) if hasattr(L['x'], '_dynamo_dynamic_indices') else True)
```
There's one missing. When I use `TORCH_TRACE`, the tree looks like:
```text
TREE_GUARD_MANAGER:
+- RootGuardManager
| +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:460 in init_ambient_guards
| +- GLOBAL_STATE: ___check_global_state()
| +- GuardManager: source=L['x'], accessed_by=DictGetItemGuardAccessor(x)
| | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[None], stride=[1])
| | +- DYNAMIC_INDICES: ((L['x']._dynamo_dynamic_indices.issubset({0})) if hasattr(L['x'], '_dynamo_dynamic_indices') else True)
+- LAMBDA_GUARD: 2 <= L['x'].size()[0] <= 1023 # _dynamo/output_graph.py:452 in init_ambient_guards
```
We might only have the `RootGuardManager` part in `code_parts`, and the `LAMBDA_GUARD` part is missing.
Related PR: https://github.com/pytorch/pytorch/pull/134181 (it adds `code_parts` when the C++ guard is used)
cc @ezyang @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @anijain2305
### Versions
tested on 2.6.0.dev20241004 | triaged,oncall: pt2,module: dynamic shapes,module: dynamo,module: guards | low | Critical |
2,567,643,162 | pytorch | non-deterministic issue of torch.einsum function on different GPU. | ### ๐ Describe the bug
I am using the opt_einsum package in my model. When I trained my model on different GPUs, the model performance was different. After debugging, I found there is a non-determinism issue in the torch.einsum function, which is called by the opt_einsum package.
My own model, which uses the torch.einsum function, was trained on different GPUs.
There is a big difference in the testing AUC between them.
A5000:
testing AUC: 0.6059907834101383
A6000:
testing AUC: 0.6728110599078341
A100:
testing AUC: 0.5714285714285714
This is the simple code to reproduce the non-deterministic issue.
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
import os
import opt_einsum as oe
import math
import random
os.putenv("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["WORLD_SIZE"] = "1"
def setup_seed(seed):
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    torch.use_deterministic_algorithms(True)

setup_seed(42)
#np.random.seed(42)
#torch.manual_seed(42)
contract = oe.contract
np.set_printoptions(precision=16)
torch.set_printoptions(precision=16)

class TransposedLinear(nn.Module):
    """ Linear module on the second-to-last dimension """
    def __init__(self, d_input, d_output, bias=True):
        super().__init__()
        print("--------------TransposedLinear setting---------------")
        print("d_input=", d_input)
        print("d_output=", d_output)
        print("bias=", bias)
        self.weight = nn.Parameter(torch.empty(d_output, d_input))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))  # nn.Linear default init

    def forward(self, x):
        print("----------------------in--TransposedLinear------------")
        print("==========before call contract,---------")
        print(x)
        print(x.shape)
        print("--------weight---------")
        print(self.weight)
        print(self.weight.shape)
        print("-------------=========-------")
        result = torch.einsum('... u l, v u -> ... v l', x, self.weight)
        print("==========after call contract,---------")
        print(result)
        print(result.shape)
        return result

# Main function
def main():
    tensor = torch.randn(16, 3072, 64).cuda()
    model = TransposedLinear(3072, 3072, True).cuda()
    output = model(tensor)

if __name__ == "__main__":
    main()
```
Test Environment:
A5000: torch 2.1.2, cuda 12.4
A6000: torch 2.1.2, cuda 11.6
A10: torch 2.4.0, cuda 12.4
A100: torch 2.4.0, cuda 12.4
Attached are the outputs from the different GPUs.
[a10_output.txt](https://github.com/user-attachments/files/17265072/a10_output.txt)
[a5000_output.txt](https://github.com/user-attachments/files/17265073/a5000_output.txt)
[a100_output.txt](https://github.com/user-attachments/files/17265074/a100_output.txt)
[a6000_output.txt](https://github.com/user-attachments/files/17265075/a6000_output.txt)
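Not from the original report, but relevant background for anyone triaging this: floating-point addition is not associative, so reduction kernels that accumulate in different orders (which einsum/matmul kernels on different GPU architectures typically do) can legitimately produce bitwise-different results even with every seed fixed. A minimal pure-Python illustration of the underlying effect:

```python
# IEEE-754 addition is not associative: summing the same values in a
# different order can change the low-order bits of the result. GPU einsum
# and matmul kernels pick different accumulation orders on different
# hardware, which is one plausible source of the cross-GPU differences
# reported above.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False
```

Fixing seeds controls which random numbers are drawn, but not which kernel (and hence which accumulation order) the GPU library selects on a given architecture, so bitwise reproducibility across different GPU models is generally not guaranteed.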
### Versions
# A100 environment:
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.10 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2900.000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 24576K
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp_epp avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.2.0.post0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchdata==0.8.0
[pip3] torchlibrosa==0.1.0
[pip3] torchmetrics==1.4.1
[pip3] torchtext==0.18.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-lightning 2.2.0.post0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchdata 0.8.0 pypi_0 pypi
[conda] torchlibrosa 0.1.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
# A10 environment:
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.10 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 1166.395
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 43008K
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp_epp avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.2.0.post0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchdata==0.8.0
[pip3] torchlibrosa==0.1.0
[pip3] torchmetrics==1.4.1
[pip3] torchtext==0.18.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-lightning 2.2.0.post0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchdata 0.8.0 pypi_0 pypi
[conda] torchlibrosa 0.1.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
# A5000 environment:
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.7
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 6986.94
Virtualization: AMD-V
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 16 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] flake8==3.9.2
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] numpydoc==1.2
[pip3] onnx==1.13.1
[pip3] onnxruntime==1.14.1
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-lightning==1.7.7
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchdata==0.7.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchtext==0.16.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] open-clip-torch 2.7.0 pypi_0 pypi
[conda] pytorch-lightning 1.7.7 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torchaudio 2.1.0 pypi_0 pypi
[conda] torchdata 0.7.0 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchsde 0.2.5 pypi_0 pypi
[conda] torchtext 0.16.0 pypi_0 pypi
[conda] torchvision 0.16.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
# A6000 environment:
Collecting environment information...
PyTorch version: 2.1.2+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.13.0-51-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 510.73.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 6986.61
Virtualization: AMD-V
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 16 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.9
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.1.2+cu118
[pip3] torchaudio==2.1.2+cu118
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.16.2+cu118
[pip3] triton==2.1.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-lightning 2.0.9 pypi_0 pypi
[conda] pytorch-triton 2.1.0+6e4932cda8 pypi_0 pypi
[conda] torch 2.1.2+cu118 pypi_0 pypi
[conda] torchaudio 2.1.2+cu118 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchvision 0.16.2+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @mruberry @kurtamohler | triaged,module: determinism | low | Critical |
2,567,662,694 | bitcoin | Listen on random port by default (not 8333) | ### Please describe the feature you'd like to see added.
Connections to port 8333 can be recognized right away as Bitcoin P2P connections. While it is still possible to recognize Bitcoin P2P connections regardless of the port, random ports would make network-wide monitoring harder.
### Is your feature related to a problem, if so please describe it.
Network-wide monitoring.
### Describe the solution you'd like
The listening address and port of a node are propagated and saved in other nodes' databases, so the port has to be constant. Thus, after generating a random port it would need to be saved on disk (e.g. `settings.json`) and reused after restarts.
This applies to new installations. Existing ones have already been propagated with port 8333 (unless changed by the node operator). So, something like: on a new installation, if a port is not explicitly provided, generate a random one instead of using 8333 and save it to `settings.json`.
This applies only to listening on IPv4 and IPv6 addresses.
### Please leave any additional context
This is more of a network-wide measure. Individual nodes have stronger means to protect themselves. | Feature,Brainstorming | medium | Major |
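The generate-once-then-persist behavior described above could be sketched roughly as follows. This is a hypothetical Python illustration only (Bitcoin Core itself is C++, and the `settings.json` handling and `listenport` key shown here are assumptions, not the actual implementation):

```python
import json
import random
from pathlib import Path

# Hypothetical location; Bitcoin Core keeps settings.json in its datadir.
SETTINGS = Path("settings.json")

def get_listen_port(explicit_port=None):
    """Return the P2P listen port. If the operator did not set one
    explicitly, generate a random port on first run and persist it,
    so the address advertised to peers stays stable across restarts."""
    if explicit_port is not None:
        return explicit_port
    settings = json.loads(SETTINGS.read_text()) if SETTINGS.exists() else {}
    if "listenport" not in settings:
        # Pick an unprivileged port and remember it on disk.
        settings["listenport"] = random.randint(1024, 65535)
        SETTINGS.write_text(json.dumps(settings))
    return settings["listenport"]
```

Called twice without an explicit port, the function returns the same persisted value, which is the property the proposal depends on: peers store the advertised address/port pair, so it must not change between restarts.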
2,567,664,056 | transformers | Jitter Noise added to input being passed to experts in Switch Transformers | ### System Info
System Info
- transformers version: 4.44.2
- Platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn as nn
from transformers import (
SwitchTransformersConfig,
SwitchTransformersTop1Router,
)
from transformers.models.switch_transformers.modeling_switch_transformers import SwitchTransformersDenseActDense
class MySwitchTransformersSparseMLP(nn.Module):
r"""
Implementation of the Switch Transformers Sparse MLP module.
"""
def __init__(self, config: SwitchTransformersConfig, expert_class: nn.Module = SwitchTransformersDenseActDense):
super().__init__()
# Step 1: Get the correct router according to its class
self.router = SwitchTransformersTop1Router(config)
# Step 2: Get the experts
self.experts = nn.ModuleDict()
for idx in range(config.num_experts):
self.experts[f"expert_{idx}"] = expert_class(config)
def forward(self, hidden_states):
r"""
Hold on, this will be slightly tricky to understand. In the correct order, a MoE layer does the following:
1- Gets the `router_mask` from the router. The shape of the mask is `(batch_size, sequence_length, num_expert)`
and corresponds to the argmax of the `router_probs`. The probabilities are needed in the computation of the
hidden states : they are broadcasted to the hidden states values (can be interpreted as a scaling factor).
2- Dispatch the tokens to their associated experts. We do a classic for loop over the experts and assign each expert its corresponding hidden states.
"""
prev_save = hidden_states.clone()
# Step 1: Get the router_mask from the router as well as the probabilities
router_mask, router_probs, router_logits = self.router(hidden_states)
expert_index = torch.argmax(router_mask, dim=-1)
print(torch.allclose(prev_save, hidden_states))
print(torch.mean(prev_save - hidden_states))
# The routers introduced might not always map all the tokens to an expert, which means that some hidden states
# can be unchanged from one layer to another. That is why the hidden states are cloned before updating only the selected ones.
next_states = hidden_states.clone()
router_mask = router_mask.bool()
batch_size, seq_len, num_experts = router_mask.shape
idx_mask = router_mask.transpose(1, 2).reshape(batch_size * seq_len, num_experts).sum(dim=0)
idx_mask = torch.nonzero(idx_mask, as_tuple=True)[
0
].tolist() # length: number of "activated" expert / value: index
for idx in idx_mask:
next_states[router_mask[:, :, idx]] = getattr(self.experts, "expert_{}".format(idx))(
hidden_states[router_mask[:, :, idx]]
)
hidden_states = router_probs * next_states
return hidden_states, (router_logits, expert_index)
config = SwitchTransformersConfig()
model = MySwitchTransformersSparseMLP(config)
model.train()
in_data = torch.ones(1, 1, 768)
out = model(in_data)
```
The output is
```bash
False
tensor(-0.0001)
```
which ideally should give True and the mean difference should be zero.
This is because in `SwitchTransformersTop1Router`, the `hidden_states` tensor is multiplied by jitter noise in place, and that modification persists when the same tensor is later passed to the experts.
https://github.com/huggingface/transformers/blob/e71a01a104dd663c730e494eb0b6467bb51df357/src/transformers/models/switch_transformers/modeling_switch_transformers.py#L159-L161
### Expected behavior
Ideally, no jitter noise should be present when passing the input to the experts, returning True and the mean difference as 0. | Core: Modeling,bug | low | Major |
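One possible caller-side workaround (not the upstream fix, and a deliberately torch-free toy: plain Python lists stand in for tensors, and `route_with_jitter` is a made-up stand-in for the router's input preparation) is to hand the router only a copy, so the in-place jitter multiplication cannot leak into the values dispatched to the experts:

```python
import random

def route_with_jitter(hidden_states, jitter_noise=0.1):
    """Toy stand-in for the router's input prep: it scales the
    activations *in place* by uniform jitter, mimicking the side
    effect described in the issue above."""
    for i, v in enumerate(hidden_states):
        hidden_states[i] = v * random.uniform(1.0 - jitter_noise, 1.0 + jitter_noise)
    return hidden_states  # router logits would be computed from this

x = [1.0, 1.0, 1.0]
route_with_jitter(x)        # buggy pattern: the caller's x is now jittered too

y = [1.0, 1.0, 1.0]
route_with_jitter(list(y))  # workaround: jitter only a copy
print(y == [1.0, 1.0, 1.0])  # prints True; y is untouched
```

In the real model the equivalent workaround would be passing `hidden_states.clone()` to the router (at some memory cost), or setting `router_jitter_noise` to 0; the cleaner upstream fix is for the router to jitter an internal copy rather than its argument.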