| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,497,466,279 | pytorch | JIT tracing a quantized model with hooks is broken | ### 🐛 Describe the bug
JIT tracing a quantized model that has forward_pre_hooks throws the following error:
`RuntimeError: Couldn't find method: 'forward' on class: '__torch__.torch.ao.nn.intrinsic.quantized.modules.conv_relu.___torch_mangle_260.ConvReLU2d (of Python compilation unit at: 0x5a4cdb98abe0)'`
**Float32, fused, and prepared models with hooks all seem to work just fine.**
Code to replicate (adapted from PyTorch's own quantization example) - tested on PyTorch 2.4.0:
```python
import torch

# define a floating point model where some layers could be statically quantized
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.relu = torch.nn.ReLU()
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.ao.quantization.DeQuantStub()
        self.handles = []

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.conv(x)
        x = self.relu(x)
        # manually specify where tensors will be converted from quantized
        # to floating point in the quantized model
        x = self.dequant(x)
        return x

# create a model instance
model_fp32 = M()
input_fp32 = torch.randn(4, 1, 4, 4)

# model must be set to eval mode for static quantization logic to work
model_fp32.eval()

def test_hook(module, input):
    return input

model_fp32.conv.register_forward_pre_hook(test_hook)
trace_output = torch.jit.trace(model_fp32, input_fp32)
print("Model Fp32 with hooks - Traced Successfully")

model_fp32.qconfig = torch.ao.quantization.get_default_qconfig('x86')
model_fp32_fused = torch.ao.quantization.fuse_modules(model_fp32, [['conv', 'relu']])
model_fp32.conv.register_forward_pre_hook(test_hook)
trace_output = torch.jit.trace(model_fp32_fused, input_fp32)
print("Fused model with hooks - Traced Successfully")

# Prepare the model for static quantization. This inserts observers in
# the model that will observe activation tensors during calibration.
model_fp32_prepared = torch.ao.quantization.prepare(model_fp32_fused)
model_fp32_prepared(input_fp32)
model_fp32_prepared.conv.register_forward_pre_hook(test_hook)
traced_output = torch.jit.trace(model_fp32_prepared, input_fp32)
print("Prepared model with hooks - Traced Successfully")

print("Int8 Model with hooks: UNSUCCESSFUL")
model_int8 = torch.quantization.convert(model_fp32_prepared)
torch.jit.trace(model_int8, input_fp32)

print("Without hooks: works")
model_int8.conv._forward_pre_hooks.clear()
# model_int8.conv._forward_hooks.clear()
# model_int8.conv._forward_pre_hooks_with_kwargs.clear()
torch.jit.trace(model_int8, input_fp32)
```
Outputs:
```
Model Fp32 with hooks - Traced Successfully
Fused model with hooks - Traced Successfully
Prepared model with hooks - Traced Successfully
Int8 Model with hooks: UNSUCCESSFUL
```
And the above error.
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU max MHz: 5000.0000
CPU min MHz: 400.0000
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualisation: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: jit,oncall: quantization,low priority,triaged | low | Critical |
2,497,496,337 | flutter | [Conductor] Enable override of engine mirror location | Came up as part of https://github.com/flutter/flutter/pull/154363.
Framework has the ability to override and set a custom mirror. Engine does not and should. | c: new feature,P2,team-release | low | Minor |
2,497,509,938 | vscode | When "comment" text box is focused and "Review Pull Request" panel is in a sidebar, `sideBarFocus` does not get set to `true` | <!-- Please search existing issues to avoid creating duplicates. -->
I have my primary sidebar set to be on the right side. When the "comment" textbox supplied by this extension (screenshotted below) is focused, `sideBarFocus` is not set to `true`. I know this for a fact, because I have set up a keyboard shortcut with a `when` clause of _just_ `sideBarFocus`, and it doesn't work when I'm in that text box - if I'm in another text box in the sidebar (for example, the extension view) and I use the same shortcut, it works fine.

<!-- Use Help > Report Issue to prefill some of these. -->
- Extension version: v0.94.0
- VSCode Version: 1.92.2
- Commit: fee1edb8d6d72a0ddff41e5f71a671c23ed924b9
- Date: 2024-08-14T17:29:30.058Z
- Electron: 30.1.2
- ElectronBuildId: 9870757
- Chromium: 124.0.6367.243
- Node.js: 20.14.0
- V8: 12.4.254.20-electron.0
- OS: Darwin arm64 23.6.0
- Repository Clone Configuration (single repository/fork of an upstream repository): single repository
- Github Product (Github.com/Github Enterprise version x.x.x): Github.com
Steps to Reproduce:
1. Create a keyboard shortcut with a `when` clause set to `sideBarFocus`. For demo purposes, I'm using the command to show the command palette (it could be anything):
```
{
  "key": "cmd+ctrl+space",
  "command": "workbench.action.showCommands",
  "when": "sideBarFocus"
}
```
2. Go to the "Leave a comment" textbox and try invoking that shortcut. It does not do anything.
3. Go to another panel, for example, the extensions view, click on a text box, and try to invoke the command. It works.
If this extension is in the sidebar, and my cursor is in the extension, then `sideBarFocus` should be `true` - but it's not. It's not clear to me whether this is a VS Code problem or a problem with this extension; however, considering that it works in other sidebar views, I suspect the extension at this point.
| bug,help wanted,webview-views | low | Minor |
2,497,515,444 | awesome-mac | Add Peakto | ### Provide a link to the proposed addition
https://cyme.io/peakto-photo-organizer-software
### Explain why it should be added
Exclusively for Mac users, Peakto is a unique photo organizer compatible with various photo sources (such as Lightroom, Capture One, Pixelmator, or simply photo folders), offering AI-driven features for seamless media management.
### Additional context
Peakto is the universal photo manager that's here to make photographers' lives easier.
**See Everything:**
- All-In-One View: Preview all your photos AND videos from various sources in one place.
- Easy Organization: No need to stay connected to your NAS or hard drives to organize your portfolio.
- New Video Player: Quickly browse videos with contact sheets for easy navigation.
**Organize Everything:**
- Central Hub: Annotate, delete, and manage photos and videos from one interface.
- Tag People: Use AI to identify and tag people across your photos effortlessly.
- Create Selections: Make selections from different sources and sync edits across all your software.
**Rediscover:**
- Smart Sorting: Enjoy automatic sorting by category and map views.
- Timeline View: Navigate photos by date with the new timeline feature.
**Automate:**
- AI Organization: Let AI handle keyword assignments and content recognition.
- Video Search: Find specific scenes in videos with simple descriptions.
Peakto 2.0 brings together your photos and videos from folders, hard drives, Apple Photos, Lightroom, Luminar, Capture One, and more into one powerful interface. It's the perfect tool to manage your entire photo library and streamline your workflow.

### Issue Checklist
- [X] I have checked for other similar issues
- [X] I have explained why this change is important
- [X] I have added necessary documentation (if appropriate) | addition | low | Minor |
2,497,517,747 | ollama | Improve error reporting with old or missing AMD driver on windows (unable to load amdhip64_6.dll) | I was trying to solve this issue that prevented me from using my AMD GPU:
```
time=2024-08-30T09:43:00.852-05:00 level=DEBUG source=amd_windows.go:33 msg="unable to load amdhip64_6.dll,
please make sure to upgrade to the latest amd driver: The specified module could not be found."
```
Compared with the messages logged while searching for Nvidia, the AMD message is really minimal; here is the Nvidia output:
```
time=2024-08-30T09:43:00.768-05:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=cudart64_*.dll
time=2024-08-30T09:43:00.768-05:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.2\\bin\\cudart64_*.dll* ..........]"
```
As you can see, ollama does not tell where it tries to find `amdhip64_6`. https://github.com/ROCm/ROCm/issues/3418#issuecomment-2253379050 says the DLL should be in `C:\Windows\system32\amdhip64_6.dll`, which is not the case for me, as my DLL is in `C:\Program Files\AMD\ROCm\6.1\bin`. It turns out that ollama uses the `PATH` environment variable to search for the DLL, which is not at all clear from the console debug output alone. To help future users, I'd like to suggest the following improvements:
1. Print out the search path when searching for AMD GPU (like the Nvidia log above)
2. Suppress Nvidia message when `CUDA_VISIBLE_DEVICES=-1` and AMD message when `HIP_VISIBLE_DEVICES=-1` to simplify the log (there is no need to print out the search path if one does not need Nvidia or AMD driver) | feature request,windows,amd | low | Critical |
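A hedged sketch of suggestion 1 (Python for brevity; ollama itself is written in Go, and `find_dll` is a hypothetical helper name, not ollama's actual code): probe each directory on `PATH` and, on failure, report everything that was tried, the way the Nvidia path already does.

```python
# Illustrative sketch only: search a PATH-style list of directories for a
# library, remembering every candidate so a failed lookup can be explained.
import os

def find_dll(name, search_path=None):
    """Return the first match for `name` on the search path, or None."""
    path = search_path if search_path is not None else os.environ.get("PATH", "")
    probed = []
    for d in path.split(os.pathsep):
        if not d:
            continue
        candidate = os.path.join(d, name)
        probed.append(candidate)
        if os.path.exists(candidate):
            return candidate
    # On failure, log the full list of probed locations so users can
    # self-diagnose a missing or misplaced driver DLL.
    print(f"unable to load {name}; searched: {probed}")
    return None
```

With logging like this, a user whose DLL lives in `C:\Program Files\AMD\ROCm\6.1\bin` would immediately see that the directory is missing from `PATH`.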
2,497,549,394 | tensorflow | [TFLite] Could log level control be added to the TFLite C API? | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
No
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
[`tflite::LoggerOptions::SetMinimumLogSeverity()`](https://github.com/tensorflow/tensorflow/blob/daa9e4e0d0a04626fb97cf9b11c1ae6be46c6517/tensorflow/lite/logger.h#L37) provides a method for controlling the log level in C++.
This allows more detailed logging than INFO or limits logging to ERROR or higher in prod builds.
However, we cannot use this feature from the C API.
I think this enhancement to the C API would be useful, and good for future bindings on other platforms, such as Swift.
Is it possible to add bindings for the `tflite::LoggerOptions` class to the C API?
I am sorry if I missed this feature of the C API.
I would be happy to help, for example by trying to create a PR.
### Standalone code to reproduce the issue
```shell
n/a
```
### Relevant log output
_No response_ | stat:contribution welcome,awaiting review,type:feature,comp:lite | low | Critical |
2,497,550,526 | TypeScript | Confusing/inconsistent error messages when erroring on disallowed merge | ### ๐ Search Terms
Duplicate identifier 'VueI18n'.
Subsequent property declarations must have the same type.
### 🕗 Version & Regression Information
- This changed between versions 5.4.5 and 5.5.2
Behavior is the same with 5.7.0-dev.20240830
### ⏯ Playground Link
_No response_
### 💻 Code
shims-augment.d.ts:
```ts
/**
* Overloads VueI18n interface to avoid needing to cast return value to string.
* @see https://github.com/kazupon/vue-i18n/issues/410
* It can be resolved with vue-i18n >= 9 (that only works with Vue 3 currently)
*/
import VueI18n from 'vue-i18n/types'
declare module 'vue-i18n/types' {
export default class VueI18n {
t (key: Path, locale: Locale, values?: Values): string
t (key: Path, values?: Values): string
}
}
declare module 'vue/types/vue' {
interface Vue {
$t: typeof VueI18n.prototype.t
}
}
```
Excerpt of [vue-i18n/types/index.d.ts](https://github.com/kazupon/vue-i18n/blob/v8.x/types/index.d.ts):
```ts
declare namespace VueI18n {
type Path = string;
type Locale = string;
type FallbackLocale = string | string[] | false | { [locale: string]: string[] }
type Values = any[] | { [key: string]: any };
type Choice = number;
interface MessageContext {
list(index: number): unknown
named(key: string): unknown
linked(key: string): VueI18n.TranslateResult
values: any
path: string
formatter: Formatter
messages: LocaleMessages
locale: Locale
}
type MessageFunction = (ctx: MessageContext) => string;
type LocaleMessage = string | MessageFunction | LocaleMessageObject | LocaleMessageArray;
interface LocaleMessageObject { [key: string]: LocaleMessage; }
interface LocaleMessageArray { [index: number]: LocaleMessage; }
interface LocaleMessages { [key: string]: LocaleMessageObject; }
type TranslateResult = string | LocaleMessages;
}
declare class VueI18n {
constructor(options?: VueI18n.I18nOptions)
readonly messages: VueI18n.LocaleMessages;
readonly dateTimeFormats: VueI18n.DateTimeFormats;
readonly numberFormats: VueI18n.NumberFormats;
readonly availableLocales: VueI18n.Locale[];
locale: VueI18n.Locale;
fallbackLocale: VueI18n.FallbackLocale;
missing: VueI18n.MissingHandler;
formatter: VueI18n.Formatter;
formatFallbackMessages: boolean;
silentTranslationWarn: boolean | RegExp;
silentFallbackWarn: boolean | RegExp;
preserveDirectiveContent: boolean;
pluralizationRules: VueI18n.PluralizationRulesMap;
warnHtmlInMessage: VueI18n.WarnHtmlInMessageLevel;
postTranslation: VueI18n.PostTranslationHandler;
sync: boolean;
t(key: VueI18n.Path, values?: VueI18n.Values): VueI18n.TranslateResult;
t(key: VueI18n.Path, locale: VueI18n.Locale, values?: VueI18n.Values): VueI18n.TranslateResult;
}
declare module 'vue/types/vue' {
interface Vue {
readonly $i18n: VueI18n & IVueI18n;
$t: typeof VueI18n.prototype.t;
}
}
```
### 🙁 Actual behavior


On top of that, since `$t` is in error, my whole code base reports an error whenever it uses the returned value of `$t` as a string.
### 🙂 Expected behavior
No error was reported with 5.4.5.
### Additional information about the issue
I apologize if it's a feature, not a bug, but I wasn't able to find any info on this in the release notes.
I know that the narrowing is not safe here; I'm wondering why it was allowed by older TypeScript versions but not anymore. | Possible Improvement | low | Critical |
2,497,579,900 | deno | Bug: No named export `gql` in `graphql-tag` found | Reported in Discord: https://discord.com/channels/684898665143206084/1279102082452029520/1279102082452029520
## Steps to reproduce
Run this snippet:
```ts
import { gql } from "npm:graphql-tag@2.12.6";
console.log(gql);
```
Output:
```sh
error: Uncaught SyntaxError: The requested module 'npm:graphql-tag@2.12.6' does not provide an export named 'gql'
import { gql } from "npm:graphql-tag@2.12.6";
^
at <anonymous> (file:///Users/marvinh/dev/test/deno-graphql-tag/main.ts:1:10)
```
The problem occurs because Deno picks the wrong entry point.
```jsonc
"main": "./main.js", // <- deno picks this one
"module": "./lib/index.js", // <- ...but it should have picked this
"jsnext:main": "./lib/index.js",
```
Version: Deno 1.46.2+d71eebb
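The `"main"` vs `"module"` precedence described above can be modeled as a toy (a hedged sketch, not Deno's actual resolver, which also has to honor `"exports"` maps and CommonJS/ESM conditions; `pick_entry_point` is an illustrative name):

```python
# Toy model of legacy package.json entry-point selection: for an ES-module
# consumer, prefer "module" over "main" when both are present.
import json

def pick_entry_point(package_json_text, prefer_esm=True):
    pkg = json.loads(package_json_text)
    if prefer_esm and "module" in pkg:
        return pkg["module"]
    # Fall back to "main", then to the historical "index.js" default.
    return pkg.get("main", "index.js")
```

Under this model, the `graphql-tag` manifest above would resolve to `./lib/index.js`, which does provide the `gql` named export.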
| bug,node compat,node resolution | low | Critical |
2,497,598,544 | rust | Suggest wrapping code into `fn main() {}` when encountering `let` within `main.rs` | ### Code
```Rust
let hello = "xyz";
```
### Current output
```Shell
error: expected item, found keyword `let`
--> src/main.rs:1:1
|
1 | let hello = "xyz";
| ^^^ consider using `const` or `static` instead of `let` for global variables
error: could not compile `thing` (bin "thing") due to 1 previous error
```
### Desired output
```Shell
error: expected item, found keyword `let`
--> src/main.rs:1:1
|
1 | let hello = "xyz";
| ^^^^^^^^^^^^^ consider wrapping this code within `fn main() {}`
error: could not compile `thing` (bin "thing") due to 1 previous error
```
### Rationale and extra context
Sometimes users copy code from a crate's readme or doctests and expect it to run as is. In some cases like `reqwest` [^1] or `sqlx` [^2] it does, but in others like ours [^3] it doesn't.
In our case we are thinking of adding `fn main() {}` to the readme, but this doesn't solve the case where the doctests don't have it, which, thanks to how rustdoc works [^4], they don't in most cases.
If the file is `main.rs`, `fn main() {}` hasn't been declared yet and `let` is encountered, the compiler should be able to suggest wrapping the code into `fn main() {}`.
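That heuristic could be modeled roughly as follows (a toy sketch in Python; rustc's real parser works on token streams rather than lines, and `suggest_wrap` is a hypothetical name):

```python
# Hypothetical diagnostic heuristic: in main.rs, a top-level `let` with no
# `fn main` anywhere in the file earns a "wrap it in fn main() {}" hint.
def suggest_wrap(filename, source):
    if filename != "main.rs":
        return None
    lines = source.splitlines()
    has_main = any(l.lstrip().startswith("fn main") for l in lines)
    # Approximate "top level" as "no leading indentation".
    has_top_level_let = any(l.startswith("let ") for l in lines)
    if has_top_level_let and not has_main:
        return "consider wrapping this code within `fn main() {}`"
    return None
```

The same check could plausibly extend to other statement keywords (`for`, `if`, `match`) that are invalid at item position.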
[^1]: https://github.com/seanmonstar/reqwest/tree/v0.12.7?tab=readme-ov-file#example
[^2]: https://github.com/launchbadge/sqlx/tree/v0.8.1?tab=readme-ov-file#quickstart
[^3]: https://github.com/lettre/lettre/issues/983
[^4]: https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html#pre-processing-examples
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.82.0-nightly (1f12b9b0f 2024-08-27)
binary: rustc
commit-hash: 1f12b9b0fdbe735968ac002792a720f0ba4faca6
commit-date: 2024-08-27
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
### Anything else?
_No response_
| A-diagnostics,T-compiler | low | Critical |
2,497,622,638 | ant-design | DatePicker.RangePicker: in a specific scenario, selecting the end date does not trigger the onChange event | ### Reproduction link
[](https://stackblitz.com/edit/antd-reproduce-5x-a66ggu?file=demo.tsx)
```tsx
import React, { useState } from 'react';
import { DatePicker } from 'antd';
import dayjs from 'dayjs';

const App: React.FC = () => {
  const [date, setDate] = useState([dayjs('2024-08-20'), dayjs('2024-09-30')]);
  return (
    <DatePicker.RangePicker
      disabled={[true, false]}
      allowClear={false}
      placeholder={['Start date', 'End date']}
      disabledDate={(current) => {
        return current < dayjs().startOf('day');
      }}
      value={date}
      onChange={setDate}
    />
  );
};

export default App;
```
### Steps to reproduce
Select the end date to reproduce.
### What is expected?
After selecting the end date, the onChange event fires normally.
### What is actually happening?
After selecting the end date, the onChange event is not triggered.
| Environment | Info |
| --- | --- |
| antd | 5.20.3 |
| React | 18.3.3 |
| System | mac |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,Inactive | low | Major |
2,497,629,472 | go | x/tools/gopls: feature: repair the syntax | When editing, it is common for the code to be ill-formed, because for example you have opened a block with `{` but not yet closed it, or pasted some code into the middle of a function and not yet integrated it. A parser with good error recovery (i.e. better than go/parser, at least [for now](https://github.com/golang/go/issues/58833)) can often produce an AST with minimal lossage, implicitly inserting the missing close braces and suchlike. In principle, the difference between the pretty-printed repaired AST and the actual source could be a offered as a completion candidate or a quick fix, "repair the syntax".
Even with our not-very-fault-tolerant parser, I suspect we could do a good job with modest effort by a pre-scan of the input file from both ends that matches each paren with its partner, exploiting indentation (column numbers). This would quickly localize the region of damage to a particular function, or even a block, statement, or expression within it. We would then parse and pretty-print just the errant subtree, causing missing parens (etc) to be inserted; Bad{Expr,Stmt,Decl}s would be replaced by `/* ... */` comments. The result would be a well-formed tree that would allow the user to save, gofmt, and perhaps build and run tests. It may also re-enable use of cross-references and other features that are crippled in the vicinity of the syntax error. (And given the parser's current lack of fault tolerance, the "vicinity" may be more accurately described as a "blast radius".)
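The bracket-matching pre-scan can be illustrated with a toy sketch (Python for brevity; gopls is Go, and a real implementation would also skip string literals and comments and exploit the column/indentation hints mentioned above):

```python
# Match each bracket with its partner in a single pass; whatever is left on
# the stack (or arrives without a partner) localizes the damaged region.
def unbalanced_spans(src):
    openers = {')': '(', ']': '[', '}': '{'}
    stack = []          # (char, line_number) of still-open brackets
    extra_closers = []  # closers that never had a matching opener
    for lineno, line in enumerate(src.splitlines(), start=1):
        for ch in line:
            if ch in '([{':
                stack.append((ch, lineno))
            elif ch in openers:
                if stack and stack[-1][0] == openers[ch]:
                    stack.pop()
                else:
                    extra_closers.append((ch, lineno))
    return stack, extra_closers
```

The leftover entries point at where an implicit `}` could be inserted (or a `/* ... */` placeholder substituted) to repair the tree.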
(The inspiration was a conversation about LLMs with @josharian.) | FeatureRequest,gopls,Tools | low | Critical |
2,497,669,200 | stable-diffusion-webui | [Bug]: issue with tokenizers | ### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [x] The issue has been reported before but has not been fixed yet
### What happened?
The web UI can't start because the `tokenizers` package fails to build.
### Steps to reproduce the problem
I don't know; the error just popped up.
### What should have happened?
It should run as usual, but now it fails to start.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Can't collect it; the web UI won't open. See the console log below.
### Console logs
```Shell
Traceback (most recent call last):
  File "/workspace/file/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/workspace/file/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/workspace/file/stable-diffusion-webui/modules/launch_utils.py", line 423, in prepare_environment
    run_pip(f"install -r \"{requirements_file}\"", "requirements")
  File "/workspace/file/stable-diffusion-webui/modules/launch_utils.py", line 144, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/file/stable-diffusion-webui/modules/launch_utils.py", line 116, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements.
Command: "/home/gitpod/.pyenv/versions/3.12.4/bin/python" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
```
### Additional information
_No response_ | bug-report | medium | Critical |
2,497,700,301 | flutter | [native_assets] Stop doing dry run in `flutter run` | We introduced `dry-run`s to build hooks due to `flutter run` running concurrently with the inner `flutter assemble`.
Now that we have made the `native_assets_builder` wait on concurrent invocations, we should be able to do full builds in `flutter run`, which will then be picked up by `flutter assemble` also invoking the `native_assets_builder`.
This will simplify the user experience, as there is no longer any reasoning about dry runs or not.
For more info:
* https://github.com/dart-lang/native/issues/1485
| P2,team-tool,triaged-tool | low | Major |
2,497,755,055 | react-native | [RN][Android] Fix <Image/> Logbox feedback loop | ### Description
When src is null, the `<Image/>` component [displays a warning in `LogBox`](https://github.com/facebook/react-native/blob/main/packages/react-native/ReactAndroid/src/main/java/com/facebook/react/views/image/ReactImageView.kt#L670-L682).
The feedback loop:
* Some component renders an `<Image/>`
* Fabric preallocates the image, setting `null` src.
* `ReactImageView` receives `null` src, emits warning.
* Warning [dispatches native -> js call](https://github.com/facebook/react-native/blob/main/packages/react-native/ReactAndroid/src/main/java/com/facebook/react/util/RNLog.kt#L94), rendering `LogBox`, which uses `<Image/>`
* Fabric preallocates the image, setting `null` src.
* Goto 3
This feedback loop was originally deactivated in bridgeless.
**Why:** In step 4, [this line](https://github.com/facebook/react-native/blob/main/packages/react-native/ReactAndroid/src/main/java/com/facebook/react/util/RNLog.kt#L94) would never execute: [context.hasActiveReactInstance()](https://github.com/facebook/react-native/blob/main/packages/react-native/ReactAndroid/src/main/java/com/facebook/react/util/RNLog.kt#L93) would always return false. (`ThemedReactContext` was broken but we fixed it).
We should fix this feedback loop, and re-enable this warning in bridgeless.
Hint at a potential fix:
* Prevent view preallocation from assigning a null src to preallocated views
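Beyond the preallocation fix, one generic way to break warn -> render -> warn cycles is a re-entrancy guard around warning dispatch. A hedged, framework-agnostic sketch in Python (the real fix would live in the Kotlin/C++ layers; all names here are illustrative):

```python
# Drop warnings that are raised while a warning is already being rendered,
# so warn -> render LogBox -> warn can never recurse.
_in_warning = False

def warn_guarded(emit, message):
    global _in_warning
    if _in_warning:
        return False  # suppressed: already inside a warning dispatch
    _in_warning = True
    try:
        emit(message)
        return True
    finally:
        _in_warning = False
```

A guard like this trades the infinite loop for a single dropped nested warning, which is usually acceptable for diagnostics.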
### Steps to reproduce
N/A
### React Native Version
0.75.2
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
N/A
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://github.com/facebook/react-native/packages/rn-tester
### Screenshots and Videos
_No response_ | Platform: Android,Component: Image,Needs: Repro,Needs: Attention,Type: New Architecture | low | Critical |
2,497,824,015 | godot | Error [ Condition "p_elem->_root" is true. ] spamming the console | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Mac Mini M1 with newest MacOS
### Issue description
I have a very simple script attached to a TileMapLayer node. As soon as it gets triggered, it spams the console with
`Condition "p_elem->_root" is true.`
errors. The weird part is that if I add (as a built-in) or attach (as a GDScript file) the same script to another TileMapLayer node, it works fine. The project is so simple that I can't see what could be causing it, and I was unable to isolate it.
### Steps to reproduce
This is the script that's causing the errors:
```
extends TileMapLayer
# metadata container
#@export var data : Node
# metadata array names that hold the relevant data
#@export var slide_tiles : String # LayerTransformPlayer
#@export var slide_phase : String # LayerTransformPlayerTimer
#const Tile_Size = 100
@warning_ignore("unused_parameter")
func _use_tile_data_runtime_update( C : Vector2i ): return true # Coordinates, Vector2i
@warning_ignore("unused_parameter")
func _process ( delta ) : notify_runtime_tile_data_update()
@warning_ignore("unused_parameter")
func _tile_data_runtime_update ( C: Vector2i, TD: TileData):
pass
```
As can be seen, it does nothing (I commented out and deleted everything). From my testing it looks like just overriding `_tile_data_runtime_update` triggers the errors.
### Minimal reproduction project (MRP)
Unfortunately I can't retrigger it outside of my own project (which is a super simple test project with barely a few nodes in it). The problem is that the script works fine on other TileLayer nodes. There's nothing special about the Node that can't handle this script. I'm not using any terrains, physics, pathfinding, nothing. It's just a super simple TileLayer. | needs testing,topic:2d | medium | Critical |
2,497,824,616 | PowerToys | tray icons for all powertoys apps | ### Description of the new feature / enhancement
The Color Picker and other utilities are very useful, and having PowerToys in the tray helps, but I wish each app's own icon could be put in the system tray.

### Scenario when this would be used?
To access the apps quickly.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,497,874,467 | godot | Inspector folding not working properly. | ### Tested versions
v4.4.dev1.official [28a72fa43]
### System information
w10 64
### Issue description
Watch the video: the fold indicators take on states without me clicking on them; then, to correct it, I have to make several extra clicks to get an indicator back to its correct state.
https://github.com/user-attachments/assets/9e1847a8-a728-48ff-9fe2-317cc3158ce7
### Steps to reproduce
see the video
### Minimal reproduction project (MRP)
... | bug,topic:editor,confirmed | low | Major |
2,497,887,645 | vscode | Zoom: the left "commonly used" section tree view disappears when viewing the page at 200% browser zoom and a 1280-pixel viewport width |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.92.2
- OS Version: 14.6.1 (23G93)
Steps to Reproduce:
1. In VS Code, the left "commonly used" section tree view disappears when viewing the page at 200% browser zoom and a 1280-pixel viewport width.
2. Open Settings to view the User | Workspace tree view. Zoom in the browser and the tree view disappears. Users with low vision who need to resize text are prevented from accessing this tree view content.
<img width="2559" alt="vscode zoom defect" src="https://github.com/user-attachments/assets/cbf89c75-c5f7-4757-a537-1dd28a049a02">
| bug,accessibility,settings-editor | low | Critical |
2,497,890,130 | go | cmd/compile: deadstores and nilchecks solvable after memory to ssa renames are not handled | ### Go version
go version devel go1.24-7fcd4a7 Mon Aug 19 23:36:23 2024 +0000 linux/amd64
### What did you do?
I was reading https://www.dolthub.com/blog/2024-08-23-the-4-chan-go-programmer and there was a piece of code that caught my attention (keep in mind that it is a contrived example, but it could reveal a bigger problem). So I decided to check the generated assembly (https://go.godbolt.org/z/oxqbPnPxn).
```go
package test
func f() {
i := 1
setInt1(&i)
}
func setInt1(i *int) {
setInt2(&i)
}
func setInt2(i **int) {
setInt3(&i)
}
func setInt3(i ***int) {
setInt4(&i)
}
func setInt4(i ****int) {
****i = 100
}
```
### What did you see happen?
In some of the functions there were redundant instructions (marked with `; REDUNDANT`).
`setInt4` is compiled to:
```asm
MOVQ (AX), AX
MOVQ (AX), AX
MOVQ (AX), AX
MOVQ $100, (AX)
RET
```
`setInt3` is compiled to:
```asm
MOVQ AX, command-line-arguments.i+8(SP) ; REDUNDANT
MOVQ (AX), AX
MOVQ (AX), AX
XCHGL AX, AX
MOVQ $100, (AX)
RET
```
`setInt2` is compiled to:
```asm
MOVQ AX, command-line-arguments.i+8(SP) ; REDUNDANT
LEAQ command-line-arguments.i+8(SP), AX ; REDUNDANT
MOVQ (AX), AX ; REDUNDANT
MOVQ (AX), AX
XCHGL AX, AX
XCHGL AX, AX
MOVQ $100, (AX)
RET
```
`setInt1` is compiled to:
```asm
(...stack handling...)
MOVQ AX, command-line-arguments.i+24(SP) ; REDUNDANT
LEAQ command-line-arguments.i+24(SP), AX ; REDUNDANT
MOVQ AX, command-line-arguments.i(SP) ; REDUNDANT
LEAQ command-line-arguments.i(SP), AX ; REDUNDANT
MOVQ (AX), AX ; REDUNDANT
MOVQ (AX), AX ; REDUNDANT
NOP
XCHGL AX, AX
XCHGL AX, AX
MOVQ $100, (AX)
(...stack handling...)
RET
```
### What did you expect to see?
I expected to see no redundant instructions. | NeedsInvestigation,compiler/runtime | low | Major |
2,497,904,670 | ollama | TensorRT Support | Does ollama leverage TensorRT and if not, can support for it be added? | feature request | low | Minor |
2,497,908,545 | vscode | Content that appears on hover or focus may be dismissed by the user | Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.92.2
- OS Version: 14.6.1 (23G93)
Steps to Reproduce:
1. The tooltip "The setting is not specific to the current profile, and will retain its value when switching profiles" appears on hover, obscures other content, and cannot be dismissed without moving focus or the cursor.
2. Screen magnification users will have difficulty moving the cursor or focus off the trigger while still keeping the content behind the revealed content in view. Keyboard users will be unable to review the content behind the revealed content.
3. The best fix is to allow the hover/focus content to be dismissed with the Esc key.
<img width="1389" alt="Hover on the content" src="https://github.com/user-attachments/assets/2259eacb-ea72-4b76-8993-9fd0b03cf70e">
| bug,accessibility,settings-editor,confirmed | low | Critical |
2,497,911,538 | tensorflow | tf.math.floordiv produces incorrect result when the denominator is `-inf` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Based on the documentation https://www.tensorflow.org/api_docs/python/tf/math/floordiv, `tf.math.floordiv` should be equivalent to Python's `//` operator. However, when `x=1.4` and `y=-np.inf`, `tf.math.floordiv` outputs `-0.0` while `//` outputs `-1.0`.
I also checked Numpy and PyTorch's APIs, both output `-1.0`.
It seems that the implementation of `tf.math.floordiv` differs from the others; it would be nice if you could fix the implementation inconsistency, or note the inconsistency in the documentation.
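For reference, Python's `//` result can be reproduced with a simplified sketch of the algorithm CPython uses for float floor division: take `fmod`, then correct the raw quotient whenever the remainder and the divisor have opposite signs. This is a hedged approximation for illustration, not CPython's exact code:

```python
import math

def floordiv(x: float, y: float) -> float:
    # Simplified sketch of CPython-style float floor division:
    # take fmod, then correct the quotient when the remainder
    # and the divisor have opposite signs.
    mod = math.fmod(x, y)          # fmod(1.4, -inf) == 1.4
    div = (x - mod) / y            # 0.0 / -inf == -0.0
    if mod != 0.0 and (y < 0.0) != (mod < 0.0):
        div -= 1.0                 # -0.0 - 1.0 == -1.0
    return float(math.floor(div)) if div != 0.0 else div

print(floordiv(1.4, float("-inf")))  # -1.0, matching Python's //
```

The sign-mismatch branch is exactly what makes `1.4 // -inf` round toward negative infinity instead of returning `-0.0`.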
### Standalone code to reproduce the issue
```python
import tensorflow as tf
import torch
import numpy as np
a = 1.4
b = -np.inf
print("Numpy's result: ", np.floor_divide(a, b))
print("Python's // result: ", a // b)
print(f"TF's result: {tf.math.floordiv(a, b)}")
print(f"PyTorch's result: {torch.floor_divide(torch.tensor(a, dtype=torch.float32), torch.tensor(b, dtype=torch.float32))}")
```
### Relevant log output
```shell
Numpy's result: -1.0
Python's // result: -1.0
TF's result: -0.0
PyTorch's result: -1.0
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,497,947,804 | TypeScript | Design Meeting Notes, 8/30/2024 | # Adding Compile-Caching Entry Points
#59720
* Node.js now supports V8 compilation caching.
* Recent PR to Node.js to add `module.enableCompileCache()`.
* Previously were ways to do this with 3rd party packages on npm.
* Node.js doesn't do this by default, but things like ts-node do on supported versions of Node.js.
* What's the downside?
* You need to have `require` to make this work, awkward for ESM.
* This is being worked on.
* tsc and tsserver start up about 2.3x faster.
* For executables this is a no-brainer. We can actually use `import()` there too.
* For libraries, questionable. So the PR doesn't do it.
* The PR must introduce a new entry point for the library. Is this worth the perf improvement?
* Yes.
* Just add a comment telling people what's going on and it's fine.
* Another risk - in development we recompile over and over, and that can pollute the cache.
* The files are pretty small - not a huge risk.
* We can add `hereby clear-node-cache`.
* Not going to be immediate - Node-specific work would need to be back-ported into Node.js 22; otherwise only noticeable in Node 23+. | Design Notes | low | Minor |
2,497,973,321 | pytorch | Fakifying subclass tensors that don't have certain metadata | ### 🐛 Describe the bug
`RuntimeError: Internal error: NestedTensorImpl doesn't support sizes. Please file an issue.` came up when trying to save tensor metadata while compiling saved tensor hooks: https://github.com/pytorch/pytorch/pull/134754/files#r1737877026.
To correctly trace the unpack hook and the downstream uses of the unpacked tensor, we need the correct metadata for the returned Proxy. But not all subclasses seem to have these metadata, e.g. NestedTensor.
Is there a proper FakeTensor representation for a NestedTensor? Would an alternative be to move compiled autograd tracing to pre-dispatch? cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @ezyang @albanD @chauhang @penguinwu @bdhirsh
### Versions
main | triaged,module: nestedtensor,tensor subclass,oncall: pt2,module: compiled autograd | low | Critical |
2,498,010,163 | react-native | Can't fetch the source map while using the experimental debugger | ### Description
I'm trying to use the new experimental debugger in our RN project, but in the DevTool window it says this:
> Failed to fetch source map http://10.39.25.45:8081/index.map?platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&app=com.teslamotors.enterpriseapp: remote fetches not permitted
and can't fetch the source map, thus no source code is shown.
Here are some interesting things I've found:
- I can access this URL in Safari and in CLI using `wget` and `curl`, but I got this when I open it in Chrome:
<img width="1907" alt="image" src="https://github.com/user-attachments/assets/1e34d550-0b35-4d24-9f78-444326620db5">
- I noticed this using Chrome's dev tools:
https://github.com/user-attachments/assets/54f9a1d7-cc13-4b0f-a2ce-30dd38225ce6
The "content download" always gets killed after around 3.5s, and then it becomes "Aw Snap!"
- I tried to use the experimental debugger on the Expo tutorial's starter app, which works fine (I think it's using the experimental debugger, instead of the old one?)
- Our source map has ~200MB size
My assumption is that because our source map is so big, Chrome isn't able to download it within a time limit (so the download is always canceled after 3.5s), but I have no idea how to verify that. Not sure if this is a Chrome bug or an RN bug, but I'll post it here anyway.
### Steps to reproduce
Just launch the experimental debugger on some huge app with ~200MB source map, I guess?
### React Native Version
0.73.6
### Affected Platforms
Build - MacOS
### Output of `npx react-native info`
```text
npx react-native info
WARNING: You should run npx react-native@latest to ensure you're always using the most current version of the CLI. NPX has cached version (0.73.6) != current release (0.75.2)
info Fetching system and libraries information...
System:
OS: macOS 14.5
CPU: (12) arm64 Apple M3 Pro
Memory: 807.53 MB / 36.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.12.0
path: /usr/local/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 8.19.2
path: ~/.nvm/versions/node/v18.12.0/bin/npm
Watchman:
version: 2024.07.15.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/kainzhong/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.2
- iOS 17.2
- macOS 14.2
- tvOS 17.2
- watchOS 10.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.15989.150.2411.11948838
Xcode:
version: 15.1/15C65
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.7.5
path: /Users/kainzhong/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.73.6
wanted: 0.73.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
info React Native v0.75.2 is now available (your project is running on v0.73.6).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.75.2
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.75.2
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
> Failed to fetch source map http://10.39.25.45:8081/index.map?platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&app=com.teslamotors.enterpriseapp: remote fetches not permitted
```
### Reproducer
I really don't know how to reproduce this on smaller projects
### Screenshots and Videos
Included some in the description. | ๐Networking,Debugger,Newer Patch Available,Resolution: Answered | low | Critical |
2,498,069,738 | rust | Point at unit structs in type errors when they are the pattern of a let binding | We currently already do this when the unit struct is from the current crate:
```
error[E0308]: mismatched types
--> fi.rs:8:9
|
1 | struct percentage;
| ----------------- unit struct defined here
...
8 | let percentage = 4i32;
| ^^^^^^^^^^ ---- this expression has type `i32`
| |
| expected `i32`, found `percentage`
| `percentage` is interpreted as a unit struct, not a new binding
| help: introduce a new binding instead: `other_percentage`
```
but we should do that when they come from other crates as well.
_https://hachyderm.io/@LGUG2Z/113052067582966834_ | A-diagnostics,T-compiler,D-confusing,D-papercut,D-terse | low | Critical |
2,498,091,104 | flutter | Ability to change the name of the APK/IPA when building | ### Use case
Currently on Android and iOS the default name of the final artifacts is something like:
```
app-release.aab
app-release.apk
Runner.app
MyApp.ipa
```
It's generic, and from the name alone I can't tell which version it is or where it came from. This is especially true for "whitelabel" projects, which can generate many artifacts for different clients or modules.
### Proposal
Both Gradle and the Info.plist contain ways to change the name of the final artifact.
I haven't used the one for iOS, but on Android I've used a configuration that changes the name to include a variant and version, so the final package is generated with a name that makes clear which version it is. Like:
```
MyCustomApp-v1.3.2.apk
MyProduct_ClientName_v3.2.1.apk
```
For Android it would be something like the below:
```
android {
tasks.whenTaskAdded {
val newFileName = "${baseName}_${apkVersionName()}"
doLast {
val apkFile = outputs.files.files.last().listFiles().find { it.extension == "apk" }
?: throw RuntimeException("APK file not found")
apkFile.renameTo(File(apkFile.parent, "$newFileName.apk"))
}
}
}
```
For iOS, I've read that these are the variables in the Info.plist to change in order to change the name of the final artifact:
```
<key>CFBundleDisplayName</key>
<string>MyApp</string>
<key>CFBundleName</key>
<string>MyApp</string>
``` | c: new feature,platform-android,platform-ios,tool,c: proposal,P3,team-tool,triaged-tool | low | Minor |
2,498,144,525 | flutter | tech-debt: move `readJsonResults` to utils | This method is used by more than just microbenchmarks. It should be moved.
https://github.com/flutter/flutter/blob/8c1a93508b4ffa813cea0648f0dc85ca5b458de9/dev/devicelab/lib/microbenchmarks.dart#L12 | team-infra,P3,c: tech-debt,triaged-infra | low | Minor |
2,498,164,356 | next.js | Error: headers was called outside a request scope, when using Authjs v5.0.0-beta.19 and next dev --turbo | ### Link to the code that reproduces this issue
https://github.com/cogb-jclaney/authjs-issue
### To Reproduce
Hi, I'm bringing this issue https://github.com/nextauthjs/next-auth/issues/11076 here, as it's related to Turbopack.
<img width="972" alt="image" src="https://github.com/user-attachments/assets/a3d1123d-1640-4266-8f2a-ed55632069b0">
### Current vs. Expected behavior
The expected behavior is for it to work, as it does when the --turbo flag is not used.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:21 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.0.0
npm: 10.5.1
Yarn: 3.6.3
pnpm: 9.8.0
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: 14.2.7
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Turbopack | low | Critical |
2,498,168,607 | transformers | prune_heads() method for AutoModelForCausalLM | ### Feature request
We have a `prune_heads()` method for the `AutoModel` class, but not for `AutoModelForCausalLM`. Please provide a `prune_heads()` method for the `AutoModelForCausalLM` class.
### Motivation
The mechanistic interpretability study from the paper [Are Sixteen Heads Really Better than One?](https://arxiv.org/abs/1905.10650) explores how pruning attention heads affects the model's (BERT) performance. I want to do something similar for LLMs. Since LLMs are supported via the `AutoModelForCausalLM` class, I am unable to do this experiment.
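A `prune_heads()` implementation mostly amounts to index bookkeeping over the fused attention projection matrices. A hypothetical, framework-free sketch of that arithmetic (the function name and signature are illustrative, not the transformers API):

```python
def surviving_rows(num_heads: int, head_size: int, heads_to_prune: set) -> list:
    """Indices of a (num_heads * head_size)-wide projection that
    remain after the given heads are removed."""
    keep = []
    for head in range(num_heads):
        if head in heads_to_prune:
            continue
        start = head * head_size
        keep.extend(range(start, start + head_size))
    return keep

# 4 heads of size 2: pruning head 1 drops rows 2 and 3
print(surviving_rows(4, 2, {1}))  # [0, 1, 4, 5, 6, 7]
```

An index list like this would then be used to slice the Q/K/V and output projection weights, which is roughly what the existing encoder-model `prune_heads()` does.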
### Your contribution
Just getting started with HuggingFace. Not confident about implementing this feature on my own as of now. | Feature request | low | Major |
2,498,198,969 | deno | Environment variable for npm auto install | We should add an environment variable (e.g. `DENO_NPM_AUTO_INSTALL=1`) that someone can set to get the Deno 1.x auto-install behaviour, for those who prefer it. | feat | low | Minor |
2,498,212,690 | flutter | CI builds for windows desktop can fail if Visual Studio installation is incomplete | For example: https://ci.chromium.org/ui/p/flutter/builders/try/Windows%20build_tests_1_7/16262/infra
Note:
1. In step 10, we check if Visual Studio is already installed (https://flutter.googlesource.com/recipes/+/663697664d556d1e8725ac6ab2d53dd972b5145d/recipe_modules/os_utils/api.py#837). We are only looking for the version.
2. However, the actual output of vswhere.exe has the field `"isComplete": false` https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8738150602716059217/+/u/VSBuild/Detect_installation/json.output
3. The tool parses this `isComplete` field and will fail to build a windows app if it is not complete: https://github.com/flutter/flutter/blob/7aace1c5f97f5b433b08468f9712e8c5c29e8e0d/packages/flutter_tools/lib/src/windows/visual_studio.dart#L545 | team-infra,P2,triaged-infra | low | Minor |
2,498,223,288 | pytorch | SyntaxError when Running repro.py | ### 🐛 Describe the bug
(Might be a duplicate of #128830)
When running a repro.py generated with `TORCHDYNAMO_REPRO_AFTER="aot" TORCHDYNAMO_REPRO_LEVEL=4` on nightly (`2.5.0.dev20240830+cu124`), I run into the following error:
```
self.fw_graph = <lambda>()
^
SyntaxError: invalid syntax
```
Quite odd! If I had to guess, it's because the instability is in the backward pass. It could also be that it is interacting poorly with Flex Attention.
```
class Repro(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.fw_graph = <lambda>()
self.joint_graph = <lambda>()
self.mask_graph = <lambda>()
```
### Versions
2.5.0.dev20240830+cu124
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo,module: minifier | low | Critical |
2,498,260,519 | rust | Suboptimal Assembly Output from a Modulo on a Power of Two | I tried this code:
```rust
#[no_mangle]
pub fn index(key: usize, buckets: usize) -> usize {
assert!(buckets.is_power_of_two());
key % buckets
}
```
Since `assert!(buckets.is_power_of_two())` is run (and `.is_power_of_two()` is a built-in function), the subsequent modulo operation should reduce to `key & (buckets - 1)`.
Instead, the generated assembly [here](https://godbolt.org/z/r8564sEfE) shows that this reduction does not take place.
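The expected reduction rests on the identity `n % d == n & (d - 1)` for non-negative `n` whenever `d` is a power of two. A quick sanity check of that identity (written in Python for convenience; the optimizer would apply the same rewrite to the unsigned integers here):

```python
def is_power_of_two(n: int) -> bool:
    # Power of two: exactly one bit set.
    return n > 0 and (n & (n - 1)) == 0

# Verify modulo == mask across a spread of keys and bucket counts.
for buckets in (1, 2, 8, 1 << 20, 1 << 63):
    assert is_power_of_two(buckets)
    for key in (0, 1, 5, 123_456_789, 2**64 - 1):
        assert key % buckets == key & (buckets - 1)
print("modulo/mask identity holds")
```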
### Meta
The compiler in the GodBolt link is version 1.80.0. | A-LLVM,I-slow,T-compiler,llvm-fixed-upstream,C-optimization | low | Critical |
2,498,330,726 | next.js | Import .ts files in Instrumentation | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
In my tsconfig.json I can set:
- `allowImportingTsExtensions: true`
This lets me use:
```javascript
const path = 'test.ts'
await import(path)
```
However, when I will run the same code inside of the Instrumentation I get:
- TypeError [ERR_UNKNOWN_FILE_EXTENSION]: An error occurred while loading instrumentation hook: Unknown file extension ".ts"
Is instrumentation not respecting the tsconfig.json?
### Expected Behavior
Instrumentation works with dynamic imports of .ts files
| examples | low | Critical |
2,498,428,115 | godot | Export MeshLibrary - Apply MeshInstance Transform does not work for GLTF-based Node3D | ### Tested versions
- Reproducible in 4.3.stable, 4.2.2
### System information
Godot v4.3.stable - macOS 14.6.1 - Vulkan (Forward+) - integrated Apple M3 Max - Apple M3 Max (16 Threads)
### Issue description
When using Scene -> Export As -> MeshLibrary with `Apply MeshInstance Transforms` enabled, the transforms of Node3Ds that are imported from a GLTF are not added to the MeshLibrary.
Due to the explicit phrasing of "`MeshInstance` Transforms" I believe this is expected behaviour, but it's unexpected behavior from a user perspective. Tilesets frequently come in GLTF format, and it is very confusing to enable "Apply [...] Transform", but not have it applied to the exported meshes.
I can understand if this is an immediate close bug, but I was unable to find any documentation of this limitation. Developers should be aware that if they PoC with MeshInstances, switching to GLTF assets later will not work if any transform is needed.
The only workaround I've found is to manually update the transform on the MeshLibrary item in the editor.
### Steps to reproduce
* Add GLTF imported scene to a new scene and translate it.
* Export with "Apply MeshInstance Transforms"
* Edit the MeshLibrary and examine the offset fields for the Mesh Transform
Demo:
(Added grass tile from the KayKit Medieval Hexagon Pack, and translated it up 1.0)
<img width="1323" alt="Screenshot 2024-08-30 at 6 06 10โฏPM" src="https://github.com/user-attachments/assets/082036cb-d3cd-459b-95b2-4c215411ea9d">
(Export with "Apply MeshInstance Transforms" enabled)
<img width="1279" alt="Screenshot 2024-08-30 at 6 07 01โฏPM" src="https://github.com/user-attachments/assets/7d3f0eae-8596-4adb-a05b-7f89bfd79ccf">
(Editing the Mesh in the MeshLibrary, the Mesh Transform has a y offset = 0; it should be 1.0)
<img width="1323" alt="Screenshot 2024-08-30 at 6 07 36โฏPM" src="https://github.com/user-attachments/assets/f41f3a68-07c9-4588-a7be-1231811efda2">
### Minimal reproduction project (MRP)
[mesh-library-export-apply-transform.zip](https://github.com/user-attachments/files/16822709/mesh-library-export-apply-transform.zip)
| bug,topic:editor,topic:import,topic:3d | low | Critical |
2,498,433,949 | vscode | Support mouse wheelScrollSensitivity in notebooks | Type: <b>Bug</b>
As of #117803, we are supposed to set the `workbench.list.mouseWheelScrollSensitivity` property to adjust the scrolling speed for Jupyter notebooks.
The problem here is that Jupyter notebooks are not the only thing using this property: it is also used in the Explorer and various other trees and widgets throughout the UI. This would not be a problem except that the Jupyter notebook plugin seems to interpret the value of this setting as much "slower" than the other widgets. For whatever reason, I need to set `workbench.list.mouseWheelScrollSensitivity = 2` to get Jupyter notebooks to scroll at a reasonable amount on my system. However, doing so makes scrolling in the file tree in the Explorer so fast that it's totally unusable.
I am not sure why this is. As some background, FWIW, on my system (M1 macOS Sonoma) I needed to set all of the scroll wheel sensitivities lower than 1 to get things to look normal in VS Code. So I have the editor and workbench sensitivity set to around 0.3-0.5. This makes everything else scroll nicely, but causes Jupyter notebooks to scroll extremely slowly. Increasing `workbench.list.mouseWheelScrollSensitivity` throws a bunch of other stuff off.
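To make the conflict concrete, a settings fragment like the following (the values are the reporter's examples, not recommendations) cannot satisfy both the notebook and the Explorer at once:

```json
{
  "editor.mouseWheelScrollSensitivity": 0.4,
  "workbench.list.mouseWheelScrollSensitivity": 2
}
```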
Jupyter is almost unusable like this and it really needs its own mouseWheelScrollSensitivity property. Is there any workaround at all?
Extension version: 2024.7.0
VS Code version: Code 1.92.2 (Universal) (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Darwin arm64 23.5.0
Modes: Unsupported
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|5, 4, 5|
|Memory (System)|64.00GB (11.80GB free)|
|Process Argv|--crash-reporter-id cbedb2a0-9263-48d4-b145-0df5b8cb9512|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter --> | feature-request,notebook | low | Critical |
2,498,437,287 | go | cmd/go: go build errors that don't correspond to source file until build cache is cleared | ### Go version
go version go1.23.0 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/root/.cache/go-build'
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/root/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/root/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/root/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='0'
GOMOD='/src/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2731747166=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
When attempting to build a valid go program, we are getting errors about imports on certain lines of source code that don't match the actual contents of the source file they are reported on. The errors go away and the program builds successfully once the go build cache at `/root/.cache/go-build` is deleted and nothing else changes.
The repro is quite complex and takes a long time to trigger, so the best I could do for now was create a docker image with the go toolchain, source code and go build cache in place that reproduces the problem. Including commands for reproducing with that image below.
For more context, we are hitting this in [Dagger](https://github.com/dagger/dagger), which is a container-based DAG execution engine that, among other things, does a lot of containerized building of Go code.
We specifically see this problem arise during integration tests, which will run, over the course of ~20min, many (probably 100+) `go build` executions in separate containers. The most relevant details I can think of:
1. All containers are using the same go toolchain version (1.23.0 currently) and the same base image
2. All containers have a shared bind mount for the go build cache (always mounted at `/root/.cache/go-build`) and the go mod cache (always mounted at `/go/pkg/mod`)
3. Source code is always mounted at `/src` and built with the command `go build -o /runtime .` from within that `/src` directory
* A lot of the source code will end up with similar and sometimes identical subpackages under `/src/internal`. They may also have the same go mod name at times.
4. Builds can happen in parallel and in serial across the integration test suite
5. The integration tests are quite heavy in terms of CPU usage and disk read/write bandwidth, the hosts are often under quite a bit of load
6. We don't do any manual fiddling around with the go build cache; we just run commands like `go build`, `go mod tidy`, etc. in containers
### What did you see happen?
As mentioned above, the best I could do for now was capture the state of one of the containers hitting this error in a docker image. I pushed the image to dockerhub at `eriksipsma/corrupt-cache:latest`. It's a `linux/amd64` only image unfortunately since that's what our CI is, which is the only place I can get this to happen consistently.
Trigger the go build error:
```
$ docker run --rm -it eriksipsma/corrupt-cache:latest sh -c '/usr/local/go/bin/go build -C /src .'
go: downloading go.opentelemetry.io/otel v1.27.0
go: downloading go.opentelemetry.io/otel/sdk v1.27.0
go: downloading go.opentelemetry.io/otel/trace v1.27.0
go: downloading github.com/99designs/gqlgen v0.17.49
go: downloading golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa
go: downloading github.com/Khan/genqlient v0.7.0
go: downloading golang.org/x/sync v0.7.0
go: downloading github.com/vektah/gqlparser/v2 v2.5.16
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.0.0-20240518090000-14441aefdf88
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.3.0
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0
go: downloading go.opentelemetry.io/otel/log v0.3.0
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0
go: downloading go.opentelemetry.io/otel/sdk/log v0.3.0
go: downloading go.opentelemetry.io/proto/otlp v1.3.1
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0
go: downloading google.golang.org/grpc v1.64.0
go: downloading github.com/go-logr/logr v1.4.1
go: downloading go.opentelemetry.io/otel/metric v1.27.0
go: downloading golang.org/x/sys v0.21.0
go: downloading google.golang.org/protobuf v1.34.1
go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20240515191416-fc5f0ca64291
go: downloading github.com/go-logr/stdr v1.2.2
go: downloading github.com/cenkalti/backoff/v4 v4.3.0
go: downloading github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0
go: downloading github.com/google/uuid v1.6.0
go: downloading github.com/sosodev/duration v1.3.1
go: downloading golang.org/x/net v0.26.0
go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20240520151616-dc85e6b867a5
go: downloading golang.org/x/text v0.16.0
internal/dagger/dagger.gen.go:23:2: package dagger/test/internal/querybuilder is not in std (/usr/local/go/src/dagger/test/internal/querybuilder)
internal/dagger/dagger.gen.go:24:2: package dagger/test/internal/telemetry is not in std (/usr/local/go/src/dagger/test/internal/telemetry)
```
The errors refer to the source file at `/src/internal/dagger/dagger.gen.go`. However, the imports it's erroring on are not the actual imports in the source file:
```
$ docker run --rm -it eriksipsma/corrupt-cache:latest head -n25 /src/internal/dagger/dagger.gen.go
// Code generated by dagger. DO NOT EDIT.
package dagger
import (
"context"
"encoding/json"
"errors"
"fmt"
"net"
"net/http"
"os"
"reflect"
"strconv"
"strings"
"github.com/Khan/genqlient/graphql"
"github.com/vektah/gqlparser/v2/gqlerror"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
"dagger/bare/internal/querybuilder"
"dagger/bare/internal/telemetry"
)
```
* Note that the errors refer to `dagger/test/internal/` but the actual imports in the source code are `dagger/bare/internal`
* **Also worth noting** that other containers do build source code with similar package layouts and contents *except* the import is `dagger/test/internal`. So it seems like `go build` here is somehow finding something in the cache from a previous build and incorrectly using it for this one.
The error goes away if you first clear the build cache and then run the same `go build` command:
```
$ docker run --rm -it eriksipsma/corrupt-cache:latest sh -c 'rm -rf /root/.cache/go-build && /usr/local/go/bin/go build -C /src .'
go: downloading go.opentelemetry.io/otel/sdk v1.27.0
go: downloading go.opentelemetry.io/otel/trace v1.27.0
go: downloading go.opentelemetry.io/otel v1.27.0
go: downloading github.com/99designs/gqlgen v0.17.49
go: downloading github.com/Khan/genqlient v0.7.0
go: downloading golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa
go: downloading golang.org/x/sync v0.7.0
go: downloading github.com/vektah/gqlparser/v2 v2.5.16
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.0.0-20240518090000-14441aefdf88
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.3.0
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0
go: downloading go.opentelemetry.io/otel/log v0.3.0
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0
go: downloading go.opentelemetry.io/otel/sdk/log v0.3.0
go: downloading go.opentelemetry.io/proto/otlp v1.3.1
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0
go: downloading google.golang.org/grpc v1.64.0
go: downloading github.com/go-logr/logr v1.4.1
go: downloading go.opentelemetry.io/otel/metric v1.27.0
go: downloading golang.org/x/sys v0.21.0
go: downloading google.golang.org/protobuf v1.34.1
go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20240515191416-fc5f0ca64291
go: downloading github.com/google/uuid v1.6.0
go: downloading github.com/sosodev/duration v1.3.1
go: downloading github.com/go-logr/stdr v1.2.2
go: downloading github.com/cenkalti/backoff/v4 v4.3.0
go: downloading github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0
go: downloading golang.org/x/net v0.26.0
go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20240520151616-dc85e6b867a5
go: downloading golang.org/x/text v0.16.0
```
### What did you expect to see?
For `go build` to not report errors that don't correspond to the source contents, and for the go build cache to not need to be cleared in order to get rid of the errors. | NeedsInvestigation,GoCommand | low | Critical |
2,498,445,737 | svelte | TypeError: Cannot read properties of null (reading 'nodeType') (Astro + Svelte 5) | ### Describe the bug
I am getting this error and my Svelte component will not load at all. It works perfectly fine in the dev environment, but when I build or run preview it stops working and the error is thrown in the console.
### Reproduction
https://gist.github.com/AdrianButler/4b04009c797bf37599c6763d09111a0b
This component is being used in an Astro file with the client directive of load
<FaqQuestion faqEntry={faqEntry} client:load />
This only happens in a production build; the component works perfectly fine in the dev environment.
### Logs
```shell
faq/:1 [astro-island] Error hydrating /_astro/FaqQuestion.B9nkJrIu.js TypeError: Cannot read properties of null (reading 'nodeType')
at Ut (template.BZob6Wd_.js:1:8993)
at G (FaqQuestion.B9nkJrIu.js:1:1234)
at render.Eek01tEQ.js:1:1270
at Sn (template.BZob6Wd_.js:1:5360)
at U (template.BZob6Wd_.js:1:6097)
at k (template.BZob6Wd_.js:1:2186)
at bt (template.BZob6Wd_.js:1:2805)
at render.Eek01tEQ.js:1:1212
at Sn (template.BZob6Wd_.js:1:5360)
at U (template.BZob6Wd_.js:1:6097)
```
### System Info
```shell
System:
OS: Windows 11 10.0.22631
CPU: (24) x64 13th Gen Intel(R) Core(TM) i7-13700K
Memory: 17.34 GB / 31.77 GB
Binaries:
Node: 22.6.0 - ~\AppData\Roaming\JetBrains\WebStorm2024.2\node\versions\22.6.0\node.EXE
npm: 10.8.2 - ~\AppData\Roaming\JetBrains\WebStorm2024.2\node\versions\22.6.0\npm.CMD
Browsers:
Edge: Chromium (127.0.2651.74)
Internet Explorer: 11.0.22621.3527
```
### Severity
blocking all usage of svelte | awaiting submitter | low | Critical |
2,498,456,502 | tauri | [bug] Yew app crash on macOS arm64 | ### Describe the bug
I can run Tauri locally with a few tech stacks with no issue. I tried React/JavaScript and Leptos, and they both work out of the gate. If I try the Yew version, I get a crash.
<img width="1112" alt="SCR-20240830-shon" src="https://github.com/user-attachments/assets/5ab003bb-f8b3-44b5-8c8b-f5c77f0ece70">
### Reproduction
```
> $ cargo install tauri-cli@^2.0.0-rc
> $ pnpm create tauri-app --rc
✔ Project name · heycast
✔ Identifier · com.heycast.app
✔ Choose which language to use for your frontend · Rust - (cargo)
✔ Choose your UI template · Yew - (https://yew.rs/)
Template created! To get started run:
cd heycast
cargo tauri android init
cargo tauri ios init
For Desktop development, run:
cargo tauri dev
For Android development, run:
cargo tauri android dev
For iOS development, run:
cargo tauri ios dev
> $ cargo tauri -V
tauri-cli 2.0.0-rc.8
> $ cargo tauri dev
2024-08-31T00:08:31.234664Z INFO Starting trunk 0.20.2
2024-08-31T00:08:31.235040Z INFO Found an update of trunk: 0.20.2 -> 0.20.3
2024-08-31T00:08:31.236523Z INFO starting build
Warn Waiting for your frontend dev server to start on http://localhost:1420/...
...
```
### Expected behavior
Basic Yew Tauri app window is displayed.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.4.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 22.4.1
- pnpm: 9.4.0
- yarn: 1.22.21
- npm: 10.8.1
- bun: 1.1.20
[-] Packages
- tauri 🦀: 2.0.0-rc.8
- tauri-build 🦀: 2.0.0-rc.7
- wry 🦀: 0.42.0
- tao 🦀: 0.29.1
- tauri-cli 🦀: 2.0.0-rc.8
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
```
### Stack trace
```text
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: heycast [27469]
Path: /Users/USER/*/heycast
Identifier: heycast
Version: ???
Code Type: ARM-64 (Native)
Parent Process: cargo-tauri [27325]
User ID: 501
Date/Time: 2024-08-30 21:03:41.0824 -0300
OS Version: macOS 14.4.1 (23E224)
Report Version: 12
Anonymous UUID: 95E3865B-6F4D-B84E-1BA5-6E22F31A919E
Sleep/Wake UUID: 81F5FEEC-0300-4540-A467-F884A16835C7
Time Awake Since Boot: 1100000 seconds
Time Since Wake: 354284 seconds
System Integrity Protection: enabled
Notes:
Extracting libpas PGM metadata failed.
Crashed Thread: 0 main Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: UNKNOWN_0x101 at 0x000000000bad4007
Exception Codes: 0x0000000000000101, 0x000000000bad4007
Termination Reason: Namespace SIGNAL, Code 10 Bus error: 10
Terminating Process: exc handler [27469]
VM Region Info: 0xbad4007 is not in any region. Bytes before following region: 4146348025
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 102d18000-10429c000 [ 21.5M] r-x/r-x SM=COW /Users/USER/*/heycast
Error Formulating Crash Report:
Extracting libpas PGM metadata failed.
Thread 0 Crashed:: main Dispatch queue: com.apple.main-thread
0 ??? 0xbad4007 ???
1 ImageIO 0x19bca90b8 IIOReadPlugin::callInitialize() + 584
2 ImageIO 0x19bca8dbc IIO_Reader::initImageAtOffset(CGImagePlugin*, unsigned long, unsigned long, unsigned long) + 128
3 ImageIO 0x19bca675c IIOImageSource::makeImagePlus(unsigned long, IIODictionary*) + 816
4 ImageIO 0x19bca6030 IIOImageSource::getPropertiesAtIndexInternal(unsigned long, IIODictionary*) + 72
5 ImageIO 0x19bca5f4c IIOImageSource::copyPropertiesAtIndex(unsigned long, IIODictionary*) + 24
6 ImageIO 0x19bca5df0 CGImageSourceCopyPropertiesAtIndex + 332
7 AppKit 0x194b5abf8 ImageSourceOptionsForCGImageSource_index_ + 64
8 AppKit 0x194b5aa78 +[NSBitmapImageRep _imagesWithData:hfsFileType:extension:zone:expandImageContentNow:includeAllReps:] + 428
9 AppKit 0x194c721ec +[NSBitmapImageRep imageRepsWithData:] + 68
10 AppKit 0x194c71abc -[NSImage initWithData:] + 76
11 heycast 0x1037e20a0 _$LT$$LP$A$C$$RP$$u20$as$u20$objc..message..MessageArguments$GT$::invoke::h642393208245ecf8 + 112
12 heycast 0x1037e9e58 objc::message::platform::send_unverified::_$u7b$$u7b$closure$u7d$$u7d$::ha6b00eb6dc604991 + 64
13 heycast 0x1037e5d98 objc_exception::try::_$u7b$$u7b$closure$u7d$$u7d$::h06d29c39dd259369 + 44
14 heycast 0x1037e497c objc_exception::try_no_ret::try_objc_execute_closure::h1edbe88ed1511a57 + 84
15 heycast 0x1037f3820 RustObjCExceptionTryCatch + 36
16 heycast 0x1037e3904 objc_exception::try_no_ret::h0cedd85773836f12 + 160
17 heycast 0x1037e5170 objc_exception::try::h2c12f23923cf3a29 + 72
18 heycast 0x1037eaeb0 objc::exception::try::hef6f47604fe281a9 + 12
19 heycast 0x1037e8ec8 objc::message::platform::send_unverified::hff66ef939bf0cd85 + 152
20 heycast 0x1037ccde0 _$LT$$BP$mut$u20$objc..runtime..Object$u20$as$u20$cocoa..appkit..NSImage$GT$::initWithData_::hc0e2ea75912bebc9 + 304
21 heycast 0x10329e52c tauri::app::on_event_loop_event::h9dc43a8a7691afe0 + 2260
22 heycast 0x10329913c tauri::app::App$LT$R$GT$::run::_$u7b$$u7b$closure$u7d$$u7d$::h1a9075821abe897d + 716
23 heycast 0x103024ab0 tauri_runtime_wry::handle_event_loop::h4e3cdd234b196e92 + 1064
24 heycast 0x103028790 _$LT$tauri_runtime_wry..Wry$LT$T$GT$$u20$as$u20$tauri_runtime..Runtime$LT$T$GT$$GT$::run::_$u7b$$u7b$closure$u7d$$u7d$::ha19c8d2e572f545b + 812
25 heycast 0x103457f60 _$LT$tao..platform_impl..platform..app_state..EventLoopHandler$LT$T$GT$$u20$as$u20$tao..platform_impl..platform..app_state..EventHandler$GT$::handle_nonuser_event::_$u7b$$u7b$closure$u7d$$u7d$::h5993903e620f01b3 + 652
26 heycast 0x103458370 tao::platform_impl::platform::app_state::EventLoopHandler$LT$T$GT$::with_callback::hfc43e193d0d9bb85 + 416
27 heycast 0x103457cc8 _$LT$tao..platform_impl..platform..app_state..EventLoopHandler$LT$T$GT$$u20$as$u20$tao..platform_impl..platform..app_state..EventHandler$GT$::handle_nonuser_event::hcaa4acc0266ef78c + 64
28 heycast 0x10366df54 tao::platform_impl::platform::app_state::Handler::handle_nonuser_event::h3ecaf91d511fc83b + 868
29 heycast 0x10366ea4c tao::platform_impl::platform::app_state::AppState::launched::h939fbb8bc9862ebc + 456
30 heycast 0x1036b6148 tao::platform_impl::platform::app_delegate::did_finish_launching::hf349c921e52efe2d + 80
31 CoreFoundation 0x19120ab1c __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 148
32 CoreFoundation 0x19129edb8 ___CFXRegistrationPost_block_invoke + 88
33 CoreFoundation 0x19129ed00 _CFXRegistrationPost + 440
34 CoreFoundation 0x1911d9648 _CFXNotificationPost + 768
35 Foundation 0x1922f5464 -[NSNotificationCenter postNotificationName:object:userInfo:] + 88
36 AppKit 0x194a7437c -[NSApplication _postDidFinishNotification] + 284
37 AppKit 0x194a7412c -[NSApplication _sendFinishLaunchingNotification] + 172
38 AppKit 0x194a72674 -[NSApplication(NSAppleEventHandling) _handleAEOpenEvent:] + 504
39 AppKit 0x194a72270 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 492
40 Foundation 0x19231d914 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 316
41 Foundation 0x19231d708 _NSAppleEventManagerGenericHandler + 80
42 AE 0x19822e9c4 0x198223000 + 47556
43 AE 0x19822e2ec 0x198223000 + 45804
44 AE 0x1982278a8 aeProcessAppleEvent + 488
45 HIToolbox 0x19b9bee90 AEProcessAppleEvent + 68
46 AppKit 0x194a6cc7c _DPSNextEvent + 1440
47 AppKit 0x19525edec -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 700
48 AppKit 0x194a5fcb8 -[NSApplication run] + 476
49 heycast 0x1037eb1a4 _$LT$$LP$$RP$$u20$as$u20$objc..message..MessageArguments$GT$::invoke::h02787475dd222354 + 72
50 heycast 0x1037ea4b0 objc::message::platform::send_unverified::_$u7b$$u7b$closure$u7d$$u7d$::hd907dadf8d1e20d7 + 60
51 heycast 0x1037e6480 objc_exception::try::_$u7b$$u7b$closure$u7d$$u7d$::h800ccf6a7766e113 + 44
52 heycast 0x1037e4f3c objc_exception::try_no_ret::try_objc_execute_closure::he69d457538489a2e + 76
53 heycast 0x1037f3820 RustObjCExceptionTryCatch + 36
54 heycast 0x1037e3edc objc_exception::try_no_ret::h6a82b812d258b81f + 144
55 heycast 0x1037e50ac objc_exception::try::h152ffc32e423669e + 72
56 heycast 0x1037ead88 objc::exception::try::h4ba74bc4b99e6bc6 + 12
57 heycast 0x1037e8bb8 objc::message::platform::send_unverified::he7dee9b5e469d0a7 + 136
58 heycast 0x103442dbc tao::platform_impl::platform::event_loop::EventLoop$LT$T$GT$::run_return::hf414d157b9fea638 + 1052
59 heycast 0x103443a14 tao::platform_impl::platform::event_loop::EventLoop$LT$T$GT$::run::ha29daad39ae6015e + 20
60 heycast 0x1031d8f18 tao::event_loop::EventLoop$LT$T$GT$::run::h6421474a517c4fd8 + 60
61 heycast 0x103028334 _$LT$tauri_runtime_wry..Wry$LT$T$GT$$u20$as$u20$tauri_runtime..Runtime$LT$T$GT$$GT$::run::h83540f765adf2d0f + 460
62 heycast 0x103298e14 tauri::app::App$LT$R$GT$::run::h8a5d48c22889a2d2 + 372
63 heycast 0x103299730 tauri::app::Builder$LT$R$GT$::run::hb952369b7433d295 + 104
64 heycast 0x103227a38 heycast_lib::run::he90b2fe8a5ec40aa + 652
65 heycast 0x102d1c524 heycast::main::hfa4ac8e4e2e94c0e + 12 (main.rs:5)
66 heycast 0x102d1c604 core::ops::function::FnOnce::call_once::h7e8152ac2a25b446 + 20 (function.rs:250)
67 heycast 0x102d1c570 std::sys_common::backtrace::__rust_begin_short_backtrace::hd0c643f86163099e + 24 (backtrace.rs:155)
68 heycast 0x102d1c6a8 std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::hca57a0c5e7da6495 + 28 (rt.rs:159)
69 heycast 0x103b2a494 std::rt::lang_start_internal::h27a134f18d582a1e + 640
70 heycast 0x102d1c674 std::rt::lang_start::h136c9ee7e14788aa + 84 (rt.rs:158)
71 heycast 0x102d1c550 main + 36
72 dyld 0x190dae0e0 start + 2360
Thread 1:
0 libsystem_pthread.dylib 0x191131d20 start_wqthread + 0
<...>
Model: Mac15,9, BootROM 10151.101.3, proc 16:12:4 processors, 48 GB, SMC
Graphics: Apple M3 Max, Apple M3 Max, Built-In
Display: Odyssey G8, 6720 x 3780, Main, MirrorOff, Online
Display: MB16QHG, 1600 x 2560, MirrorOff, Online
Display: Yam Display, 2732 x 2048, MirrorOff, Online
Memory Module: LPDDR5, Micron
AirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4388), wl0: Jan 13 2024 06:19:30 version 23.30.42.0.41.51.132 FWID 01-5ba6bbe8
AirPort:
Bluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports
Network Service: Thunderbolt Ethernet Slot 0, Ethernet, en8
Network Service: Wi-Fi, AirPort, en0
Network Service: iPad, Ethernet, en11
Network Service: Tailscale, VPN (io.tailscale.ipn.macos), utun4
PCI Card: ethernet, Ethernet Controller, Thunderbolt@3,0,0
USB Device: USB31Bus
USB Device: USB3.0 Hub
USB Device: TS4 USB3.2 Gen2 HUB
USB Device: TS4 USB3.2 Gen2 HUB
USB Device: USB3.2 Hub
USB Device: TS4 USB3.2 Gen2 HUB
USB Device: iPad
USB Device: TS4 USB2.0 Hub
USB Device: TPS DMC Family
USB Device: TS4 USB2.0 HUB
USB Device: composite_device
USB Device: MagSafe Charging Case
USB Device: TS4 USB2.0 HUB
USB Device: USB Receiver
USB Device: USB2.1 Hub
USB Device: TS4 USB2.0 HUB
USB Device: Creative Pebble Pro
USB Device: USB31Bus
USB Device: USB31Bus
Thunderbolt Bus: MacBook Pro, Apple Inc.
Thunderbolt Bus: MacBook Pro, Apple Inc.
Thunderbolt Bus: MacBook Pro, Apple Inc.
Thunderbolt Device: TS4, CalDigit, Inc., 1, 39.1
```
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,498,458,018 | vscode | [Accessibility] Tooltips under user settings for extensions missing aria attributes. | **Issue**: The "Applies to all profiles" button which opens the tooltip is missing role information and aria-describedby attribute.
**Impact**: Screen reader users will be unable to determine that these tooltips are present.
[Code Reference]
`<span class="setting-item-overrides setting-indicator" tabindex="0" style="display: inline;">Applies to all profiles</span>`
**Steps to repro:**
1. Open VScode Editor
2. Go to User settings -> For any extension setting, check for a tooltip like "Applies to all Profiles"
3. The screen reader doesn't announce the presence of the **tooltip** inside the toolbar.
For tooltips, the following information is expected:
- The control that opens the tooltip must have aria-describedby set to the ID of the tooltip element.
- The tooltip element must have role="tooltip".
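A sketch of markup that would satisfy both expectations, adapted from the snippet above (the `id` value and tooltip text here are illustrative, not VS Code's actual DOM):

```html
<!-- Trigger: references the tooltip it opens -->
<span class="setting-item-overrides setting-indicator"
      tabindex="0"
      aria-describedby="scope-tooltip-1">Applies to all profiles</span>

<!-- Tooltip element: exposed to assistive tech via the tooltip role -->
<div id="scope-tooltip-1" role="tooltip">
  Illustrative tooltip text describing the setting's scope.
</div>
```

With this pairing, a screen reader would announce the tooltip content when the trigger receives focus.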
I believe the Tooltip is being rendered by VSCode due to the `Scope` property being added in `contributes.configuration`
Reference: https://code.visualstudio.com/api/references/contribution-points#contributes.configuration
Screenshot of issue:
<img width="1784" alt="Screenshot 2024-09-02 at 9 58 39โฏPM" src="https://github.com/user-attachments/assets/9b466341-7424-44ed-928c-368824f2959c">
| bug,accessibility,settings-editor | low | Minor |
2,498,460,540 | godot | Engine.time_scale affects the display of button presses while using shortcuts | ### Tested versions
v4.3.stable.official [77dcf97d8] and v4.4.dev1.official [28a72fa43]
### System information
Debian GNU/Linux 12 (bookworm) 12 - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 470 Graphics (RADV POLARIS10) - AMD Phenom(tm) II X6 1090T Processor (6 Threads)
### Issue description
While switching buttons (toggle mode, in the same button group) with keyboard shortcuts, the `Engine.time_scale` value affects the time between selecting the new button and unselecting the old one. The problem does not occur when using the mouse to activate buttons (but buttons previously activated by shortcuts remain displayed as active).
### Steps to reproduce
`bug.tscn`:
- define 3 toggle buttons in the same button group, 2 with keyboard shortcuts
- set `Engine.time_scale = 0.05` on startup
usage:
0. use mouse to click on buttons → everything works OK
1. use keyboard shortcuts (`a` / `b`) to switch between `AAA` and `BBB` buttons → both buttons will be shown as selected
2. use mouse to select button `CCC` → all buttons will be shown as selected
3. use mouse to select button `AAA` → buttons `AAA` and `BBB` will be shown as selected
4. wait some time → only button `AAA` will be shown as selected
- decreasing `Engine.time_scale` value increases this waiting time
### Minimal reproduction project (MRP)
[Archive.zip](https://github.com/user-attachments/files/16822805/Archive.zip)
| bug,topic:input,topic:gui | low | Critical |
2,498,487,788 | terminal | Add an option to revert to old alpha-based selection colors | In v1.22, selection colors were changed significantly to make the selection have uniform foreground and background color. However, the new colors are imo hard to read due to excessive contrast and they interact poorly with text with varying background and foreground color (see image 4 and 5). Also, using glyphs to create rounded corners on blocks with different background colors breaks with the new selection behavior (see image 3).
Personally, I much prefer the previous alpha-based selection. Could you add an option to revert to the previous behavior?
v1.21, the pretty alpha-based selection:

v1.22, too much contrast, imo does not look as good:

---
Bad rendering for rounded corners using glyphs:

---
v1.22 hides all background colors:

v1.21 preserves background colors, which imo looks better:

| Area-Rendering,Area-Settings,Product-Terminal,Issue-Task | low | Minor |
2,498,548,038 | pytorch | `weight` argument of `nn.CrossEntropyLoss()` works with `int`, `complex` and `bool` type | ### ๐ Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) of `nn.CrossEntropyLoss()` says the `weight` argument has to be of floating point dtype, as quoted below:
> - weight ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor), optional) โ a manual rescaling weight given to each class. If given, has to be a Tensor of size C and floating point dtype
But the `weight` argument also works with `int`, `complex`, and `bool` dtypes, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([5., 3., 8.])
tensor2 = torch.tensor([2., 7., 1.])
cel = nn.CrossEntropyLoss(weight=torch.tensor([0, 1, 2])) # Here
cel(input=tensor1, target=tensor2)
# tensor(35.4949)
cel = nn.CrossEntropyLoss(weight=torch.tensor([0.+0.j, 1.+0.j, 2.+0.j])) # Here
cel(input=tensor1, target=tensor2)
# tensor(35.4949-0.j)
cel = nn.CrossEntropyLoss(weight=torch.tensor([True, False, True])) # Here
cel(input=tensor1, target=tensor2)
# tensor(6.1650)
```
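For context on why the non-float dtypes still produce a number: the weighted reduction only multiplies the per-class log-probability by the weight, and that multiplication is defined for `int` (and `bool`, where `True`/`False` act as 1/0) as well. A pure-Python sketch of the standard hard-target weighted cross-entropy formula (no torch; the function is illustrative, not PyTorch's implementation):

```python
import math

def weighted_cross_entropy(logits, target_idx, weights):
    # numerically stable log-softmax of the logits
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    # per-sample loss: -weight[target] * log_softmax(logits)[target];
    # the multiplication happily accepts int or bool weights, which is
    # presumably why a missing dtype check goes unnoticed
    return -weights[target_idx] * log_probs[target_idx]

logits = [5.0, 3.0, 8.0]
print(weighted_cross_entropy(logits, 1, [0, 1, 2]))        # int weights
print(weighted_cross_entropy(logits, 1, [0.0, 1.0, 2.0]))  # float weights: same value
```

Both calls print the same float, mirroring how integer and float weight tensors above yield numerically identical losses.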
### Versions
```python
import torch
torch.__version__ # 2.4.0+cu121
```
cc @svekars @brycebortree @tstatler | module: docs,triaged | low | Critical |
2,498,577,756 | pytorch | Segmentation fault would be triggered when using `torch.jit.script` and `torch._C._jit_shape_compute_graph_for_node`. | ### ๐ Describe the bug
A segmentation fault is triggered when using `torch.jit.script` together with `torch._C._jit_shape_compute_graph_for_node`. The code is as follows:
```python
import torch
@torch.jit.script
def foo(x, y):
return ()
mul_node = foo.graph.findNode('aten::mul')
mul_graph = torch._C._jit_shape_compute_graph_for_node(mul_node)
```
> Segmentation fault (core dumped)
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu` . Please find the colab [here](https://colab.research.google.com/drive/1IA_nMq9bal5ZlheIiJ7eyXWY6c9-Im2I?usp=sharing).
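Note that `foo` returns an empty tuple, so its graph contains no `aten::mul` node at all — `findNode` returns `None`, and passing that `None` into the native call appears to be what crashes. A binding like this should reject `None` with a Python exception instead of segfaulting; a pure-Python sketch of that defensive pattern (hypothetical function name and return value, no torch):

```python
def shape_compute_graph_for_node(node):
    # a native binding should validate its argument up front instead of
    # dereferencing a null pointer
    if node is None:
        raise ValueError("expected a JIT node, got None (did findNode match anything?)")
    return f"shape graph for {node}"

try:
    shape_compute_graph_for_node(None)  # mirrors passing findNode's None result
except ValueError as e:
    print("rejected cleanly:", e)
```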
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240815+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cu121
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cu121 pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,498,602,347 | pytorch | How to calculate second derivative using PyTorch with GPU (cuda) | ### ๐ The feature, motivation and pitch
I have a Python code segment from a deep RL algorithm that performs second-order optimization, computing the second derivative via the Hessian matrix and the Fisher information matrix. Normally I run the whole code on GPU (CUDA), but since I got the following error when calculating the second derivative on CUDA,
```
NotImplementedError: the derivative for '_cudnn_rnn_backward' is not implemented. Double backwards is not supported for CuDNN RNNs due to limitations in the CuDNN API. To run double backwards, please disable the CuDNN backend temporarily while running the forward pass of your RNN. For example:
with torch.backends.cudnn.flags(enabled=False):
output = model(inputs)
```
I had to move this code segment to the CPU, and now the code executes sequentially instead of in parallel, which takes a long time to run:
```
grads = torch.autograd.grad(policy_loss, self.policy.Actor.parameters(), retain_graph=True)
loss_grad = torch.cat([grad.view(-1) for grad in grads])
def Fvp_fim(v = -loss_grad):
with torch.backends.cudnn.flags(enabled=False):
M, mu, info = self.policy.Actor.get_fim(states_batch)
#pdb.set_trace()
mu = mu.view(-1)
filter_input_ids = set([info['std_id']])
t = torch.ones(mu.size(), requires_grad=True, device=mu.device)
mu_t = (mu * t).sum()
Jt = compute_flat_grad(mu_t, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True)
Jtv = (Jt * v).sum()
Jv = torch.autograd.grad(Jtv, t)[0]
MJv = M * Jv.detach()
mu_MJv = (MJv * mu).sum()
JTMJv = compute_flat_grad(mu_MJv, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True).detach()
JTMJv /= states_batch.shape[0]
std_index = info['std_index']
JTMJv[std_index: std_index + M.shape[0]] += 2 * v[std_index: std_index + M.shape[0]]
return JTMJv + v * self.damping
```
Above is the main function, where the second derivative is calculated. Below are the supporting functions and relevant classes it uses.
```
def compute_flat_grad(output, inputs, filter_input_ids=set(), retain_graph=True, create_graph=False):
if create_graph:
retain_graph = True
inputs = list(inputs)
params = []
for i, param in enumerate(inputs):
if i not in filter_input_ids:
params.append(param)
grads = torch.autograd.grad(output, params, retain_graph=retain_graph, create_graph=create_graph, allow_unused=True)
j = 0
out_grads = []
for i, param in enumerate(inputs):
if (i in filter_input_ids):
out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype))
else:
if (grads[j] == None):
out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype))
else:
out_grads.append(grads[j].view(-1))
j += 1
grads = torch.cat(out_grads)
for param in params:
param.grad = None
return grads
------
import torch
import torch.nn as nn
from agents.models.feature_extracter import LSTMFeatureExtractor
from agents.models.policy import PolicyModule
from agents.models.value import ValueModule
class ActorNetwork(nn.Module):
def __init__(self, args):
super(ActorNetwork, self).__init__()
self.FeatureExtractor = LSTMFeatureExtractor(args)
self.PolicyModule = PolicyModule(args)
def forward(self, s):
lstmOut = self.FeatureExtractor.forward(s)
mu, sigma, action, log_prob = self.PolicyModule.forward(lstmOut)
return mu, sigma, action, log_prob
def get_fim(self, x):
mu, sigma, _, _ = self.forward(x)
if sigma.dim() == 1:
sigma = sigma.unsqueeze(0)
cov_inv = sigma.pow(-2).repeat(x.size(0), 1)
param_count = 0
std_index = 0
id = 0
std_id = id
for name, param in self.named_parameters():
if name == "sigma.weight":
std_id = id
std_index = param_count
param_count += param.view(-1).shape[0]
id += 1
return cov_inv.detach(), mu, {'std_id': std_id, 'std_index': std_index}
```
In the bigger picture there are large amounts of batches going through this function, since all of 'em have to go sequentially through this function, it highly increases the total running time. Is there a possible way to calculate the second derivative with Pytorch while running on cuda/GPU?
### Alternatives
_No response_
### Additional context
_No response_
cc @csarofeen @ptrblck @xwang233 @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @mikaylagawarecki @zou3519 @Chillee @samdow @kshitij12345 | module: double backwards,module: cudnn,module: autograd,module: rnn,triaged,module: functorch | low | Critical |
2,498,612,163 | tensorflow | Aborted (core dumped): Check failed: d < dims() (1 vs. 1) | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I encountered an abort (`Check failed`) in TensorFlow when I used the API `array_ops.scatter_nd`. The code is as follows:
```python
import numpy as np
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
GRADIENT_TESTS_DTYPES = (dtypes.bfloat16, dtypes.float16, dtypes.float32, dtypes.float64)
def scatter_nd(indices, updates, shape):
return array_ops.scatter_nd(indices, updates, shape)
def testExtraIndicesDimensions():
indices = array_ops.zeros((1, 1, 2), dtypes.int32)
updates = array_ops.zeros([1], dtypes.int32)
shape = np.array((2, 2))
scatter = scatter_nd(indices, updates, shape)
testExtraIndicesDimensions()
```
> 2024-08-31 12:17:41.010855: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
> Aborted (core dumped)
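For context, `scatter_nd` requires `updates.shape == indices.shape[:-1] + shape[index_depth:]` where `index_depth = indices.shape[-1]`. Here `indices` has shape `(1, 1, 2)` and `shape` is `(2, 2)`, so a valid `updates` would have shape `(1, 1)` — the `(1,)` above is invalid, and the expected behavior is a Python-level shape error rather than a `Check failed` abort. A pure-Python sketch of that validation rule (no TensorFlow):

```python
def scatter_nd_shapes_ok(indices_shape, updates_shape, out_shape):
    # rule: updates.shape == indices.shape[:-1] + out_shape[index_depth:]
    index_depth = indices_shape[-1]
    expected = tuple(indices_shape[:-1]) + tuple(out_shape[index_depth:])
    return tuple(updates_shape) == expected

print(scatter_nd_shapes_ok((1, 1, 2), (1,), (2, 2)))    # False: should raise, not abort
print(scatter_nd_shapes_ok((1, 1, 2), (1, 1), (2, 2)))  # True: the valid updates shape
```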
### Standalone code to reproduce the issue
```shell
import numpy as np
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
GRADIENT_TESTS_DTYPES = (dtypes.bfloat16, dtypes.float16, dtypes.float32, dtypes.float64)
def scatter_nd(indices, updates, shape):
return array_ops.scatter_nd(indices, updates, shape)
def testExtraIndicesDimensions():
indices = array_ops.zeros((1, 1, 2), dtypes.int32)
updates = array_ops.zeros([1], dtypes.int32)
shape = np.array((2, 2))
scatter = scatter_nd(indices, updates, shape)
testExtraIndicesDimensions()
```
### Relevant log output
```shell
2024-08-31 12:17:41.010855: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
Aborted (core dumped)
```
I have confirmed that the above code crashes on `tf-nightly 2.18.0.dev20240817` (nightly build). | stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,498,640,369 | tensorflow | tensorflow.python.ops.state_ops.scatter_nd_update can cause a crash | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I encountered a `Segmentation fault` in TensorFlow when using the `state_ops.scatter_nd_update` API with empty indices.
I have confirmed that the code crashes on `tf-nightly 2.18.0.dev20240817` (nightly build).
### Standalone code to reproduce the issue
```python
import numpy as np
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import state_ops
def testSimpleResource():
indices = constant_op.constant([], dtype=dtypes.int32)
for dtype in (dtypes.int32, dtypes.bfloat16):
updates = constant_op.constant([], dtype=dtype)
ref = resource_variable_ops.ResourceVariable((0, 0, 0, 0, 0, 0, 0, 0), dtype=dtype)
scatter = state_ops.scatter_nd_update(ref, indices, updates)
testSimpleResource()
```
### Relevant log output
```shell
> Segmentation fault (core dumped)
```
| stat:awaiting tensorflower,type:support,comp:ops,2.17 | low | Critical |
2,498,656,182 | PowerToys | Rename File Locksmith to File & Folder Locksmith | ### Description of the new feature / enhancement
Rename File Locksmith to File & Folder Locksmith.
### Scenario when this would be used?
This would be used to tell that File Locksmith can also unlock folders.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,498,664,107 | go | cmd/go: do not apply a kill timeout to 'go test' if -bench was set | ### Go version
go version go1.23.0 linux/amd64
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
The code doesn't really matter, but for the sake of illustration suppose we have this benchmark:
```
func Benchmark100MS(b *testing.B) {
for i := 0; i < b.N; i++ {
time.Sleep(100 * time.Millisecond)
}
}
```
(You can test this with any benchmark.) Consider the following `go test` invocations:
1. `go test -bench . -benchtime 70s -timeout 1s`
2. `go test -c -o x.test ./x/x_test.go && ./x.test -test.bench . -test.benchtime 70s -test.timeout 1s`
### What did you see happen?
The tests behave differently. When I run the test using `go test`, it kills the test after 61s:
```
$ go test -bench 100MS -benchtime 70s -timeout 1s ./x/x_test.go
goos: linux
goarch: amd64
cpu: AMD Ryzen 9 3900X 12-Core Processor
Benchmark100MS-24 SIGQUIT: quit
PC=0x475921 m=0 sigcode=0
goroutine 0 gp=0x66bdc0 m=0 mp=0x66cc60 [idle]:
runtime.futex(0x66cda0, 0x80, 0x0, 0x0, 0x0, 0x0)
...
*** Test killed with quit: ran too long (1m1s).
exit status 2
FAIL command-line-arguments 61.073s
FAIL
```
When I run the test binary, the `-test.timeout` flag has no effect and the benchmark runs to completion:
```
$ go test -c -o x.test ./x/x_test.go && time ./x.test -test.bench . -test.benchtime 70s -test.timeout 1s
goos: linux
goarch: amd64
cpu: AMD Ryzen 9 3900X 12-Core Processor
Benchmark100MS-24 836 100261554 ns/op
PASS
./x.test -test.bench . -test.benchtime 70s -test.timeout 1s 0.03s user 0.08s system 0% cpu 1:33.95 total
```
### What did you expect to see?
I would expect the program to behave the same way whether invoked via `go test` or compiled and run directly.
What's going on here is a bit of a disagreement between the testing package and `go test`.
The testing package [applies the timeout to tests, fuzz tests, and examples, but not benchmarks](https://github.com/golang/go/blob/894ead51c5fe1c2a0c6b0bca473177c2b5f0f137/src/testing/testing.go#L2117-L2130). This is very much intentional; the old issue #18845 flagged a regression in this behavior and it was fixed in 470704531d93d1bcc24493abea882f99593bcac6. [The regression test](https://github.com/golang/go/blob/894ead51c5fe1c2a0c6b0bca473177c2b5f0f137/src/cmd/go/testdata/script/test_benchmark_timeout.txt) is still around: it checks that running a 1s benchmark using `-timeout 750ms` should pass.
`go test` [sets its own timer](https://github.com/golang/go/blob/894ead51c5fe1c2a0c6b0bca473177c2b5f0f137/src/cmd/go/internal/test/test.go#L799-L835) to kill stuck tests. That timer is set unless `-test.timeout` is 0 (disabled) or if `-test.fuzz` is set. The `go test` timer is set to the value of the test timeout plus one minute, since it expects the test to time itself out normally.
So there is an inconsistency here.
I think we should resolve the inconsistency by changing `go test` not to time out test binaries if `-test.bench` was set, as it already does for `-test.fuzz`. The testing package clearly goes out of its way to avoid having `-timeout` apply to benchmarks; `go test` should respect that.
This certainly comes up in practice. I work on a pretty large Go codebase. The 10m default test timeout works fine for us; our tests are generally plenty fast. But when I run benchmarks across lots of packages (this is common as part of the sanity checks when doing Go upgrades, for example) I usually forget to set `-timeout` and then after 11m my benchmarks are killed. (The odd 11m number is the 10m default timeout value plus `go test`'s extra minute.)
I noticed this when thinking about the related issue #48157 which is about adding per-test timeouts.
| NeedsFix,GoCommand | low | Major |
2,498,664,659 | tensorflow | tensorflow.python.ops.gen_math_ops.sparse_bincount can cause a crash | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240817
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I encountered a `Segmentation fault` issue in TensorFlow when using the `gen_math_ops.sparse_bincount` API. I have confirmed that the code crashes on `tf-nightly 2.18.0.dev20240817` (nightly build).
### Standalone code to reproduce the issue
```python
from tensorflow.python.ops import gen_math_ops
values = [0, 1, 2, 2]
binary = False
indices = [[], [], [990, 2], [2, 349]]
dense_shape = []
gen_math_ops.sparse_bincount(indices=indices, values=values, dense_shape=dense_shape, size=3, weights=[],binary_output=binary)
```
### Relevant log output
```shell
Segmentation fault (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,498,670,507 | go | net/http: TestTransportMaxConnsPerHostDialCancellation/h1 failures | ```
#!watchflakes
default <- pkg == "net/http" && test == "TestTransportMaxConnsPerHostDialCancellation/h1"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738119239626546913)):
=== RUN TestTransportMaxConnsPerHostDialCancellation/h1
transport_test.go:767: expected error context canceled, got <nil>
--- FAIL: TestTransportMaxConnsPerHostDialCancellation/h1 (0.00s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,498,701,090 | go | go/ast: SortImports crashes with invalid line number panic | Reproducer:
```go
func TestFormatCrash(t *testing.T) {
const src = `package main
import(
"a"//t
"a")
`
fs := token.NewFileSet()
f, err := parser.ParseFile(fs, "test.go", src, parser.ParseComments|parser.SkipObjectResolution)
if err != nil {
t.Fatal(err)
}
ast.SortImports(fs, f)
}
```
```
panic: invalid line number 4 (should be < 4) [recovered]
panic: invalid line number 4 (should be < 4)
goroutine 6 [running]:
testing.tRunner.func1.2({0x5842c0, 0xc0000263b0})
testing/testing.go:1706 +0x230
testing.tRunner.func1()
testing/testing.go:1709 +0x35e
panic({0x5842c0?, 0xc0000263b0?})
runtime/panic.go:785 +0x132
go/token.(*File).MergeLine(0xc0000322a0, 0x20?)
go/token/position.go:157 +0x1c8
go/ast.sortSpecs(0xc000110a00, 0xc000100280, {0xc000078080, 0x2, 0x2})
go/ast/import.go:200 +0xa36
go/ast.SortImports(0xc000110a00, 0xc000100280)
go/ast/import.go:40 +0x41d
```
SortImports is used by format.Node, so this issue also affects the go/format package. | NeedsInvestigation | low | Critical |
2,498,704,463 | pytorch | [torch.jit] Crash would be raised when using TorchScript modules | ### ๐ Describe the bug
A segmentation fault is triggered when using TorchScript modules, particularly in the context of saving, loading, and dealing with interface changes and named tuple definitions. The code is as follows:
```python
import io
from typing import NamedTuple
import torch
from torch import Tensor
from torch.testing._internal.jit_utils import clear_class_registry
def script_module_to_buffer(script_module):
module_buffer = io.BytesIO(script_module._save_to_buffer_for_lite_interpreter(_use_flatbuffer=True))
module_buffer.seek(0)
return module_buffer
class MyCoolNamedTuple(NamedTuple):
a: int
@torch.jit.interface
class MyInterface:
def bar(self, x: Tensor) -> Tensor:
pass
@torch.jit.script
class ImplementInterface:
def __init__(self) -> None:
pass
def bar(self, x):
return x
class Foo(torch.nn.Module):
interface: MyInterface
def __init__(self) -> None:
super().__init__()
self.foo = torch.nn.Linear(2, 2)
self.bar = torch.nn.Linear(2, 2)
self.interface = ImplementInterface()
def forward(self, x):
x = self.foo(x)
x = self.bar(x)
x = self.interface.bar(x)
return [x, MyCoolNamedTuple(a=5)]
first_script_module = torch.jit.script(Foo())
first_saved_module = script_module_to_buffer(first_script_module)
clear_class_registry()
@torch.jit.interface
class MyInterface:
def not_bar(self, x: Tensor) -> Tensor:
pass
@torch.jit.script
class ImplementInterface:
def __init__(self) -> None:
pass
def not_bar(self, x):
return x
class MyCoolNamedTuple(NamedTuple):
a: str
class Foo(torch.nn.Module):
interface: MyInterface
def __init__(self) -> None:
super().__init__()
self.foo = torch.nn.Linear(2, 2)
self.interface = ImplementInterface()
def forward(self, x):
return x
second_script_module = torch.jit.script(Foo())
second_saved_module = script_module_to_buffer(second_script_module)
clear_class_registry()
class ContainsBoth(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.add_module('second', torch.jit.load(second_saved_module))
self.add_module('first', torch.jit.load(first_saved_module))
def forward(self, x):
return x
sm = torch.jit.script(ContainsBoth())
contains_both = script_module_to_buffer(sm)
sm = torch.jit.load(contains_both)
```
Error messages:
```shell
Fail to import hypothesis in common_utils, tests are not derandomized
[W831 15:30:16.414731168 ir_emitter.cpp:4523] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[Tensor, __torch__.MyCoolNamedTuple]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement.
File "/data/test.py", line 49
x = self.bar(x)
x = self.interface.bar(x)
return [x, MyCoolNamedTuple(a=5)]
~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
(function emitListLiteral)
Segmentation fault (core dumped)
```
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu` .
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torchaudio==2.4.0.dev20240815+cpu
[pip3] torchvision==0.20.0.dev20240815+cpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cpu pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,498,705,488 | ui | [bug]: when i install block components, it is shown without css ,how can i adjust it? | ### Describe the bug
What it really looks like:

What is shown in the preview:

### Affected component/components
dashboard
### How to reproduce
just install it with npx shadcn@latest add "https://v0.dev/chat/b/Uwq1Bdy?token=eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..vf0EN8N8BtlZ-1Dz.wEcYc_vsXQ-u3-zykGUntD-z2hQGIjzmKfSvS362PWpGsoJP20Q.LtGCZju8ZjvsoamTnjsYwg"
### Codesandbox/StackBlitz link
npx shadcn@latest add "https://v0.dev/chat/b/Uwq1Bdy?token=eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..vf0EN8N8BtlZ-1Dz.wEcYc_vsXQ-u3-zykGUntD-z2hQGIjzmKfSvS362PWpGsoJP20Q.LtGCZju8ZjvsoamTnjsYwg"
### Logs
_No response_
### System Info
```bash
none
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,498,706,024 | godot | Subviewport content cannot be rendered correctly on MacOS Chrome after exporting to html5 | ### Tested versions
- Reproducible in Godot 4.2.stable๏ผ4.3.stable
### System information
MacOS 14.1(M3 Pro) - Godot v4.3.stable - Chrome 128.0.6613.114 (arm 64)
### Issue description
When I use `Subviewport` to draw content, whether it is through `SubviewportContainer` or `ViewportTexture`, after exporting to HTML5, it cannot be rendered normally on **Chrome**.
some additional information:
- When exporting to **HTML5** with Godot 4.2.stable, it cannot be rendered normally on **Safari 17.1** either.
- It seems that the two settings of *Render Target (Clear Mode, Update Mode)* on the Subviewport Inspect panel do not take effect.
- If MRP is uploaded to **itchio**, it will be rendered correctly on Chrome.
- The HTML5 export seems to render correctly in Chrome on **Windows**.
Scene Node Tree:
<img width="263" alt="ๆชๅฑ2024-08-31 15 05 12" src="https://github.com/user-attachments/assets/f82d9f94-4dc3-44b7-abae-5128a809e310">
Inspect pannel:
<img width="238" alt="ๆชๅฑ2024-08-31 15 14 13" src="https://github.com/user-attachments/assets/2b9e44fa-31ea-4c1a-8568-afcddb9e2462">
Expect View:
<img width="587" alt="ๆชๅฑ2024-08-31 15 05 29" src="https://github.com/user-attachments/assets/9c34d781-de8c-44b7-b17a-51d6444b1c8e">
View in Chrome:
<img width="663" alt="ๆชๅฑ2024-08-31 15 05 57" src="https://github.com/user-attachments/assets/26943f4e-cc26-4a1f-ad2b-458138e37d82">
### Steps to reproduce
Just add Subviewport to Scene and use Remote Debug - Run in Browser
<img width="297" alt="ๆชๅฑ2024-08-31 15 22 35" src="https://github.com/user-attachments/assets/2395bc78-67fb-4afd-bccc-56b2d21e5bc1">
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/16823782/MRP.zip)
| bug,platform:web,platform:macos,topic:rendering | low | Critical |
2,498,709,180 | vscode | Continuous Crash After Opening More Windows. | I Have both Latest Version of VS Code and Windows 11.
Whenever I open two or more windows in vscode, it continously crashes until i restart my pc. Even after restarting vscode completely clean, it happens more casually nowadays with the Recent Updates.
Look at these error codes:


| info-needed,freeze-slow-crash-leak | low | Critical |
2,498,711,868 | transformers | How to use Hugging Face for training: google-t5/t5-base | ### Feature request
How do I use Hugging Face for training?
https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation
What is the format and how do I write it?
```python
def batch_collator(data):
    print(data)  # What does `data` look like here?
    return {
        'pixel_values': torch.stack([x for x in pixel_values]),
        'labels': torch.tensor([x for x in labels])
    }

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=batch_collator,  # How should this be written?
    train_dataset=dataset['train'],
)
```
### Motivation
Me
### Your contribution
Me
I have already tried it and it works: https://www.kaggle.com/code/weililong/google-t5-t5-base
I'm not sure if anything is missing. | Usage,Feature request | low | Minor |
2,498,712,526 | godot | Enumeration option cannot be selected in exported variable of type enum | ### Tested versions
v4.4.dev1.official [28a72fa43]
### System information
w10 64
### Issue description
Note this code:
Father.gd has only one enumeration "TestEnum"

test.gd inherits from Father.gd
test.gd declares a new enumeration "TestEnum2"

If you initialize the enumeration so that the values do not match the enumeration in Father.gd, the "Test3" option in the test.gd enumeration will not be selectable in the editor.

Note the comments above each enumeration in "test.gd"
The "Test3" option becomes selectable when you assign the values โโto the test.gd enumeration in the same order as Father.gd has them.
### Steps to reproduce
Try selecting the "Test3" option in the enumeration in the editor
### Minimal reproduction project (MRP)
[testenum.zip](https://github.com/user-attachments/files/16823826/testenum.zip)
| enhancement,discussion,topic:gdscript,topic:editor | low | Major |
2,498,712,639 | PowerToys | Environment Variables Tool split paths wrongly | ### Microsoft PowerToys version
0.83.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Environment Variables
### Steps to reproduce
add `"c:\aaa;bb"` to PATH ,the path is valid, though it contains semicolon
but the Environment Variables Tool will split it to 2 paths:
`"C:\aaa` and `bb"`
### โ๏ธ Expected Behavior
do not split this path
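For illustration only, the quote-aware splitting the reporter expects could look like this sketch. PowerToys itself is not written in Python; this just pins down the expected behavior:

```python
def split_path_entries(path):
    """Split a PATH-style string on ';' while keeping quoted segments intact."""
    entries, current, in_quotes = [], "", False
    for ch in path:
        if ch == '"':
            in_quotes = not in_quotes  # toggle quoting state
            current += ch
        elif ch == ";" and not in_quotes:
            if current:
                entries.append(current)
            current = ""
        else:
            current += ch
    if current:
        entries.append(current)
    return entries

print(split_path_entries(r'"c:\aaa;bb";C:\other'))  # ['"c:\\aaa;bb"', 'C:\\other']
```

A semicolon inside double quotes is kept as part of the entry, matching how Windows itself resolves quoted PATH entries.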
### โ Actual Behavior
the path is split into 2 entries: `"C:\aaa` and `bb"`
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,498,714,315 | godot | [3.6.rc1] Existing meshes don't use vertex cache optimization? | ### Tested versions
- Reproducible in 3.6.rc1.
### System information
Windows 11 - Godot v3.6.rc1 - GLES3 - dedicated
### Issue description

In short, I have a ton of meshes saved as .res files. That helps to keep scene file sizes down, helps to manipulate materials of the surfaces, etc. However, as far as I understand, vertex cache optimization affects only reimported meshes (which I did, btw; I removed .import and reimported all assets). In this case, meshes saved in .res files may stay unaffected by this change. There are no methods in `Mesh` or `ArrayMesh` to optimize the vertex cache with GDScript. So, the question is, how can I apply this optimization method to existing mesh resources? Or is it applied automatically and just not documented to be that way?
### Steps to reproduce
- Have a project that you have developed in 3.5.3.
- Save mesh resources in any format you'd wish (.res in my case)
- Convert to 3.6.rc1.
- Remove the .import folder, then painfully wait for everything to reimport, because you should not have made that many assets.
- Play the game and notice no difference in framerates in any scenes.
... OR ...
- Open MRP with 3.6.rc1, it was made with version 3.5.3.
### Minimal reproduction project (MRP)
[MeshOptimizationMRP.zip](https://github.com/user-attachments/files/16823835/MeshOptimizationMRP.zip)
| feature proposal,discussion,documentation,topic:import | low | Major |
2,498,716,201 | rust | Tracking peak total storage use | <details><summary><h3>Related August 29th CI event</h3></summary>
On [August 29th, around 3:59 Pacific Daylight Time](https://github.com/rust-lang/rust/pull/129735#issuecomment-2317323833), our CI started to fail due to not having enough storage available. It merged a few PRs, but then about 10 hours later it merged the final PR that it would merge that day: https://github.com/rust-lang/rust/commit/0d634185dfddefe09047881175f35c65d68dcff1
It continued to fail for hours. The spotty CI passes were probably due to GitHub initiating a rollout to their fleet that took 12 hours to reach complete global saturation. At that moment, GitHub reduced the actual levels of storage offered to runners to levels that closely reflect their service agreement. See https://github.com/actions/runner-images/issues/10511 for more on that.
Eventually, I landed https://github.com/rust-lang/rust/pull/129797 which seemed to get CI going again.
</details>
## Do we take up too much space?
We have had our storage usage grow, arguably to concerning levels, over time. Yes, a lot compresses for transfer, but I'm talking about **peak** storage occupancy here. And tarballs are not a format that are conducive to accessing individual files, so in practice, the relevant data occupies hosts in its full, uncompressed glory nonetheless. We also generate quite a lot of build intermediates. Big ones. Some of this is unavoidable, but we should consider investigating ways to reduce storage occupancy of the toolchain and its build intermediates.
Besides, we are having issues keeping our storage usage under the amount available to CI, even if there are other aggravating events. Obviously, clearing CI storage space can be done as a dirty hack to get things running again, but changes that benefit the entire ecosystem are more desirable. However, note that a solution that reduces storage but *significantly increases the number of filesystem accesses*, especially during compiler or tool builds, is likely to make CI problems worse due to this fun little issue:
- https://github.com/rust-lang/rust/issues/127883
I'm opening this issue as a question, effectively: We track how much time the compiler costs, but what about space? Where are we tracking things like e.g. total doc size (possibly divided between libstd doc size and so on)? Are we aware of things like how much space is used by incremental compilation or other intermediates, and how it changes between versions? How about things like e.g. how many crater subjobs run out of space in each beta run? Where would someone find this information? | T-rustdoc,T-compiler,T-bootstrap,T-release,T-libs,C-discussion,A-CI,I-release-nominated | low | Major |
2,498,717,720 | material-ui | button css wrong when I use css variables | ### Steps to reproduce
The outlined button hover style is wrong:
https://codesandbox.io/p/sandbox/reverent-babbage-66yztk
```tsx
const theme = createTheme({
cssVariables: {
colorSchemeSelector: ".demo-disable-transition-%s",
},
palette: {
mode: "light",
primary: {
light: "var(--primary-light)",
main: "var(--primary)",
dark: "var(--primary-hover)",
contrastText: "var(--primary-foreground)",
// light: "#338cf5",
// main: "#0070f3",
// dark: "#004eaa",
// contrastText: "#ffffff",
},
},
colorSchemes: { dark: true },
});
```
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: css varibles button hover | package: system,customization: theme | low | Minor |
2,498,720,820 | yt-dlp | Upon error, delay & restart the download a user-defined number X of times with user-defined Y seconds delay between each time. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
My suggestion is prompted by but is also not limited to the current issue:
ERROR: [youtube] ... Sign in to confirm you're not a bot. This helps protect our community.
- I believe this solution will solve this specific youtube error for many users - especially when implemented proactively, before the issue develops for a current IP address.
- Even so, this feature would also be useful as a general feature, not simply for youtube and not simply in response to that single error.
Allow me to provide the initial backstory. Firstly, as an important side note, my IP is not currently or generally blocked by youtube; but when too many requests are made in too short a period of time, youtube gives me this error. However, if I wait long enough before attempting to request more information, the error goes away. So, I suppose that I am temporarily blocked at such times, but the block is very brief, a matter of minutes. Yet if I were to continue to send requests despite the temporary block, I expect that the block would be extended or even become permanent.
In my case, I have a batch file which pulls multiple instances of yt-dlp sequentially; and using yt-dlp commands, I have set delays between each download (and each subtitle download) so as to attempt to avoid an IP block. Even so (and especially when pulling page requests for users with many videos), I occasionally get this error. Since there then is no actual download taking place, there is then also (& despite my set delays) no delay before the next request - resulting in many requests in rapid succession. I fear that this puts me at risk of a more substantial block.
Thankfully, oftentimes I leave the command window visible while I am performing other tasks on my computer. So, I often enough see the error when it is occurring. Upon seeing it, I then pause the script by literally using the "Pause" button on my keyboard. After a while, I then press any key to unpause the script; and now, it runs without the error. I essentially wait out the temporary ban.
So, I am requesting 1 feature with 2 aspects. Repeating my feature request title, I would like to have the following:
Upon error, delay & restart the download a user-defined number X of times with user-defined Y seconds delay between each time.
To flesh that out a bit more, I want yt-dlp to take an action upon seeing such an error. For that action, I want it to pause a user-defined number of seconds Y before attempting the download again. After retrying the download a user-defined number of times X, I want it to then skip the file.
Finally, I could easily see a use-case scenario where the script pauses indefinitely after the set number of attempts. At such a point, a user input would be requested as to whether to then skip or retry. This would be, for example, useful if an error was actually due to a loss of internet connection or if a site temporarily went down. Thusly, it would be able to be easily restarted manually by the user at the same point and after any connection issues were resolved. While not needed for my use-case scenario, this is merely suggested here as a proactive bit of coding in the same section.
So again, there are 2 reasons for my feature request:
(1) It will help prevent users from being blocked on sites such as youtube.
(2) It will allow temporarily blocked downloading to subsequently continue from the same point (not restarting an entire batch sequence), either automatically or with manual user input.
Recommended syntax:
--error-retries RETRIES
Number of retries for download upon an error
(default is 3), or "infinite" (DASH, hlsnative and ISM)
--error-sleep-interval SECONDS
Number of seconds to sleep
before each retry upon an error
when used alone or a lower bound of a range
for randomized sleep before each retry
(minimum possible number of seconds to sleep)
when used along with --error-max-sleep-interval
--error-max-sleep-interval SECONDS
Upper bound of a range for randomized sleep
before each retry upon an error
(maximum possible number of seconds to sleep).
Must only be used along with --error-sleep-interval
--pause-on-error
Pause downloading of further videos
(in the playlist or the command line)
if an error occurs.
This happens only after set number of attempted retries.
see also: --error-retries
--abort-on-error
Syntax to be considered and/or adjusted:
--abort-on-error
Abort downloading of further videos
(in the playlist or the command line)
if an error occurs.
Lines to add to description:
This happens only after set number of attempted retries.
see also: --error-retries
--pause-on-error
Reasoning: Currently, --abort-on-error does not consider any --error-retries but simply & immediately aborts. This should then consider retries first, if any are defined. Otherwise, this would behave as usual.
Naturally, --abort-on-error and --pause-on-error would be mutually exclusive.
Coincidental side notes:
Currently, there is an error on:
https://github.com/ytdl-org/youtube-dl?tab=readme-ov-file#options
The description for
--max-sleep-interval
refers to
--min-sleep-interval
which does not actually exist.
Properly, this simply should refer to
--sleep-interval
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement,triage | low | Critical |
2,498,732,397 | PowerToys | Open Color Picker Editor key combination | ### Description of the new feature / enhancement
It would be ideal if the Color Picker could offer the option to open the editor or only pick a color, depending on whether I hold a key combination, such as Ctrl, while clicking. This would allow me to open the editor when I need it without having to adjust the settings. By default I would just pick the color without opening the editor, and open the editor with Ctrl when necessary.
### Scenario when this would be used?
Sometimes when making textures in GIMP, I just want to pick a color from an image in the browser, for example. But sometimes I'd like to see what the PowerToys Color Picker suggests for similar colors.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,498,750,529 | ui | [bug]: Unexpected token when using shadcn add | ### Describe the bug
I get the following error when I try to call `npx shadcn add seperator`.
Error:
```
โ Checking registry.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Unexpected token '<', "<!DOCTYPE "... is not valid JSON
```
It is a project that used shadcn-ui.
I already did the update in the `components.json`
I use nextjs 14
### Affected component/components
seperator
### How to reproduce
1. Open an existing nextjs 14 (app router) project that contains shadcn-ui
2. update your components.json file (aliases)
3. Try to add the seperator with `npx shadcn add seperator`
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
> npx shadcn add seperator
โ Checking registry.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Unexpected token '<', "<!DOCTYPE "... is not valid JSON
```
### System Info
```bash
MacOS
Next.js 14
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,498,751,377 | godot | Code region folding is not remembered for some scripts | ### Tested versions
4.3
4.4 dev 61598c5
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description
When you fold regions in some scripts, they will be unfolded after an editor restart. I'm not sure what causes it exactly; it only applies to some scripts.
### Steps to reproduce
1. Open the attached script
2. Fold Quests region
3. Save
4. Restart editor
### Minimal reproduction project (MRP)
[regionbug.gd.txt](https://github.com/user-attachments/files/16824123/regionbug.gd.txt)
| bug,topic:editor,usability | low | Critical |
2,498,760,273 | ui | [bug]: Blocks themes switch failed in Docmentation | ### Describe the bug
[In the Documentation](https://ui.shadcn.com/blocks#charts-01), the block theme colors fail to change after clicking the theme switcher.
<img width="1000" alt="image" src="https://github.com/user-attachments/assets/bf26a528-d0b7-4a60-9cfe-769cedc71832">
### Affected component/components
Documentation/Blocks
### How to reproduce
1. Go to https://ui.shadcn.com/blocks#charts-01.
2. Click the theme switcher.
3. Blocks theme colors don't change.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Apple M1 pro
macOS 14.5
Google Chrome 128.0.6613.85
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,498,760,655 | flutter | Unexpected arrow emoji resolution on certain Android devices. | ### Steps to reproduce
- On the Android platform, entering certain arrow characters leads to wrong rendering.
- for example:
โโโโโ
โฌ
โก
- Github can show correct effect.
### Expected results
- expected results:
โโโโโ
โฌ
โก
### Actual results
- But in Flutter, it shows:
- Following is my app:

- Following is Xianyu (a Chinese counterpart of Poshmark):

### Code sample
- null
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
[โ] Flutter (Channel stable, 3.22.1, on Microsoft Windows [็ๆฌ 10.0.22631.4112], locale zh-CN)
โข Flutter version 3.22.1 on channel stable at D:\flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision a14f74ff3a (3 months ago), 2024-05-22 11:08:21 -0500
โข Engine revision 55eae6864b
โข Dart version 3.4.1
โข DevTools version 2.34.3
โข Pub download mirror https://pub.flutter-io.cn
โข Flutter download mirror https://storage.flutter-io.cn
[โ] Windows Version (Installed version of Windows is version 10 or higher)
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at C:\Users\wu_mi\AppData\Local\Android\sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: C:\\Program Files\\Android\\Android Studio\jbr\bin\java
โข Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
โข All Android licenses accepted.
| platform-android,framework,engine,c: rendering,has reproducible steps,P3,team-engine,triaged-engine,found in release: 3.24,found in release: 3.25 | medium | Major |
2,498,778,167 | ui | [bug]: New Shadcn CLI - found two bugs while testing | ### Describe the bug
Thank you for providing the new CLI. Currently testing and like it a lot. Found two bugs so far. This is my base tailwind.config.ts file:
```
import type { Config } from "tailwindcss";
const config: Config = {
darkMode: ["class"],
content: [
"./src/pages/**/*.{js,ts,jsx,tsx,mdx}",
"./src/components/**/*.{js,ts,jsx,tsx,mdx}",
"./src/app/**/*.{js,ts,jsx,tsx,mdx}",
],
theme: {
extend: {
backgroundImage: {
'gradient-radial': 'radial-gradient(var(--tw-gradient-stops))',
'gradient-conic': 'conic-gradient(from 180deg at 50% 50%, var(--tw-gradient-stops))'
},
borderRadius: {
lg: 'var(--radius)',
md: 'calc(var(--radius) - 2px)',
sm: 'calc(var(--radius) - 4px)'
},
colors: {
background: 'hsl(var(--background))',
foreground: 'hsl(var(--foreground))',
card: {
DEFAULT: 'hsl(var(--card))',
foreground: 'hsl(var(--card-foreground))'
},
popover: {
DEFAULT: 'hsl(var(--popover))',
foreground: 'hsl(var(--popover-foreground))'
},
primary: {
DEFAULT: 'hsl(var(--primary))',
foreground: 'hsl(var(--primary-foreground))'
},
secondary: {
DEFAULT: 'hsl(var(--secondary))',
foreground: 'hsl(var(--secondary-foreground))'
},
muted: {
DEFAULT: 'hsl(var(--muted))',
foreground: 'hsl(var(--muted-foreground))'
},
accent: {
DEFAULT: 'hsl(var(--accent))',
foreground: 'hsl(var(--accent-foreground))'
},
destructive: {
DEFAULT: 'hsl(var(--destructive))',
foreground: 'hsl(var(--destructive-foreground))'
},
border: 'hsl(var(--border))',
input: 'hsl(var(--input))',
ring: 'hsl(var(--ring))',
chart: {
'1': 'hsl(var(--chart-1))',
'2': 'hsl(var(--chart-2))',
'3': 'hsl(var(--chart-3))',
'4': 'hsl(var(--chart-4))',
'5': 'hsl(var(--chart-5))'
}
}
}
},
plugins: [require("tailwindcss-animate")],
};
export default config;
```
when I add this to the extend section:
```
fontFamily: {
sans: ['var(--font-geist-sans)'],
mono: ['var(--font-geist-mono)'],
},
```
the CLI crashes when trying to add an accordion using:
`npx shadcn@latest add accordion`
Also when I have this under the theme section:
```
container: {
center: true,
padding: '2rem',
screens: {
'2xl': '1400px',
},
},
```
and I try to add the accordion again, I noticed that the CLI changes:
`center: true` to `center: "true"`
### Affected component/components
e.g. accordion
### How to reproduce
See in description
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Nextjs 14
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,498,781,013 | godot | SpringArm(x)D should also update children's transform at editor time | ### Tested versions
v4.4.dev
### System information
Godot v4.4.dev (a5830f6eb) - Windows 10.0.22621 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3080 (NVIDIA; 32.0.15.5585) - 12th Gen Intel(R) Core(TM) i9-12900K (24 Threads)
### Issue description
Currently, the children of a SpringArm3D node only update their transform at runtime. If I modify the properties of the SpringArm3D (spring_arm_length...), the transform of its children (for example a camera node) will be inconsistent between the editor preview and actual gameplay.
### Steps to reproduce

### Minimal reproduction project (MRP)
-- | enhancement,feature proposal,discussion,topic:editor,topic:physics | low | Minor |
2,498,783,243 | godot | EditorDebuggerRemoteObject in Inspector shows wrong object id | ### Tested versions
- Reproducible in v4.4.dev1.official.28a72fa43
### System information
Windows 11, Vulkan API 1.3.205 - Forward+ - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 3060 Ti
### Issue description
When debugging, the stack object's id is wrong.
As below, the id in the Inspector should be `9223372080660088240`, not `-9223371993049463376`
<img width="1489" alt="ๅฑๅนๆชๅพ 2024-08-31 181717" src="https://github.com/user-attachments/assets/2d3e1af4-e574-4494-ae38-c575fc097c54">
I notice there is a difference between two files:
https://github.com/godotengine/godot/blob/61598c5c88d95b96811d386cb20d714c35f4c6d7/editor/debugger/editor_debugger_inspector.cpp#L69-L75
https://github.com/godotengine/godot/blob/61598c5c88d95b96811d386cb20d714c35f4c6d7/editor/editor_properties.cpp#L1331-L1349
I guess it should be modified to:
```
String EditorDebuggerRemoteObject::get_title() {
if (remote_object_id.is_valid()) {
//return vformat(TTR("Remote %s:"), String(type_name)) + " " + itos(remote_object_id);
return vformat(TTR("Remote %s:"), String(type_name)) + " " + uitos(remote_object_id);
} else {
return "<null>";
}
}
```
Furthermore, the method `get_remote_object_id` also gives a wrong id (e.g. `-9223371993049463376`) in GDScript: https://github.com/godotengine/godot/blob/61598c5c88d95b96811d386cb20d714c35f4c6d7/editor/debugger/editor_debugger_inspector.h#L51
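The two ids are in fact the same 64-bit pattern, once reinterpreted as signed (`itos`) instead of unsigned (`uitos`), which supports the diagnosis above; a quick arithmetic check (illustrative Python, not Godot code):

```python
# The id printed by the Inspector is the correct 64-bit pattern reinterpreted
# as a signed integer (itos) instead of unsigned (uitos).
unsigned_id = 9223372080660088240
signed_id = unsigned_id - 2**64 if unsigned_id >= 2**63 else unsigned_id
print(signed_id)  # -9223371993049463376, exactly the wrong id shown
```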
### Steps to reproduce
NA
### Minimal reproduction project (MRP)
NA | topic:editor,needs testing | low | Critical |
2,498,790,559 | flutter | Custom font special characters not rendered correctly on Web with CanvasKit | ### Steps to reproduce
1. Clone the shadcn_ui repo https://github.com/nank1ro/flutter-shadcn-ui
2. Switch to the branch `feat/context-menu`
3. `cd example`
4. `flutter run -d chrome --web-renderer=canvaskit`
5. After the app launched, click on the "ContextMenu" list tile.
6. See error
If you relaunch the app with the html renderer, everything works as expected, like the other platforms.
The font family used is Geist. The error happens only with canvaskit, in other platforms and with the html renderer everything works as expected
See screenshots below.
Maybe it's related to https://github.com/flutter/flutter/issues/96222, https://github.com/flutter/flutter/issues/90452 and https://github.com/flutter/flutter/issues/56319
### Expected results
The font should work even when using the canvaskit renderer.
### Actual results
The font doesn't show special characters, or they load very slowly. Even when loaded, the characters are not aligned with the others. Like the `โ`
### Code sample
<details open><summary>Code sample</summary>
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
With HTML renderer it works correctly
<img width="430" alt="Screenshot 2024-08-31 at 12 45 50" src="https://github.com/user-attachments/assets/18317319-f8f4-4f4f-aaf0-d094e2d50817">
With CanvasKit the error happens
<img width="414" alt="Screenshot 2024-08-31 at 12 45 08" src="https://github.com/user-attachments/assets/71c92a99-0a09-458d-98c8-bc50f3f69cad">
</details>
### Logs
<details open><summary>Logs</summary>
https://pastebin.com/5hhVPPqA
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.1, on macOS 14.5 23F79 darwin-arm64, locale en-IT)
โข Flutter version 3.24.1 on channel stable at /Users/ale/fvm/versions/stable
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 5874a72aa4 (11 days ago), 2024-08-20 16:46:00 -0500
โข Engine revision c9b9d5780d
โข Dart version 3.5.1
โข DevTools version 2.37.2
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/ale/Library/Android/sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2023.2)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[โ] VS Code (version 1.92.2)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.94.0
[โ] Connected device (4 available)
โข iPhone di Ale (mobile) โข 00008030-000E39E01EEB802E โข ios โข iOS 17.6.1 21G93
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.5 23F79 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.5 23F79 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 128.0.6613.113
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| engine,platform-web,e: web_canvaskit,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.25 | low | Critical |
2,498,798,111 | godot | Disabling `restore_scripts_on_load` will skip loading whole script editor layout | ### Tested versions
4.3 and older
### System information
Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads)
### Issue description
Script editor has this condition at the beginning:
https://github.com/godotengine/godot/blob/61598c5c88d95b96811d386cb20d714c35f4c6d7/editor/plugins/script_editor_plugin.cpp#L3371-L3373
While it's supposed to stop scripts from being loaded at launch, it prevents restoring any layout property, like split offset or zoom (they are further down in the same function, e.g. https://github.com/godotengine/godot/blob/61598c5c88d95b96811d386cb20d714c35f4c6d7/editor/plugins/script_editor_plugin.cpp#L3447-L3449)
Settings unrelated to the scripts themselves should likely be moved above the first condition.
### Steps to reproduce
1. Disable `text_editor/behavior/files/restore_scripts_on_load` editor setting
2. Open script editor
3. Change size of script list
4. Reload editor
5. Notice list size is reset
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,498,800,711 | go | proposal: cmd/vet: warn about copying a time.Timer value | The following code works fine on Go 1.22, but breaks on Go 1.23.0 on the `http.DefaultClient.Do()` line. On Linux, it freezes the entire runtime and never continues. On macOS, it throws a `fatal error: ts set in timer`
```go
package main
import (
"fmt"
"net/http"
"time"
)
func main() {
illegalTimerCopy := *time.NewTimer(10 * time.Second)
go func() {
illegalTimerCopy.Reset(10 * time.Second)
}()
req, err := http.NewRequest(http.MethodGet, "https://example.com", nil)
fmt.Println("Request created", err)
_, err = http.DefaultClient.Do(req)
fmt.Println("Request completed", err)
}
```
It's presumably somehow caused by the [Go 1.23 timer changes](https://go.dev/wiki/Go123Timer), but I'm not sure how exactly, so I don't know if it's a bug or a feature. Assuming it's not a bug, it would be nice if `go vet` could report it, similarly to the mutex copy warnings. | Proposal | medium | Critical |
2,498,801,478 | godot | Bistro demo on D3D12 triggers TDR quite easily (Intel UHD 620) | ### Tested versions
- Reproducible on v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - dedicated Intel(R) UHD Graphics 620 (Intel Corporation; 31.0.101.2128) - Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz (8 Threads)
### Issue description
When opening or running https://github.com/Jamsers/Bistro-Demo-Tweaked (commit 6b0ee6c) using D3D12, Godot very often triggers a TDR:
- Functions start returning `0x887a0005` (which is `DXGI_ERROR_DEVICE_REMOVED`)
- When GPU validation is enabled, it produces:
```
D3D12 ERROR: ID3D12Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_DEVICE_HUNG: The Device took an unreasonable amount of time to execute its commands, or the hardware crashed/hung. As a result, the TDR (Timeout Detection and Recovery) mechanism has been triggered. The current Device Context was executing commands when the hang occurred. The application may want to respawn and fallback to less aggressive use of the display hardware). [ EXECUTION ERROR #232: DEVICE_REMOVAL_PROCESS_AT_FAULT]
```
This can happen very easily even when setting to windowed mode at 1280x720.
In comparison, the same project seems to run fine on the same system without TDR when using Vulkan, at fullscreen 2560x1440 resolution (albeit at a mediocre 2 fps).
### Steps to reproduce
- Use Intel integrated graphics?
- Open the project in the editor once to reimport resources
- Run the project with `--rendering-driver d3d12`
- Observe errors being printed to the console
### Minimal reproduction project (MRP)
n/a | bug,platform:windows,topic:rendering,crash | low | Critical |
2,498,817,253 | ui | [bug]: New Shadcn CLI for VITE React Projects - No Tailwind CSS configuration found | ### Describe the bug
When following the Vite project setup guide, you will get an error when you reach the step where you run the `pnpm dlx shadcn@latest init` command. This is because index.css is missing the Tailwind CSS directives. The documentation needs to be updated at step 2 to include them.
I've created a pull request with the fix here https://github.com/shadcn-ui/ui/pull/4676
### Affected component/components
None
### How to reproduce
1. Follow Documentation for setting up vite react project.
2. Get to step 6 and run the command to get the error.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
PS C:\Users\Dwight\Documents\Develop\Typescript\React\pet-form-lhs> npx shadcn@latest init
โ Preflight checks.
โ Verifying framework. Found Vite.
โ Validating Tailwind CSS.
โ Validating import alias.
No Tailwind CSS configuration found at C:\Users\Dwight\Documents\Develop\Typescript\React\pet-form-lhs.
It is likely you do not have Tailwind CSS installed or have an invalid configuration.
Install Tailwind CSS then try again.
Visit https://tailwindcss.com/docs/guides/vite to get started.
```
### System Info
```bash
Windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | high | Critical |
2,498,826,288 | vscode | Git - support for SHA256 repository | Type: <b>Bug</b>
Currently I have the following issues with a SHA-256 repository:
1. Each commit is shown as if it were a separate branch
2. "Copy Commit ID" copies only the first 40 hex digits of the commit ID
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Windows_NT x64 10.0.22631
Modes:
Extensions disabled
<!-- generated by issue reporter --> | feature-request,scm | low | Critical |
2,498,837,187 | PowerToys | Minimize on start | ### Description of the new feature / enhancement
Can you add a feature where PowerToys can be started with Windows but minimized? On my work laptop it always starts full screen, covering everything behind it, and I don't have access to do updates or any other way to minimize it. If you could add an option in the preferences to hide it in the notification area or minimize it on start, that would be helpful, since it is extremely annoying at the moment.
### Scenario when this would be used?
Whenever you start your computer and Windows loads.
### Supporting information
I work for the government so my computer is locked down and I'm not an administrative user. | Needs-Triage,Needs-Team-Response | low | Major |
2,498,844,353 | rust | `cargo build` and `cargo test` always expire each other build cache | # bug
A very common scenario invalidates the build cache.
## version
1.80.0
## edit
- I was always able to reproduce this before running `cargo clean`
- after `cargo clean`, I cannot reproduce it again
## actual
The Nth build is as expected.
```sh
$ cargo build --profile xxx && yarn workspaces foreach --all run postbuild
...
255.36s user 15.29s system 110% cpu 4:04.42 total
```
N+1 th build are also expected.
```sh
$ cargo build --profile xxx && yarn workspaces foreach --all run postbuild
...
15.90s user 1.18s system 100% cpu 16.910 total
```
```sh
$ cargo build --profile xxx && yarn workspaces foreach --all run postbuild
...
15.27s user 1.11s system 104% cpu 15.685 total
```
```sh
$ cargo build --profile xxx && yarn workspaces foreach --all run postbuild
...
15.65s user 1.07s system 104% cpu 16.081 total
```
**BUT unexpectedly**, `cargo test` seems to expire all the cache.
```sh
$ cargo test --profile xxx
...
391.13s user 24.45s system 114% cpu 6:04.12 total
```
The `cargo test` runs below are as expected.
```sh
$ cargo test --profile xxx
...
0.66s user 0.39s system 16% cpu 6.182 total
```
```sh
$ cargo test --profile xxx
...
0.45s user 0.23s system 122% cpu 0.555 total
```
**Unexpectedly**, `cargo build` expires the `cargo test` cache.
```sh
$ cargo build --profile xxx && yarn workspaces foreach --all run postbuild
...
247.31s user 15.54s system 110% cpu 3:58.70 total
```
```sh
$ cargo build --profile xxx && yarn workspaces foreach --all run postbuild
...
15.57s user 1.04s system 103% cpu 16.054 total
```
## expected
- `cargo test` should not expire `cargo build` cache.
- `cargo build` should not expire `cargo test` cache.
- there should be Rust docs on how to improve compile speed besides `sccache`
## additional
I keep seeing the **entire** dependency tree compiled again and again and again.
- Is there any solution or workaround to precompile all the libraries?
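One possible workaround (an assumption on my part, not a verified fix) is to give `cargo test` its own target directory via the `CARGO_TARGET_DIR` environment variable, so the two commands stop evicting each other's artifacts, at the cost of extra disk space:

```sh
# Keep build and test artifacts apart (directory names are illustrative).
cargo build --profile xxx                             # writes to target/
CARGO_TARGET_DIR=target-test cargo test --profile xxx # writes to target-test/
```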
| C-discussion | low | Critical |
2,498,858,895 | ui | [bug]: Could not install Dropdown menu | ### Describe the bug
Could not install component when using `@latest`

Works fine when using `@2.0.0`

### Affected component/components
Dropdown Menu
### How to reproduce
1. Run `npx shadcn@latest add dropdown-menu`
### Codesandbox/StackBlitz link
N/A
### Logs
_No response_
### System Info
```bash
Win 10
Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,498,859,361 | PowerToys | Keyboard Manager Doesn't Work | ### Microsoft PowerToys version
0.83.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Remap a key
### โ๏ธ Expected Behavior
Key to be remapped
### โ Actual Behavior
Key is not remapped, instead just does the original default action.
### Other Software
Windows 11 Pro | Issue-Bug,Needs-Triage | low | Minor |
2,498,896,629 | tauri | [bug] error: failed to run custom build command for `windows_x86_64_msvc v0.48.5` | ### Describe the bug
# version
```
rustup 1.27.1 (54dd3d00f 2024-04-24)
info: This is the version for the rustup toolchain manager, not the rustc compiler.
info: The currently active `rustc` version is `rustc 1.80.1 (3f5fd8dd4 2024-08-06)`
win11
```
# command
```
pnpm tauri dev
```
# desc
error: failed to run custom build command for `windows_x86_64_msvc v0.48.5`
Caused by:
could not execute process `D:\workspaces\rust\app-1\src-tauri\target\debug\build\windows_x86_64_msvc-73c75ea0da9a783c\build-script-build` (never executed)
Caused by:
ๆ็ป่ฎฟ้ฎใ(Access is denied.) (os error 5)
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `serde_json v1.0.127`
Caused by:
could not execute process `D:\workspaces\rust\app-1\src-tauri\target\debug\build\serde_json-684725161e9bb153\build-script-build` (never executed)
Caused by:
ๆ็ป่ฎฟ้ฎใ(Access is denied.) (os error 5)

### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
> app-1@0.1.0 tauri D:\workspaces\rust\app-1
> tauri "dev"
Running BeforeDevCommand (`pnpm dev`)
> app-1@0.1.0 dev D:\workspaces\rust\app-1
> vite
VITE v5.4.2 ready in 552 ms
โ Local: http://localhost:1420/
Info Watching D:\workspaces\rust\app-1\src-tauri for changes...
Compiling serde_derive v1.0.209
Compiling zerocopy-derive v0.7.35
Compiling thiserror-impl v1.0.63
Compiling cssparser v0.27.2
Compiling derive_more v0.99.18
Compiling cssparser-macros v0.6.1
Compiling html5ever v0.26.0
Compiling darling_core v0.20.10
Compiling windows_x86_64_msvc v0.48.5
Compiling regex-automata v0.4.7
Compiling serde_json v1.0.127
Compiling serde_derive_internals v0.29.1
Compiling brotli v6.0.0
Compiling ctor v0.2.8
Compiling glob v0.3.1
Compiling libc v0.2.158
error: failed to run custom build command for `windows_x86_64_msvc v0.48.5`
Caused by:
could not execute process `D:\workspaces\rust\app-1\src-tauri\target\debug\build\windows_x86_64_msvc-73c75ea0da9a783c\build-script-build` (never executed)
Caused by:
ๆ็ป่ฎฟ้ฎใ(Access is denied.) (os error 5)
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `serde_json v1.0.127`
Caused by:
could not execute process `D:\workspaces\rust\app-1\src-tauri\target\debug\build\serde_json-684725161e9bb153\build-script-build` (never executed)
Caused by:
ๆ็ป่ฎฟ้ฎใ(Access is denied.) (os error 5)
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: Windows,status: needs triage | low | Critical |
2,498,897,579 | create-react-app | eslint config should enable "react/jsx-key" | The eslint config from this repo is officially recommended in the [react docs](https://react.dev/learn/editor-setup#linting).
I'm surprised that various rules from eslint-plugin-react [recommended config](https://github.com/jsx-eslint/eslint-plugin-react/blob/master/index.js#L41C7-L64C9) are not enabled here.
For example, [react/jsx-key](https://github.com/jsx-eslint/eslint-plugin-react/blob/master/docs/rules/jsx-key.md) is important for a) decreasing the risk of bugs and b) getting rid of noise in console logs (which can swamp more important errors/warnings).
I'm wondering if there's a good reason for this?
If not, would be happy to submit a PR.
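For reference, until such a change lands, a project can enable the rule on top of the CRA config itself; a minimal `.eslintrc.js` sketch (assuming `eslint-plugin-react` is resolvable, as it is in CRA projects):

```js
// .eslintrc.js — extend CRA's config, then turn on react/jsx-key locally.
module.exports = {
  extends: ["react-app"],
  rules: {
    // Warn on missing `key` props in iterators and array literals.
    "react/jsx-key": "warn",
  },
};
```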
| issue: proposal,needs triage | low | Critical |
2,498,904,341 | deno | Deno Jupyter kernel fails to execute code (Docker) | I was able to install the Deno Jupyter kernel successfully, however unable to execute any code inside the Docker container
My Dockerfile:
```Dockerfile
FROM jupyter/base-notebook:latest
# Switch to root to install additional dependencies
USER root
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
unzip \
build-essential \
curl \
git \
jq \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Deno
RUN curl -fsSL https://deno.land/x/install/install.sh | sh && \
mv /home/jovyan/.deno/bin/deno /usr/local/bin/deno
# Install Deno Jupyter kernel
RUN deno jupyter --install
# Test
COPY template/test.ipynb /home/user/test.ipynb
RUN jupyter execute /home/user/test.ipynb
```
Test:
```
const x = 123;
x
```
```json
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\u001b[33m123\u001b[39m"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"const x = 123;\n",
"x"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Deno",
"language": "typescript",
"name": "deno"
},
"language_info": {
"codemirror_mode": "typescript",
"file_extension": ".ts",
"mimetype": "text/x.typescript",
"name": "typescript",
"nbconvert_exporter": "script",
"pygments_lexer": "typescript",
"version": "5.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
```
Output:
```
[+] Building 11.8s (10/10) FINISHED docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 646B 0.0s
=> [internal] load metadata for docker.io/jupyter/base-notebook:latest 0.5s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/6] FROM docker.io/jupyter/base-notebook:latest@sha256:8c903974902b0e9d45d9823c2234411de0614c5c98c4bb782b3d4f55b3e435e6 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 69B 0.0s
=> CACHED [2/6] RUN apt-get update && apt-get install -y --no-install-recommends unzip build-essential curl git jq && apt-get clean 0.0s
=> CACHED [3/6] RUN curl -fsSL https://deno.land/x/install/install.sh | sh && mv /home/jovyan/.deno/bin/deno /usr/local/bin/deno 0.0s
=> CACHED [4/6] RUN deno jupyter --install 0.0s
=> CACHED [5/6] COPY template/test.ipynb /home/user/test.ipynb 0.0s
=> ERROR [6/6] RUN jupyter execute /home/user/test.ipynb 11.3s
------
> [6/6] RUN jupyter execute /home/user/test.ipynb:
0.357 [NbClientApp] Executing /home/user/test.ipynb
10.54 Warning "deno jupyter" is unstable and might change in the future.
11.06 [NbClientApp] Executing notebook with kernel: deno
11.18 Traceback (most recent call last):
11.18 File "/opt/conda/bin/jupyter-execute", line 10, in <module>
11.18 sys.exit(main())
11.18 ^^^^^^
11.18 File "/opt/conda/lib/python3.11/site-packages/jupyter_core/application.py", line 280, in launch_instance
11.18 super().launch_instance(argv=argv, **kwargs)
11.18 File "/opt/conda/lib/python3.11/site-packages/traitlets/config/application.py", line 1052, in launch_instance
11.18 app.initialize(argv)
11.18 File "/opt/conda/lib/python3.11/site-packages/traitlets/config/application.py", line 117, in inner
11.18 return method(app, *args, **kwargs)
11.18 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/cli.py", line 114, in initialize
11.18 [self.run_notebook(path) for path in self.notebooks]
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/cli.py", line 114, in <listcomp>
11.18 [self.run_notebook(path) for path in self.notebooks]
11.18 ^^^^^^^^^^^^^^^^^^^^^^^
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/cli.py", line 157, in run_notebook
11.18 client.execute()
11.18 File "/opt/conda/lib/python3.11/site-packages/jupyter_core/utils/__init__.py", line 173, in wrapped
11.18 return loop.run_until_complete(inner)
11.18 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11.18 File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
11.18 return future.result()
11.18 ^^^^^^^^^^^^^^^
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/client.py", line 705, in async_execute
11.18 await self.async_execute_cell(
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/client.py", line 1001, in async_execute_cell
11.18 exec_reply = await self.task_poll_for_reply
11.18 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/client.py", line 783, in _async_poll_for_reply
11.18 await asyncio.wait_for(task_poll_output_msg, self.iopub_timeout)
11.18 File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 489, in wait_for
11.18 return fut.result()
11.18 ^^^^^^^^^^^^
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/client.py", line 813, in _async_poll_output_msg
11.18 self.process_message(msg, cell, cell_index)
11.18 File "/opt/conda/lib/python3.11/site-packages/nbclient/client.py", line 1099, in process_message
11.18 display_id = content.get('transient', {}).get('display_id', None)
11.18 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11.18 AttributeError: 'NoneType' object has no attribute 'get'
------
Dockerfile:24
--------------------
22 | # Smell test
23 | COPY template/test.ipynb /home/user/test.ipynb
24 | >>> RUN jupyter execute /home/user/test.ipynb
25 |
--------------------
ERROR: failed to solve: process "/bin/bash -o pipefail -c jupyter execute /home/user/test.ipynb" did not complete successfully: exit code: 1
```
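Incidentally, the last frame of that traceback is a classic `dict.get` gotcha rather than a missing key: `content.get('transient', {})` still returns `None` when the message carries `'transient': None` (which the deno kernel apparently sends here), so the chained `.get` raises. A minimal sketch of the failure and a defensive rewrite, assuming that message shape:

```python
# Simulated iopub message content: 'transient' is present but explicitly None,
# which is the shape the traceback above is consistent with.
content = {"transient": None}

# dict.get's default only applies when the key is MISSING, not when it is None,
# so this line reproduces the AttributeError from the log:
try:
    content.get("transient", {}).get("display_id", None)
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'get'

# Defensive pattern that tolerates both a missing key and an explicit None:
transient = content.get("transient") or {}
display_id = transient.get("display_id")
print(display_id)  # None, instead of crashing
```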
Any suggestions? | bug,deno jupyter | low | Critical |
2,498,908,659 | ui | [feat]: h3 tag empty in card component | ### Feature description
Please fix this:

prev:

```tsx
<h3
  ref={ref}
  className={cn(
    "text-2xl font-semibold leading-none tracking-tight",
    className
  )}
  {...props}
/>
```

changed:

```tsx
<h3
  ref={ref}
  className={cn(
    "text-2xl font-semibold leading-none tracking-tight",
    className
  )}
  {...props}
>
  {props.children || "Default Heading Text"}
</h3>
```
### Affected component/components
card
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,498,915,628 | go | testing: timeouts prevent deadlock detection from working | ### Go version
go version go1.23.0 linux/amd64
### Output of `go env` in your module/workspace:
<details><summary>Click to expand</summary>
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='~/.cache/go-build'
GOENV='~/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='~/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='~/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/lib/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/lib/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='on'
GOTELEMETRYDIR='~/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='0'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3043789877=/tmp/go-build -gno-record-gcc-switches'
```
</details>
### What did you do?
```go
// main.go
package main

func main() {
	select {}
}

// main_test.go
package main_test

import "testing"

func TestDeadlock(t *testing.T) {
	select {}
}
```
### What did you see happen?
```bash
$ go run main.go
fatal error: all goroutines are asleep - deadlock!
...
$ go test -timeout 0
fatal error: all goroutines are asleep - deadlock!
...
$ go test -timeout 1s # note that the default is 10m
panic: test timed out after 1s
running tests:
TestDeadlock (1s)
...
```
### What did you expect to see?
```diff
$ go run main.go
fatal error: all goroutines are asleep - deadlock!
...
$ go test -timeout 0
fatal error: all goroutines are asleep - deadlock!
...
$ go test -timeout 1s # note that the default is 10m
-panic: test timed out after 1s
- running tests:
- TestDeadlock (1s)
+fatal error: all goroutines are asleep - deadlock!
...
``` | NeedsInvestigation | low | Critical |
2,498,924,706 | PowerToys | Peek preview area flickers when jumping to next/previous file | ### Microsoft PowerToys version
0.83.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Peek
### Steps to reproduce
When jumping from one file to the next using the arrow keys with Peek open, the preview area flickers every single time it previews the next file.
This might be considered a minor issue, but it looks quite bad and it reflects badly on the polish of the app.
### โ๏ธ Expected Behavior
No flickering when switching between previewed files with the arrow keys
### โ Actual Behavior
The preview area is cleared for a millisecond before displaying the next file, making it flicker, especially if you quickly preview a bunch of files one after another.
### Other Software
_No response_ | Issue-Bug,Status-In progress,Product-Peek | low | Minor |
2,498,926,389 | ui | [feat]: cli base colors - use schema instead of hardcoded values | ### Feature description
Currently, the getRegistryBaseColors function and the build-registry.ts script use a hardcoded list of base colors. This approach means we can't add completely custom base colors to a custom registry.
Could we fetch the base colors from something like the colors/index.json file in the registry, or a new file like colors/base.json?
I'm not sure if you intend all colors to be base colors; if so, maybe a property on the schema to identify a color as base could be used?
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,498,978,025 | tensorflow | tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel raises a program abort | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When setting `num_bits` to a large integer, this API triggers a program abort.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
inputs = tf.constant(0.57681304)
min = tf.constant(2.1311088)
max = tf.constant(2.4402196)
num_bits = 10
narrow_range = False
tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel(inputs=inputs,min=min,max=max,num_bits=num_bits,narrow_range=narrow_range)
```
### Relevant log output
```shell
F tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1)
Aborted (core dumped)
```
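For intuition about what `num_bits` feeds into, the fake-quant arithmetic can be sketched in plain Python. This is a simplified model, not TensorFlow's implementation; note that the op's documentation expects `num_bits` in [2, 16], and that the per-channel variant expects 1-D `min`/`max` indexed by a channel dimension, so the scalar inputs in the repro may be what trips the `d >= 0` shape check:

```python
def fake_quant(x, mn, mx, num_bits=8, narrow_range=False):
    # 2^num_bits quantization levels over [mn, mx]; narrow_range drops the lowest.
    quant_min = 1 if narrow_range else 0
    quant_max = (1 << num_bits) - 1
    scale = (mx - mn) / (quant_max - quant_min)
    # Snap to the nearest level, clamp to the representable range, dequantize.
    q = quant_min + round((x - mn) / scale)
    q = max(quant_min, min(quant_max, q))
    return mn + (q - quant_min) * scale

mn, mx = 2.1311088, 2.4402196
print(fake_quant(2.2, mn, mx, num_bits=10))         # snapped to one of 1023 steps
print(fake_quant(0.57681304, mn, mx, num_bits=10))  # below mn, so clamped to mn
```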
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,498,984,056 | godot | Type 'Noise' does not provide any constructors in C# | ### Tested versions
Reproducible in Godot 4.3 stable dotnet and 4.2.2 stable dotnet, when using C#
### System information
Windows 11
### Issue description
From my limited experimentation, I assume that this is because the inbuilt Noise class is abstract. Whilst this isn't an issue with GDScript, it prevents C# scripts from inheriting from this class, as the language - as far as I know - requires classes to have a constructor, and should provide one by default - although that appears not to be the case here, because it's not an actual C# class, but a wrapper of some sort. IDK, I'm not really familiar with the inner workings of Godot.


### Steps to reproduce
Create a C# class that inherits from Noise, and try to build the project.
### Minimal reproduction project (MRP)
N/A | bug,topic:dotnet | low | Minor |
2,498,991,817 | material-ui | [base-ui][material-ui][Autocomplete] Better support for uncontrolled autocomplete in HTML form submissions | ### Summary
Autocomplete components should work in a regular HTML form submission, by accepting a `name` prop and adding `<input type="hidden" name={nameProp} value={value}/>` elements when the `value` prop is not being controlled.
If the `multiple` prop is `true`, then a list of hidden input elements should be added using field name array syntax (i.e. ``name={`${nameProp}[]`}``).
Docs should note that when using in uncontrolled mode, the `name` prop should be added to the autocomplete component, and not manually added to the element returned by `renderInput` (usually a TextField), to prevent duplicate entries in the form submission.
Ideally, there ought to be a mechanism to specify what the `value` of the hidden input should be, based on the selected option(s). For example in an autocomplete I may want to show a list of movie names, concatenated with the year. When the user selects one, I likely want to send the movie `id` in the form submission, not the option label.
Currently, I can create the hidden inputs manually fairly easily if it is a multi select, by adding these props:
```tsx
multiple
renderTags={(selectedOptions, getTagProps) =>
selectedOptions.map((value, index) => (
<React.Fragment key={value.id}>
<Chip {...getTagProps({ index })} label={`${value.title}, ${value.year}`} />
<input type="hidden" name={`movie_id[]`} value={value.id} />
</React.Fragment>
)
)
}
```
Or if it is a single select, and the value I want as part of the form submission is the same as the option label, I can just add the `name` prop to the TextField in `renderInput` and I don't need a hidden input:
```tsx
renderInput={(params) => (
<TextField
{...params}
name="movie"
label="Movie"
/>
)}
```
However, if it's a single select and the value I want as part of the form submission is not equal to the option label, I need to add the hidden input field to the `renderInput` prop, and ensure the field name is attached to the hidden input, not the TextField input. Since the selected `option` is not available in the `params` arg of `renderInput`, the only thing I can do is a reverse lookup to find the `id` based on the option label (and hope there are not two movies with the same title and year in my dataset).
```tsx
renderInput={(params) => (
<>
<TextField
{...params}
label="movie"
/>
<input
type="hidden"
name="movie_id"
value={getMovieId(params.inputProps.value) ?? ""}
/>
</>
)}
```
This is the only case where, as far as I can tell, it is impossible with the current API to achieve something satisfactory in user-land while still using an uncontrolled component. If the `renderInput` callback were provided the selected option as a second arg, we would at least be able to cover this base.
For completeness, I note that if you use state you can make even that last use-case work. However, enabling the Autocomplete component to operate as an uncontrolled component as part of a form is a big enough win for simplicity for me to feel this feature request is warranted (in many cases it has positive performance implications too).
### Examples
```tsx
/*
* This one renders a hidden input with initial value="" and name="movie_id"
* When the user selects an option, the value is updated to getOptionValue(option)
*/
const MyComponent = () => {
const options = useGetOptions()
return <Autocomplete
name="movie_id"
options={options}
getOptionLabel={option =>`${option.title}, ${option.year}`}
getOptionValue={option => option.id}
renderInput={params => (
<TextField
{...params}
label="Movie"
/>
)}
/>
}
/*
* This one renders one hidden input for each selected option with value=getOptionValue(option) and name="movie_id[]"
* When the user selects an option, the value is updated to getOptionValue(option)
*/
const MyMultiSelectComponent = () => {
const options = useGetOptions()
return <Autocomplete
name="movie_id"
options={options}
multiple
getOptionLabel={option =>`${option.title}, ${option.year}`}
getOptionValue={option => option.id}
renderInput={params => (
<TextField
{...params}
label="Movie"
/>
)}
/>
}
```
### Motivation
With React Router data router patterns, (or Remix) we are able to simplify some aspects of writing forms in React apps by using the normal HTML form pattern. With autocomplete components this is tricky.
**Search keywords**: uncontrolled autocomplete | new feature,package: material-ui,component: autocomplete,package: base-ui | low | Major |
2,499,023,189 | godot | Resource in @export Dictionary is labelled object | ### Tested versions
Godot v4.3.stable
### System information
Godot v4.3.stable - Nobara Linux 40 (GNOME Edition) - Wayland - Vulkan (Forward+) - integrated Intel(R) Xe Graphics (TGL GT2) - 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (8 Threads)
### Issue description
This type label says Object instead of Resource.
All the options that appear when clicking object are Resources.
You cannot use an Object in an `@export var`, just Resources (and others).

### Steps to reproduce
Create a Node with a script containing
```gdscript
@export var d: Dictionary
```
Create and edit an entry.
A list of options (types) appears. Here is the 'Object' label.
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/16825833/mrp.zip)
| discussion,topic:editor | low | Minor |
2,499,024,089 | godot | (macOS) Godot Editor forgets about GDExtension properties on extension reload, depending on the .gdextension file. | ### Tested versions
Reproducible in Godot 4.3.
### System information
Godot v4.3.stable - macOS 13.6.7 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 XT - 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz (16 Threads)
I have not tested 4.4-dev in-depth yet.
### Issue description
The format of the .gdextension file currently affects how 'well' the Godot editor is able to reload a GDExtension.
In the case that it is 'improperly' reloaded, the editor forgets about all properties / functions of the extension. Class names are retained. When the game is launched, it works as expected in all cases.
**Example of an error screen:**
```
--- Debugging process stopped ---
res://main.gd:6 - Parse Error: Static function "test_function()" not found in base "GDScriptNativeClass".
res://main.gd:6 - Parse Error: Static function "test_function()" not found in base "GDScriptNativeClass".
[...]
```


### Cases
At this point, I am _reasonably_ sure I can properly categorize the behavior into 3 camps:
When pointing directly to a `.framework` bundle, in all cases the editor forgets about the properties after a single click. It probably reloads instantly and fails the reload:
```ini
# All these cases behave the same way:
macos/numdot.macos.template_debug.x86_64.framework
./macos/numdot.macos.template_debug.x86_64.framework
res://addons/numdot/macos/numdot.macos.template_debug.x86_64.framework
res://./addons/numdot/macos/numdot.macos.template_debug.x86_64.framework
```
When pointing to a binary directly, the editor retains knowledge of the properties until the binary is rebuild. Then, it fails the reload:
```ini
# All these cases behave the same way:
macos/numdot.macos.template_debug.x86_64.framework/numdot.macos.template_debug.x86_64
res://addons/numdot/macos/numdot.macos.template_debug.x86_64.framework/numdot.macos.template_debug.x86_64
macos/numdot.macos.template_debug.x86_64.dylib
res://addons/numdot/macos/numdot.macos.template_debug.x86_64.dylib
```
When pointing to the binary directly, using a path that starts with `./`, knowledge of the binary is retained. However, changes to the documentation are only reloaded when the editor is unfocused and focused again:
```ini
# These cases behave the same way:
./macos/numdot.macos.template_debug.x86_64.framework/numdot.macos.template_debug.x86_64
res://./addons/numdot/macos/numdot.macos.template_debug.x86_64.framework/numdot.macos.template_debug.x86_64
./macos/numdot.macos.template_debug.x86_64.dylib
res://./addons/numdot/macos/numdot.macos.template_debug.x86_64.dylib
```
### Steps to reproduce
1) Create a blank gdextension with https://github.com/godotengine/godot-cpp
2) Register a static method via `godot::ClassDB::bind_static_method("GDExample", D_METHOD("example_function"), &GDExample::example_function);`
3) Open the godot editor
4) Run the game
5) Observe the amnesia
### Minimal reproduction project (MRP)
https://github.com/Ivorforce/gdextension-staticmethod-amnesia | bug,topic:gdextension | low | Critical |
2,499,031,691 | rust | Deadlock detected with parallel front-end in nightly | I ran into a `deadlock detected` panic while building with `RUSTFLAGS="-Z threads=4"` on nightly.
### Code
Sorry, I cannot provide a minimal code example as I saw it only once on a big codebase.
### Meta
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
```
### Error output
```
thread '<unnamed>' panicked at compiler/rustc_query_system/src/query/job.rs:543:9:
deadlock detected
...
error: could not compile `regex-syntax` (lib)
Caused by:
process didn't exit successfully: `/home/ubuntu/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/rustc --crate-name regex_syntax --edition=2021 /home/ubuntu/.cargo/registry/src/index.crates.io-6f17d22bba15001f/regex-syntax-0.7.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=208 --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debuginfo=2 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=eaa19fd5aec82aae -C extra-filename=-eaa19fd5aec82aae --out-dir /home/ubuntu/projects/my/target/debug/deps -L dependency=/home/ubuntu/projects/my/target/debug/deps --cap-lints allow -Z threads=4` (signal: 6, SIGABRT: process abort signal)
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
0: 0x79eee978aafc - std::backtrace_rs::backtrace::libunwind::trace::h5e85954398d12ce3
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/../../backtrace/src/backtrace/libunwind.rs:104:5
1: 0x79eee978aafc - std::backtrace_rs::backtrace::trace_unsynchronized::h7416598e16d9343f
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x79eee978aafc - std::sys_common::backtrace::_print_fmt::hc921464b54ab3722
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/sys_common/backtrace.rs:68:5
3: 0x79eee978aafc - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hac66d34f6a85c7ee
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/sys_common/backtrace.rs:44:22
4: 0x79eee97ddc80 - core::fmt::rt::Argument::fmt::hbdd6577954000d6e
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/core/src/fmt/rt.rs:142:9
5: 0x79eee97ddc80 - core::fmt::write::h6c4f087e5a6c1ba2
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/core/src/fmt/mod.rs:1120:17
6: 0x79eee977e9af - std::io::Write::write_fmt::hac30ac3c892ea11e
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/io/mod.rs:1762:15
7: 0x79eee978a8e4 - std::sys_common::backtrace::_print::h71743d6b3e3b1e41
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/sys_common/backtrace.rs:47:5
8: 0x79eee978a8e4 - std::sys_common::backtrace::print::h7f1e7fb2369c92a0
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/sys_common/backtrace.rs:34:9
9: 0x79eee978d577 - std::panicking::default_hook::{{closure}}::hf1c8571d28948191
10: 0x79eee978d2df - std::panicking::default_hook::hd959e15d96d8333b
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/panicking.rs:292:9
11: 0x79eeec4f3410 - std[5f6dbc7992e36f36]::panicking::update_hook::<alloc[73e8f31ebd2d06b4]::boxed::Box<rustc_driver_impl[1b37cd9153daf3ad]::install_ice_hook::{closure#0}>>::{closure#0}
12: 0x79eee978dcb8 - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::h583f85f885642a98
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/alloc/src/boxed.rs:2021:9
13: 0x79eee978dcb8 - std::panicking::rust_panic_with_hook::h51b7b3de85b330a5
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/panicking.rs:783:13
14: 0x79eee978d9d9 - std::panicking::begin_panic_handler::{{closure}}::h9f9c9a467f60d8ba
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/panicking.rs:649:13
15: 0x79eee978afc6 - std::sys_common::backtrace::__rust_end_short_backtrace::h037a6ad83e4b9233
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/sys_common/backtrace.rs:171:18
16: 0x79eee978d772 - rust_begin_unwind
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/panicking.rs:645:5
17: 0x79eee97da365 - core::panicking::panic_fmt::h9cec6616f663903f
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/core/src/panicking.rs:72:14
18: 0x79eeecdc2575 - rustc_query_system[886bde11500d09fb]::query::job::deadlock
19: 0x79eeec4eb63c - std[5f6dbc7992e36f36]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[c52b5cc031b18b22]::util::run_in_thread_pool_with_globals<rustc_interface[c52b5cc031b18b22]::interface::run_compiler<core[b21e5049fb800f2c]::result::Result<(), rustc_span[2f0fd5f8ae050e80]::ErrorGuaranteed>, rustc_driver_impl[1b37cd9153daf3ad]::run_compiler::{closure#0}>::{closure#0}, core[b21e5049fb800f2c]::result::Result<(), rustc_span[2f0fd5f8ae050e80]::ErrorGuaranteed>>::{closure#2}::{closure#1}, ()>
20: 0x79eeec4f5661 - <<std[5f6dbc7992e36f36]::thread::Builder>::spawn_unchecked_<rustc_interface[c52b5cc031b18b22]::util::run_in_thread_pool_with_globals<rustc_interface[c52b5cc031b18b22]::interface::run_compiler<core[b21e5049fb800f2c]::result::Result<(), rustc_span[2f0fd5f8ae050e80]::ErrorGuaranteed>, rustc_driver_impl[1b37cd9153daf3ad]::run_compiler::{closure#0}>::{closure#0}, core[b21e5049fb800f2c]::result::Result<(), rustc_span[2f0fd5f8ae050e80]::ErrorGuaranteed>>::{closure#2}::{closure#1}, ()>::{closure#1} as core[b21e5049fb800f2c]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
21: 0x79eee9797b75 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h60c39ebe8387f1c8
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/alloc/src/boxed.rs:2007:9
22: 0x79eee9797b75 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h1ab2eeceecb887d4
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/alloc/src/boxed.rs:2007:9
23: 0x79eee9797b75 - std::sys::unix::thread::Thread::new::thread_start::h5193a614b38f3ff0
at /rustc/2f8d81f9dbac6b8df982199f69da04a4c8357227/library/std/src/sys/unix/thread.rs:108:17
24: 0x79eee9494ac3 - <unknown>
25: 0x79eee9526850 - <unknown>
26: 0x0 - <unknown>
```
</p>
</details>
| I-ICE,T-compiler,C-bug,WG-compiler-parallel,S-needs-repro | low | Critical |
2,499,034,712 | godot | MacOSX OS.get_enviroment("PATH") and OS.execute() do not use correct env path, but use a a different version, thereby running an app from wrong path. | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Mac OS X Catalina 10.15
### Issue description
On Mac OSX, the environment **PATH** is different when retrieved from GDScript than from the **BASH** command line. This causes confusion if you are, for example, running `OS.execute("which",["awk"], value, true, true)` to locate an application to execute. This issue doesn't appear on Linux using the same commands.
To verify, I copied a test version of `/usr/bin/awk` to `/usr/local/bin/awk` and then ran the BASH commands followed by the GDScript below.
**First tested from command line:**
MacOSX Bash commands to determine shell and path
```
~$ echo $SHELL
/bin/bash
~$ echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:
~$ which awk
/usr/local/bin/awk
```
**Now test with Godot**
The same commands using GDScript: PATH comes up different, even though the shell is BASH.
```gdscript
print(OS.get_environment ("SHELL")) #/bin/bash
print(OS.get_environment ("PATH")) #/usr/bin:/bin:/usr/sbin:/sbin
var value=[]
var err=OS.execute("which",["awk"], value, true, true)
prints(err, value) #0 ["/usr/bin/awk\n"]
```
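This is consistent with GUI-launched apps on macOS inheriting a minimal environment instead of the PATH your shell profile builds (an assumption on my part, not a confirmed diagnosis). A plain-Python illustration on any POSIX system with bash, where the login-shell flag `-l` is what pulls in `/etc/profile` and `~/.bash_profile`:

```python
import subprocess

def bash_path(login: bool) -> str:
    # "-l" makes bash a login shell, which sources the profile files that
    # extend PATH; without it you see roughly what a GUI-launched app inherits.
    flags = "-lc" if login else "-c"
    result = subprocess.run(["bash", flags, 'printf %s "$PATH"'],
                            capture_output=True, text=True)
    return result.stdout

print("non-login PATH:", bash_path(False))
print("login PATH:    ", bash_path(True))
```

If that is indeed the cause, a workaround on the Godot side would be to route the lookup through a login shell, e.g. `OS.execute("bash", ["-lc", "which awk"], value)`, so `which` searches the same PATH your Terminal does.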
### Steps to reproduce
see above
### Minimal reproduction project (MRP)
na | bug,platform:macos,topic:porting | low | Minor |
2,499,040,936 | godot | [Linux] Popup menus staying open after switching to another window | ### Tested versions
Reproducible in:
- 4.2.2.stable.official.15073afe3
- 4.3.stable.arch_linux
- 4.4.dev1.official.28a72fa43
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Mon, 19 Aug 2024 17:02:39 +0000 - Wayland - Vulkan (Forward+) - integrated Intel(R) HD Graphics 520 (SKL GT2) - Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz (4 Threads)
### Issue description
When opening any popup menu and then opening another window or minimizing the current one, the menu stays open until you switch back to Godot and close it.
Expected behavior: the menu closes immediately after switching / minimizing.


### Steps to reproduce
- Trigger any action that opens a pop-up menu (for example, right-clicking on "Filter projects" in the Project Manager)
- Minimize the Godot window or switch to another one
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:editor,confirmed | low | Minor |
2,499,051,950 | create-react-app | BSA App | 
### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
(Write your answer here.)
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
(Describe your proposed solution here.)
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
(crear Una Cuenta )
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
-->
(Entrar A Mi BSA)
COPYRIGHT Reversed ©Banco San Antero 2024
| issue: proposal,needs triage | low | Minor |
2,499,065,852 | godot | Light2D only has a brightness precision of .01 | ### Tested versions
- Reproducible in: v4.2.2.stable [15073afe3] and later
- Likely Reproducible in all 4.x versions and possibly present in v3.x
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 31.0.15.5123) - Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (12 Threads)
### Issue description
Whenever changing the energy or brightness (_of the color_) of a Light2D, it can only display in steps of 0.01, and when trying to transition from 0 to 1 smoothly it visibly steps.
A workaround I originally thought of was dividing the DirectionalLight2D's color.v brightness and increasing the max of the energy, but it did not change the precision.
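One plausible explanation (an assumption on my part, not verified against the renderer) is that the lit result is quantized to 8-bit color channels somewhere in the pipeline, so a smooth float ramp collapses into discrete steps; the exact step size would then depend on where the quantization happens. A plain-Python model of that effect:

```python
def quantize_8bit(v: float) -> float:
    # Clamp to [0, 1] and snap to the nearest of the 256 representable levels.
    v = max(0.0, min(1.0, v))
    return round(v * 255) / 255

# A smooth ramp of 1000 brightness values collapses to only 256 distinct steps.
ramp = [quantize_8bit(i / 999) for i in range(1000)]
print(len(set(ramp)))  # 256
```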
### Steps to reproduce
Run the MRP and watch as the light noticeably steps between energy 0 and 9.
You can change the Max Energy to 0.9 and set the DirectionalLight2D's color to max brightness, and see the exact same behavior. There is no workaround for getting a smooth transition between minimum and maximum brightness.
**Main.gd**:
```gdscript
extends Node2D
@onready var sun: DirectionalLight2D = get_node("Sun")
## You can change the below value to .9, change the Sun's color and energy
## accordingly, and it will be the same.
@export var max_energy = 9.0
var speed = max_energy/45.0
func _process(delta: float) -> void:
sun.energy = (sun.energy + delta * speed)
if sun.energy >= max_energy:
speed = -max_energy/45.0
elif sun.energy <= 0:
speed = max_energy/45.0
print(sun.energy)
```
### Minimal reproduction project (MRP)
[light2d_precision_error.zip](https://github.com/user-attachments/files/16826170/light2d_precision_error.zip) | discussion,topic:rendering,topic:2d | low | Critical |
2,499,070,847 | yt-dlp | rule34video - resolution incorrectly extracted as `quality` | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
`yt-dlp --list-formats https://rule34video.com/video/3065296/lara-in-trouble-ep-7-wildeerstudio/`
The available formats are always listed as unknown.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip)
[debug] Python 3.10.7 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1q 5 Jul 2022)
[debug] exe versions: ffmpeg N-112480-g644b2235c5-20231019 (setts), ffprobe N-112480-g644b2235c5-20231019
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2024.02.02, mutagen-1.46.0, requests-2.32.3, sqlite3-3.37.2, urllib3-1.26.18, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[Rule34Video] Extracting URL: https://rule34video.com/video/3065296/lara-in-trouble-ep-7-wildeerstudio/
[Rule34Video] 3065296: Downloading webpage
[debug] Sort order given by user: proto:https
[debug] Formats sorted by: hasvid, ie_pref, proto:https(10), lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, vext, aext, hasaud, source, id
[info] Available formats for 3065296:
ID EXT RESOLUTION │ PROTO │ VCODEC  ACODEC
──────────────────────────────────────────
0  mp4 unknown    │ https │ unknown unknown
1  mp4 unknown    │ https │ unknown unknown
2  mp4 unknown    │ https │ unknown unknown
3  mp4 unknown    │ https │ unknown unknown
```
| NSFW,site-bug,triage | low | Critical |