Columns: id (int64, values 393k–2.82B), repo (string, 68 distinct values), title (string, 1–936 chars), body (string, 0–256k chars), labels (string, 2–508 chars), priority (string, 3 values), severity (string, 3 values)
2,501,014,784
transformers
Community contribution: Adding GGUF support for more architectures
### Feature request Recently, we have added the ability to load `gguf` files within [transformers](https://huggingface.co/docs/hub/en/gguf). <img src="https://github.com/user-attachments/assets/61df6455-6016-449e-a37f-9dfc7f918902" width="600"> The goal is to give users the ability to further train/fine-tune their gguf models. <details> <summary>See Workflow</summary> 1) Load the gguf file in transformers: we dequantize the weights to fp32, then load them for use with PyTorch. ```py from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF" filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf" tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename) model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename) ``` 2) Train/fine-tune 3) Convert the model back to gguf for use in the ggml ecosystem, using the [convert_hf_to_gguf](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) script, or the [gguf-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space if you pushed your model to the Hub: ```py tokenizer.save_pretrained('directory') model.save_pretrained('directory') !python ${path_to_llama_cpp}/convert_hf_to_gguf.py ${directory} ``` </details> Let's try to add GGUF support for more architectures! 
Currently supported architectures are - [x] Llama - [x] Mistral - [x] Qwen2 It would be great to add support for more architectures such as - [x] Phi3 https://github.com/huggingface/transformers/pull/31844 - [x] Qwen2Moe https://github.com/huggingface/transformers/pull/33264 - [ ] Gemma2 - [ ] T5 https://github.com/huggingface/transformers/pull/33389 - [x] Falcon https://github.com/huggingface/transformers/pull/33437 - [x] Bloom https://github.com/huggingface/transformers/pull/33473 - [ ] Codestral - [ ] dbrx - [ ] Deepseek (once it is added to transformers) - [x] StableLM https://github.com/huggingface/transformers/pull/33793 - [x] gpt2 https://github.com/huggingface/transformers/pull/34044 - [x] starcoder2 https://github.com/huggingface/transformers/pull/34094 ... and many more (feel free to suggest more architectures! The model needs to be integrated in transformers). Adding this feature requires following the same protocol as in this [PR](https://github.com/huggingface/transformers/pull/31175/files): 1) Update `GGUF_TENSOR_MAPPING` and `GGUF_CONFIG_MAPPING` in order to map the tensors/config of the gguf file to their transformers counterparts. 2) Create a `GGUFXXXConverter(XXXConverter)` class to convert the gguf tokenizer to a transformers one. 3) Write tests. If you are interested in taking up the challenge, comment below with the architecture name you want to integrate and open a PR! Once you open a PR, feel free to ping @SunMarc @LysandreJik @ArthurZucker for a review! ### Motivation Support for more gguf models ### Your contribution Reviewing PRs and possibly adding support for more models
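For contributors new to step 1, here is a rough, self-contained sketch of what the tensor-name remapping amounts to. The dictionary entries and the `rename_gguf_tensor` helper below are hypothetical illustrations, not the actual `GGUF_TENSOR_MAPPING` contents or transformers code; see the linked PR for the real names.

```python
# Hypothetical GGUF -> transformers name-component mapping (illustrative only;
# the real GGUF_TENSOR_MAPPING in transformers differs per architecture).
GGUF_TENSOR_MAPPING = {
    "token_embd": "model.embed_tokens",
    "blk": "model.layers",
    "ffn_up": "mlp.up_proj",
    "attn_norm": "input_layernorm",
    "output_norm": "model.norm",
}

def rename_gguf_tensor(name: str, mapping: dict[str, str]) -> str:
    """Replace each dotted GGUF name component with its transformers counterpart,
    leaving unmapped components (e.g. layer indices, 'weight') untouched."""
    parts = name.split(".")
    return ".".join(mapping.get(p, p) for p in parts)

print(rename_gguf_tensor("blk.0.ffn_up.weight", GGUF_TENSOR_MAPPING))
# -> model.layers.0.mlp.up_proj.weight
```

The real implementation also has to handle architecture-specific quirks (fused QKV weights, permuted attention heads), which is why each new architecture needs its own PR.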
Good Second Issue,Feature request
medium
Critical
2,501,017,096
next.js
Pages Router global _error.tsx is triggered instead of App Router global-error.tsx when a force-static page is used
### Link to the code that reproduces this issue https://github.com/holubiev/nextjs-global-error-reproduction-app ### To Reproduce 1. Build the production app and start it: `yarn build` && `yarn start` 2. Visit one of the [domain]/[lang] dynamic routes: _localhost:3000/example/en_ 3. See that the browser renders the Pages Router's __error.tsx_ page instead of the App Router's _global-error.tsx_ ### Current vs. Expected behavior Current behavior: When using the app directory in Next.js with the App Router and defining a static page via `export const dynamic = 'force-static'` in _page.tsx_, any error thrown on this page is unexpectedly handled by the Pages Router's global error handler (__error.tsx_) instead of being caught by the App Router's global error handler (_global-error.tsx_). This is inconsistent with expectations, as error handling crosses over between the two routing mechanisms (App Router and Pages Router), which is not documented anywhere in the Next.js documentation. Expected behavior: When an error is thrown on an App Router page in the app directory, particularly a static one (force-static), it should be handled by the App Router's designated global error handler (_global-error.tsx_). The Pages Router's global error handler (__error.tsx_) should not be triggered at all, ensuring that error handling is correctly scoped to the routing mechanism in use. The documentation should clearly state how force-static pages handle errors, specifically when using the App Router. This unexpected behavior leads to confusion and makes error handling inconsistent when using the App Router with statically generated pages. 
### Provide environment information ```bash Operating System: Platform: darwin Arch: x64 Version: Darwin Kernel Version 22.6.0: Mon Jun 24 01:25:37 PDT 2024; root:xnu-8796.141.3.706.2~1/RELEASE_X86_64 Available memory (MB): 8192 Available CPU cores: 4 Binaries: Node: 18.19.1 npm: 10.2.4 Yarn: 1.22.22 pnpm: 9.9.0 Relevant Packages: next: 15.0.0-canary.138 // Latest available version is detected (15.0.0-canary.138). eslint-config-next: N/A react: 19.0.0-rc-7771d3a7-20240827 react-dom: 19.0.0-rc-7771d3a7-20240827 typescript: 5.3.3 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Documentation, Module Resolution, Pages Router, Runtime ### Which stage(s) are affected? (Select all that apply) next start (local) ### Additional context Our use case requires throwing errors from force-static server components and handling these errors directly in the server environment. This is necessary because certain errors require server-side actions, such as performing a server redirect or returning a specific status code on the response page. When using `force-static` server component pages, we must ensure that static HTML is not generated and stored on disk, as this could lead to unnecessary storage usage for pages that are supposed to handle errors dynamically. Functions like `notFound()` or `redirect()` do not address our needs in this scenario because they generate static HTML, which is contrary to our goal of preventing static page generation for error cases. Currently, the Page Router's `_error.tsx` page provides a workaround by allowing server-side redirects within `Error.getInitialProps`. However, it is unclear whether this behavior will continue to be supported in future versions of Next.js, as it is not documented. Our primary concern is whether Next.js will continue to support handling errors from React Server Components (RSCs) in a server-side context in Next.js 15. 
This is particularly important for cases where error handling requires server logic, such as conditional redirects or dynamic response statuses. Is there a plan for Next.js to handle errors from force-static RSCs in the App Router in a way that aligns with these requirements?
Runtime,Pages Router,linear: next,Module Resolution
low
Critical
2,501,032,345
godot
Loading a large RichTextLabel takes a long time
### Tested versions 4.3.stable ### System information Pop!_OS 22.04 ### Issue description My app starts more slowly with every version I release. I've been debugging it for most of the day, only to find out that my changelog was the culprit. I've been using a RichTextLabel and just added to this label with every release. Right now, loading the scene which includes the release notes takes about 6 seconds on my 11th-gen i7. ### Steps to reproduce 1. Add a RichTextLabel to a scene. 2. Create a nice Lorem Ipsum and copy it a few dozen times into this RichTextLabel. 3. See the scene load slower with every paragraph added. ### Minimal reproduction project (MRP) [RichTextLabel.zip](https://github.com/user-attachments/files/16838332/RichTextLabel.zip)
bug,needs testing,topic:gui,performance
low
Critical
2,501,046,653
flutter
Offer a deep linking solution for the News toolkit
### Use case As an active user of the News Toolkit, I’ve used it twice, and on both occasions, clients specifically requested the implementation of a deep linking solution. In my experience, deep linking is an essential feature for news applications to enhance user engagement and provide a seamless user experience. A common scenario is when a user shares a link to a news article, but instead of opening directly in the app, the link redirects to the web, even if the app is installed. I believe adding deep linking functionality would increase the value of the toolkit. ### Proposal This feature request stems from an existing PR I created for the News Toolkit. The toolkit currently uses Firebase Dynamic Links to handle sign-in with email links. However, with the announcement that Dynamic Links are being deprecated, I created a PR that adds an abstraction for the deep link client, DeepLinkClient, and implements this abstraction using the current Dynamic Links setup. The abstraction would enable an easy transition from Dynamic Links to a new solution. Firebase has since communicated that this functionality will be replaced, not just deprecated; still, adding the abstraction could be a first step toward integrating deep link handling into the toolkit. My proposal is to build on top of the existing PR ([#1158](https://github.com/flutter/news_toolkit/pull/1158)) and implement the simplest form of deep link handling for the News Toolkit: opening a news article based on the link that opened the app. The most common scenario would involve extracting the slug from the link, which is usually the string following the last forward slash. For example: ``` https://blog.google/technology/ai/google-io-2024-100-announcements/ ``` The slug for this link would be `google-io-2024-100-announcements`. The initial scope of this feature would be to extract the slug from the link and add a search-by-slug API to the Dart Frog server.
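The slug-extraction step described above is simple enough to sketch. Shown here in Python for illustration (the toolkit itself is Dart); note the trailing slash in the example link, which a naive "everything after the last slash" split would mishandle:

```python
from urllib.parse import urlparse

def extract_slug(url: str) -> str:
    """Return the last non-empty path segment of a URL (the article slug)."""
    path = urlparse(url).path
    segments = [s for s in path.split("/") if s]  # drop empty parts from "//" or a trailing "/"
    return segments[-1] if segments else ""

print(extract_slug("https://blog.google/technology/ai/google-io-2024-100-announcements/"))
# -> google-io-2024-100-announcements
```

The Dart version would be the mirror image of this (e.g. `Uri.parse(url).pathSegments`), with the resulting slug passed to the proposed search-by-slug endpoint.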
c: proposal,team-news
low
Minor
2,501,047,512
awesome-mac
🎉 Add Blip
### 🪩 Provide a link to the proposed addition https://blip.net/ ### 😳 Explain why it should be added In cases where people have a Mac but not an iPhone, or an iPad but not a Mac, this app lets us send files seamlessly across devices, covering many such cases. ### 📖 Additional context _No response_ ### 🧨 Issue Checklist - [X] I have checked for other similar issues - [X] I have explained why this change is important - [ ] I have added necessary documentation (if appropriate)
addition
low
Minor
2,501,058,980
storybook
[Bug]: Control loses focus if subcomponents are defined
### Describe the bug Declaring a subcomponent for a component makes controls on the `docs` page lose focus when updating values. This is the most apparent when typing, but any value change causes the control to lose focus. I forked [the vite + react + ts stackblitz](https://stackblitz.com/github/storybookjs/sandboxes/tree/next/react-vite/default-ts/after-storybook?file=README.md&preset=node) and the only thing I altered was to add a subcomponent for the `Button` component. ### Reproduction link https://stackblitz.com/edit/github-ygdrxf ### Reproduction steps 1. Go to above link 2. Go to Button docs page 3. Try to type in `label` control ``` Expected: Typing to work as per normal Actual: Loses focus whenever a character is typed/removed ``` ### System ```bash Storybook Environment Info: System: OS: Linux 5.0 undefined CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz Shell: 1.0 - /bin/jsh Binaries: Node: 18.20.3 - /usr/local/bin/node Yarn: 1.22.19 - /usr/local/bin/yarn npm: 10.2.3 - /usr/local/bin/npm <----- active pnpm: 8.15.6 - /usr/local/bin/pnpm npmPackages: @storybook/addon-essentials: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/addon-interactions: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/addon-links: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/addon-onboarding: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/blocks: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/react: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/react-vite: ^8.3.0-beta.1 => 8.3.0-beta.1 @storybook/test: ^8.3.0-beta.1 => 8.3.0-beta.1 storybook: ^8.3.0-beta.1 => 8.3.0-beta.1 ``` ### Additional context _No response_
bug,needs triage
low
Critical
2,501,062,504
node
Reuse Blob Registry in Workers
Node.js currently lacks support for sharing blob object URLs with Workers. This feature request aims to enable that capability, allowing blob URLs to be used in Workers. Implementing this change would also integrate well with #54647. Ref: <https://openjs-foundation.slack.com/archives/C053UCCP940/p1725223124157009?thread_ts=1725214637.560639&cid=C053UCCP940> Ref: #54647 --- Currently, most browser runtimes support sharing Blobs with web workers; however, Deno does not support this behavior.
feature request,worker
low
Minor
2,501,078,138
rust
-Csoft-float flag is unsound
Current status: all use of the flag causes a warning (will be shipped in 1.83), announcing that it will become a hard error in the future. --- @bjorn3 just made me aware of this amazing flag: ``` -C soft-float=val -- use soft float ABI (*eabihf targets only) (default: no) ``` This is quite unsound: if code gets compiled with `-Csoft-float` and calls code from the standard library that uses the hard float ABI, we have UB. Generally we need different target triples for softfloat vs hardfloat ABIs, since (as per the discussion in https://github.com/rust-lang/lang-team/issues/235) code within a single target should be ABI compatible. Cargo even (unstably) allows overwriting `RUSTFLAGS` on a per-crate basis, so we better make sure crates compiled with different flags can be linked with each other. This was added a looooong time ago in https://github.com/rust-lang/rust/pull/9617. I couldn't find any discussion regarding its soundness. We have e.g. `arm-unknown-linux-musleabi` and `arm-unknown-linux-musleabihf`, so using the `*hf` target but with `-Csoft-float` also seems kind of unnecessary. (But I have not checked whether all `eabihf` targets have a corresponding `eabi` target.) According to the documentation, this can only be used by ARM targets. So paging in some ARM folks -- is this used in practice, and if yes, how do people avoid the soundness problems? @rustbot ping arm Note that this issue is **not about `-Ctarget-feature=+soft-float`**, see https://github.com/rust-lang/rust/issues/116344 for that.
A-LLVM,A-codegen,O-Arm,P-medium,T-compiler,I-unsound,A-ABI
medium
Critical
2,501,084,257
pytorch
_rebuild_sparse_tensor
### 🚀 The feature, motivation and pitch I am using a distributed framework to accelerate the training process. But when I transfer an object containing a sparse CSR tensor, an error occurs: ``` ray.exceptions.RaySystemError: System error: rebuilding sparse tensor for layout torch.sparse_csr traceback: Traceback (most recent call last): File "~/.conda/envs/pytorch_base/lib/python3.10/site-packages/torch/_utils.py", line 210, in _rebuild_sparse_tensor raise NotImplementedError("rebuilding sparse tensor for layout %s" % (layout)) NotImplementedError: rebuilding sparse tensor for layout torch.sparse_csr ``` Since the interface is not custom but provided for many other modules, we had to give up the module using sparse tensors, which broke our workflow. ### Alternatives _No response_ ### Additional context _No response_ cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
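For context, a CSR tensor is fully described by three component arrays, which is exactly what `_rebuild_sparse_tensor` would need to reassemble for this layout. The snippet below is a dependency-free illustration of that layout, using plain lists instead of tensors; in real code the rebuild would go through `torch.sparse_csr_tensor(crow_indices, col_indices, values, size)`.

```python
def csr_to_dense(crow, col, values, shape):
    """Expand CSR components (the same triple torch.sparse_csr_tensor takes)
    into a dense list-of-lists. crow[i]:crow[i+1] spans the nonzeros of row i."""
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(crow[i], crow[i + 1]):
            dense[i][col[k]] = values[k]
    return dense

# A 2x3 matrix with nonzeros at (0,1)=5 and (1,2)=7:
dense = csr_to_dense(crow=[0, 1, 2], col=[1, 2], values=[5, 7], shape=(2, 3))
# dense == [[0, 5, 0], [0, 0, 7]]
```

A possible interim workaround for frameworks like Ray is therefore to ship the three component tensors (plus the size) explicitly and rebuild the CSR tensor on the receiving side, bypassing the unsupported pickling path.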
module: sparse,triaged
low
Critical
2,501,084,929
ui
[bug]: input-otp type error on line 38
### Describe the bug Namely this line ⬇️ ```js const { char, hasFakeCaret, isActive } = inputOTPContext.slots[index] ``` Gives the following ts error on each destructured property: ```bash Property 'char' does not exist on type 'SlotProps | undefined'.ts(2339) const char: any ``` I solved it by asserting it as defined (`const { char, hasFakeCaret, isActive } = inputOTPContext.slots[index]!`), but I guess that's cheating ts in some sense, so I figured I'd report it just in case. It happened for me when I tried to build my turborepo, which is a private repo of my organization. I'll put the logs down just in case. ### Affected component/components input-otp ### How to reproduce Just install it, go to the ui/ folder, open input-otp.tsx, and the type error should arise. ### Codesandbox/StackBlitz link https://github.com/JohnFScha/monorepoFlexy ### Logs ```bash ┌ argentina#build > cache miss, executing f63b4d66d26ab565 │ │ > argentina@1.0.0 build C:\Users\user1\Desktop\Monorepo\monorepoFlexy\apps\argentina │ > next build │ │ ▲ Next.js 14.2.4 │ - Environments: .env │ │ Creating an optimized production build ... │ ✓ Compiled successfully │ Linting and checking validity of types ... ⨯ ESLint: Cannot read config file: C:\Users\user1\Desktop\Monorepo\monorepoFle │ xy\apps\argentina\.eslintrc.js Error: require() of ES Module C:\Users\user1\Desktop\Monorepo\monorepoFlexy\apps\argentina\.es │ lintrc.js from C:\Users\user1\Desktop\Monorepo\monorepoFlexy\node_modules\.pnpm\@eslint+eslintrc@2.1.4\node_modules\@eslint\e │ slintrc\dist\eslintrc.cjs not supported. .eslintrc.js is treated as an ES module file as it is a .js file whose nearest paren │ t package.json contains "type": "module" which declares all .js files in that package scope as ES modules. 
Instead either ren │ ame .eslintrc.js to end in .cjs, change the requiring code to use dynamic import() which is available in all CommonJS modules │ , or change "type": "module" to "type": "commonjs" in C:\Users\user1\Desktop\Monorepo\monorepoFlexy\apps\argentina\package.js │ on to treat all .js files as CommonJS (using .Failed to compile.ules instead). │ Linting and checking validity of types ... │ ../../packages/ui/src/components/ui/input-otp.tsx:38:11 │ Type error: Property 'char' does not exist on type 'SlotProps | undefined'. │ │ 36 | >(({ index, className, ...props }, ref) => { │ 37 | const inputOTPContext = React.useContext(OTPInputContext) │ > 38 | const { char, hasFakeCaret, isActive } = inputOTPContext.slots[index] │ | ^ │ 39 | │ 40 | return ( │ 41 | <div │  ELIFECYCLE  Command failed with exit code 1. │ │ command finished with error: command (C:\Users\user1\Desktop\Monorepo\monorepoFlexy\apps\argentina) C:\Users\user1\AppData\Lo │ cal\pnpm\pnpm.exe run build exited (1) ``` ### System Info ```bash I'm running on windows 11 pro, v10.0.22631 compilation 22631 16GB ram AMD Ryzen 5 4600G with Radeon Graphics, 3701 Mhz, 6 main cores, 12 logical cores ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,501,095,060
pytorch
Dynamo recompiles code on every call when arguments include nn.Module
### 🐛 Describe the bug When I `torch.compile` some code that receives an argument of type nn.Module, recompilation is triggered on each call with a different instance. I expected it to recompile only when attributes relevant to the code differ. Test code: ```python import torch import torch.nn as nn from torch import Tensor import logging # torch._logging.set_logs(guards=True) torch._logging.set_logs(recompiles=True, recompiles_verbose = True, fusion = True) Mod = nn.Module class A(Mod): def __init__(self, v:float): super().__init__() self.b = v def forward(self, x): return (x.sin()+self.b).sin() def forward(net, x:Tensor) -> Tensor: return net.forward(x).sum() dev = torch.device('cuda:0') compile_args = dict(fullgraph=True, dynamic = True, backend = "inductor", options={'force_same_precision':True, 'disable_cpp_codegen':False, 'trace.graph_diagram':True, "triton.cudagraphs": False}) cforward = torch.compile(forward, **compile_args) def compute(net): y = cforward(net, x) y.backward() start_e = torch.cuda.Event(enable_timing=True) end_e = torch.cuda.Event(enable_timing=True) x = torch.rand((100,100,100), device =dev, requires_grad=True) # for d in[1,2]: # torch._dynamo.mark_dynamic(x, d) nets = [A(float(1.0)) for j in range(3)] # nets = [nn.Sequential(*[A(float(j)) for j in range(3)]) for i in range(3)] # WARMUP/ COMPILE s = torch.cuda.Stream(device=dev) s.wait_stream(torch.cuda.current_stream()) with torch.cuda.device(dev): with torch.cuda.stream(s): # warmup on a side stream, according to examples # check correctness for i in range(3): y = forward(nets[i], x) y.backward() # warmup compiled for i in range(3): print(f"_______________Net {i}_______________") x.grad = None start_e.record() compute(nets[i]) end_e.record() torch.cuda.synchronize() t = start_e.elapsed_time(end_e) print(f'Time: {t} ms') print(x.grad.mean()) torch.cuda.current_stream().wait_stream(s) G = torch.cuda.CUDAGraph() with torch.cuda.graph(G): x.grad = None for i in range(3): 
compute(nets[i]) for i in range(3): print(f"_______________Compiled Graphed Iteration {i}_______________") tt = 0 for j in range(100): start_e.record() G.replay() end_e.record() torch.cuda.synchronize() t = start_e.elapsed_time(end_e) tt += t print(f'Time: {tt/100} ms') print(x.grad.mean()) ``` It prints ``` _______________Net 0_______________ V0902 15:57:19.824000 140183681587008 torch/_inductor/scheduler.py:1820] [0/0] [__fusion] ===== attempting fusion (1/10): 2 nodes ===== V0902 15:57:19.825000 140183681587008 torch/_inductor/scheduler.py:627] [0/0] [__fusion] cannot fuse buf0 with buf1: numel/rnumel mismatch (reduce) (123, 1), (((s0**3 + 122)//123), 123) V0902 15:57:19.826000 140183681587008 torch/_inductor/scheduler.py:2133] [0/0] [__fusion] found 0 possible fusions V0902 15:57:19.827000 140183681587008 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] completed fusion round (1/10): fused 2 nodes into 2 nodes V0902 15:57:19.827000 140183681587008 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] V0902 15:57:19.827000 140183681587008 torch/_inductor/scheduler.py:1834] [0/0] [__fusion] ===== fusion complete (1 iterations) ===== V0902 15:57:21.398000 140183681587008 torch/_inductor/scheduler.py:1820] [0/0] [__fusion] ===== attempting fusion (1/10): 1 nodes ===== V0902 15:57:21.399000 140183681587008 torch/_inductor/scheduler.py:2133] [0/0] [__fusion] found 0 possible fusions V0902 15:57:21.399000 140183681587008 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] completed fusion round (1/10): fused 1 nodes into 1 nodes V0902 15:57:21.399000 140183681587008 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] V0902 15:57:21.399000 140183681587008 torch/_inductor/scheduler.py:1834] [0/0] [__fusion] ===== fusion complete (1 iterations) ===== Time: 4855.01953125 ms tensor(0.1223, device='cuda:0') _______________Net 1_______________ V0902 15:57:22.948000 140183681587008 torch/_inductor/scheduler.py:1820] [0/1] [__fusion] ===== attempting fusion (1/10): 2 nodes 
===== V0902 15:57:22.949000 140183681587008 torch/_inductor/scheduler.py:627] [0/1] [__fusion] cannot fuse buf0 with buf1: numel/rnumel mismatch (reduce) (123, 1), (((s0**3 + 122)//123), 123) V0902 15:57:22.950000 140183681587008 torch/_inductor/scheduler.py:2133] [0/1] [__fusion] found 0 possible fusions V0902 15:57:22.950000 140183681587008 torch/_inductor/scheduler.py:1827] [0/1] [__fusion] completed fusion round (1/10): fused 2 nodes into 2 nodes V0902 15:57:22.950000 140183681587008 torch/_inductor/scheduler.py:1827] [0/1] [__fusion] V0902 15:57:22.950000 140183681587008 torch/_inductor/scheduler.py:1834] [0/1] [__fusion] ===== fusion complete (1 iterations) ===== V0902 15:57:23.437000 140183681587008 torch/_inductor/scheduler.py:1820] [0/1] [__fusion] ===== attempting fusion (1/10): 1 nodes ===== V0902 15:57:23.437000 140183681587008 torch/_inductor/scheduler.py:2133] [0/1] [__fusion] found 0 possible fusions V0902 15:57:23.437000 140183681587008 torch/_inductor/scheduler.py:1827] [0/1] [__fusion] completed fusion round (1/10): fused 1 nodes into 1 nodes V0902 15:57:23.437000 140183681587008 torch/_inductor/scheduler.py:1827] [0/1] [__fusion] V0902 15:57:23.438000 140183681587008 torch/_inductor/scheduler.py:1834] [0/1] [__fusion] ===== fusion complete (1 iterations) ===== Time: 1210.2095947265625 ms tensor(0.1223, device='cuda:0') _______________Net 2_______________ V0902 15:57:24.068000 140183681587008 torch/_inductor/scheduler.py:1820] [0/2] [__fusion] ===== attempting fusion (1/10): 2 nodes ===== V0902 15:57:24.069000 140183681587008 torch/_inductor/scheduler.py:627] [0/2] [__fusion] cannot fuse buf0 with buf1: numel/rnumel mismatch (reduce) (123, 1), (((s0**3 + 122)//123), 123) V0902 15:57:24.070000 140183681587008 torch/_inductor/scheduler.py:2133] [0/2] [__fusion] found 0 possible fusions V0902 15:57:24.070000 140183681587008 torch/_inductor/scheduler.py:1827] [0/2] [__fusion] completed fusion round (1/10): fused 2 nodes into 2 nodes V0902 
15:57:24.070000 140183681587008 torch/_inductor/scheduler.py:1827] [0/2] [__fusion] V0902 15:57:24.070000 140183681587008 torch/_inductor/scheduler.py:1834] [0/2] [__fusion] ===== fusion complete (1 iterations) ===== V0902 15:57:24.479000 140183681587008 torch/_inductor/scheduler.py:1820] [0/2] [__fusion] ===== attempting fusion (1/10): 1 nodes ===== V0902 15:57:24.479000 140183681587008 torch/_inductor/scheduler.py:2133] [0/2] [__fusion] found 0 possible fusions V0902 15:57:24.480000 140183681587008 torch/_inductor/scheduler.py:1827] [0/2] [__fusion] completed fusion round (1/10): fused 1 nodes into 1 nodes V0902 15:57:24.480000 140183681587008 torch/_inductor/scheduler.py:1827] [0/2] [__fusion] V0902 15:57:24.480000 140183681587008 torch/_inductor/scheduler.py:1834] [0/2] [__fusion] ===== fusion complete (1 iterations) ===== Time: 1076.0277099609375 ms tensor(0.1223, device='cuda:0') _______________Compiled Graphed Iteration 0_______________ Time: 0.20978688031435014 ms tensor(0.3670, device='cuda:0') _______________Compiled Graphed Iteration 1_______________ Time: 0.20953951954841613 ms tensor(0.3670, device='cuda:0') _______________Compiled Graphed Iteration 2_______________ Time: 0.2100307197868824 ms tensor(0.3670, device='cuda:0') ``` I expect that only Net 0 is compiled since Net 1 and Net 2 are different instances of A with the same attributes. Note, I request `recompiles=True` in logging, but it does not show anything, however from fusion attempts and the timing I can tell it tries to recompile. If I request `guards=True` in `set_logs`, I see ``` [__guards] | | +- ID_MATCH: ___check_obj_id(L['net'], 140280864840976) ``` which suggests that the compiled code is assumed valid only for exactly the same object of type inheriting nn.Module. 
If I let ``` Mod = object ``` Net 1 and Net 2 iterations correctly use the compiled code: ``` _______________Net 0_______________ V0902 16:06:52.766000 140478342887232 torch/_inductor/scheduler.py:1820] [0/0] [__fusion] ===== attempting fusion (1/10): 2 nodes ===== V0902 16:06:52.767000 140478342887232 torch/_inductor/scheduler.py:627] [0/0] [__fusion] cannot fuse buf0 with buf1: numel/rnumel mismatch (reduce) (123, 1), (((s0**3 + 122)//123), 123) V0902 16:06:52.768000 140478342887232 torch/_inductor/scheduler.py:2133] [0/0] [__fusion] found 0 possible fusions V0902 16:06:52.769000 140478342887232 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] completed fusion round (1/10): fused 2 nodes into 2 nodes V0902 16:06:52.769000 140478342887232 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] V0902 16:06:52.769000 140478342887232 torch/_inductor/scheduler.py:1834] [0/0] [__fusion] ===== fusion complete (1 iterations) ===== V0902 16:06:54.450000 140478342887232 torch/_inductor/scheduler.py:1820] [0/0] [__fusion] ===== attempting fusion (1/10): 1 nodes ===== V0902 16:06:54.451000 140478342887232 torch/_inductor/scheduler.py:2133] [0/0] [__fusion] found 0 possible fusions V0902 16:06:54.451000 140478342887232 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] completed fusion round (1/10): fused 1 nodes into 1 nodes V0902 16:06:54.451000 140478342887232 torch/_inductor/scheduler.py:1827] [0/0] [__fusion] V0902 16:06:54.451000 140478342887232 torch/_inductor/scheduler.py:1834] [0/0] [__fusion] ===== fusion complete (1 iterations) ===== Time: 4958.19677734375 ms tensor(0.1222, device='cuda:0') _______________Net 1_______________ Time: 1.074720025062561 ms tensor(0.1222, device='cuda:0') _______________Net 2_______________ Time: 0.5718399882316589 ms tensor(0.1222, device='cuda:0') _______________Compiled Graphed Iteration 0_______________ Time: 0.20977663949131967 ms tensor(0.3666, device='cuda:0') _______________Compiled Graphed Iteration 1_______________ Time: 
0.20968575939536094 ms tensor(0.3666, device='cuda:0') _______________Compiled Graphed Iteration 2_______________ Time: 0.2093523195385933 ms tensor(0.3666, device='cuda:0') ``` And finally I found a crazy workaround to actually allow nn.Module in arguments, by duplicating the class through re-importing it: ```python from importlib import reload def get_Mod(): mod = reload(torch.nn.modules.module) return mod.Module Mod = get_Mod() ``` This gives the same output as above for the case `Mod=object`. So it seems that dynamo has some special logic particularly for nn.Module? My real use case is compiling a block which is a list of layers inheriting from nn.Module. The network contains multiple blocks of the same kind but differing in values of some parameter / buffer tensors. There is a relevant case for the recurrent neural network as well: https://discuss.pytorch.org/t/how-to-compile-a-rnn-loop-once/191455?u=alexander_shekhovts ### Versions Collecting environment information... PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (GCC) 13.2.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.5 (main, Oct 2 2023, 09:22:39) [GCC 13.2.0] (64-bit runtime) Python platform: Linux-5.15.0-1062-nvidia-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA DGX Display GPU 4: NVIDIA A100-SXM4-40GB Nvidia driver version: 535.183.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 128 On-line CPU(s) list: 0-127 Vendor ID: AuthenticAMD 
Model name: AMD EPYC 7742 64-Core Processor CPU family: 23 Model: 49 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 2250,0000 CPU min MHz: 1500,0000 BogoMIPS: 4491.73 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es Virtualization: AMD-V L1d cache: 2 MiB (64 instances) L1i cache: 2 MiB (64 instances) L2 cache: 32 MiB (64 instances) L3 cache: 256 MiB (16 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-127 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec rstack overflow: Mitigation; safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.1.0 [pip3] torch==2.4.0 [pip3] torchvision==0.19.0 [pip3] triton==3.0.0 [conda] Could not collect cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec
triaged,oncall: pt2,module: dynamo,release notes: dynamo
low
Critical
2,501,100,796
TypeScript
NoInfer doesn't work in string template types
### 🔎 Search Terms NoInfer string template ### 🕗 Version & Regression Information This issue exists in every Playground-accessible version that has `NoInfer`. ### ⏯ Playground Link [here](https://www.typescriptlang.org/play/?ts=5.5.4#code/GYVwdgxgLglg9mABMAPAQQHwAoCGAuRAOTgEkxgBTAJ3QwBpEAjAtRCgDygrABMBnRHyhUYYAOaIA-IgAGVCgBs8AEgDeaAL4zEBMBQBu1AJSJVGgFDAsAIgCs1htflKA7NaMBuRAHpvbKlRwVIgAtIgQQfLQDFAAngAOFIhwwIisMALWbuaW4NDwSGK0uCwMzGlsnNz8gsKiEtJyiiqqxGSUNJhaOoh6hlQmZua+yHmwCIhFmCVpZSyVXLwCQiLiUkSk5NQoTUpqmjIYPX3GphYjoJDjhcX4s0wEbVudC9XLdWuNzi0HxwbUGEGFjENnsjm+bk8Pj8YDg-kCwTCfBAVHiIj49VCiDiiWSqXSAlB1kQAB9EFl3EA) ### 💻 Code ```ts function f<A>(a: NoInfer<A>, b: A extends string ? `rel:${A}` : never) {} f("5", "rel:7"); // error - correct, type of A is "7" ``` but ```ts function g<A>(a: A, b: A extends string ? `rel:${NoInfer<A>}` : never) {} g("5", "rel:7"); // no error - surprising - type of A is inferred to be ("5" | "7") ``` ### 🙁 Actual behavior no type error on `g()` above, because the `A` type is being inferred via the string template in spite of the `NoInfer` marker. ### 🙂 Expected behavior Both of the example calls should be recognized as wrong? ### Additional information about the issue Also tried ```ts function g<A>(a: A, b: A extends string ? NoInfer<`rel:${A}`> : never) {} function g<A>(a: A, b: NoInfer<A extends string ? `rel:${A}` : never>) {} ``` no luck.
Bug
low
Critical
2,501,101,584
godot
Begin Rotate/Scale/Translate Transformation wrapping doesn't keep the cursor inside the editor and fails.
- *Production edit: Related to https://github.com/godotengine/godot/issues/57119 and https://github.com/godotengine/godot/issues/78768.* ### Tested versions 4.3 ### System information Windows 11 - Vulkan - Nvidia RTX 4070 - intel i5 13600KF ### Issue description If you use any of the transformation shortcuts and move your mouse past the right/left edge, the cursor will almost always leave the editor window; the movement is then no longer tracked, so the transformation fails to continue. ### Steps to reproduce 1. Add a shortcut to any of the transformations above in the Editor Settings (default is None). 2. Add a MeshInstance3D and assign it any mesh so you can observe when the commands work and when they don't. 3. Use the shortcut command and move the mouse outside the Godot editor. 4. Observe that you don't even need to move it fast for it to not wrap properly. Expected correct result: Mouse cursor wraps - never leaves the Editor window - from right to left/left to right. ### Minimal reproduction project (MRP) None, you need to change Editor Settings.
bug,topic:editor,topic:3d
low
Minor
2,501,167,699
godot
App hangs on splash screen when running on iOS simulators
### Tested versions 4.3.stable.mono ### System information Godot v4.3.stable.mono - macOS 14.6.1 - GLES3 (Compatibility) - Apple M2 - Apple M2 (8 Threads) ### Issue description After exporting the project in Compatibility mode, running the generated Xcode project on the iPhone 13 (and iPhone 15) simulator causes the app to hang on the Godot splash screen. The following error occurs in libsystem_platform.dylib Thread 4: EXC_BAD_ACCESS (code=2, address=0x1070d0000) I have no problem running exports on an actual device but I really need to test on different screen sizes, particularly on devices where the safe area needs to be considered. I'm at the stage where my game is ready to be submitted for review and fixing any device specific issues would be far easier if I could use the simulators. ### Steps to reproduce Reproducible with a minimal Godot project. No code, just a single Node2D scene containing a ColorRect. ### Minimal reproduction project (MRP) N/A
bug,platform:ios,topic:thirdparty,needs testing,topic:export
low
Critical
2,501,172,318
TypeScript
Move to file incorrectly adds import for type available in destination file
### 🔎 Search Terms move to file self imports ### 🕗 Version & Regression Information - This is the behavior in every version I tried - I tried in 5.7.0-dev.20240826 ### ⏯ Playground Link _No response_ ### 💻 Code ```ts // file _a.ts export interface ISomething { } ``` ```ts // file _b.ts import { ISomething } from './a'; export function func(something: ISomething) { } ``` Select `func` and move it to `_a.ts` 🐛 Observe the result: ### 🙁 Actual behavior ```ts // file _a.ts import { ISomething } from './a'; // <------------------ incorrect import from itself export interface ISomething { } export function func(something: ISomething) { } ``` ### 🙂 Expected behavior ```ts // file _a.ts export interface ISomething { } export function func(something: ISomething) { } ``` ### Additional information about the issue _No response_
Bug
low
Minor
2,501,176,289
ollama
Model whitelisting for generate endpoint
I would like to add support for whitelisting the models that are usable with the generate endpoint. The model list should be provided via an env variable and checked before running the generate function. With this feature, admins will gain more control over the models that can be used by auto-coding tools, e.g. [continue](https://www.continue.dev/) or other services. Frequent requests from those applications will lead to huge delays if the wrong model is selected.
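Since the request is for an env-variable-driven whitelist checked before generation, a minimal sketch of the proposed gate might look like this. Note: ollama itself is written in Go, and the variable name `OLLAMA_ALLOWED_MODELS` is hypothetical; this is only a language-agnostic illustration of the check, not ollama's API.

```python
import os

def allowed_models() -> "set[str] | None":
    """Parse the hypothetical OLLAMA_ALLOWED_MODELS env variable.

    Returns None when the variable is unset or empty, which means
    every model remains allowed (today's behavior).
    """
    raw = os.environ.get("OLLAMA_ALLOWED_MODELS")
    if raw is None or raw.strip() == "":
        return None
    return {name.strip() for name in raw.split(",") if name.strip()}

def check_generate_request(model: str) -> bool:
    """Gate a /api/generate request before any model is loaded."""
    whitelist = allowed_models()
    return whitelist is None or model in whitelist
```

With `OLLAMA_ALLOWED_MODELS=llama3,codellama`, a request for `mistral` would be rejected before the (potentially slow) model load ever starts.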
feature request
low
Major
2,501,204,299
ui
[bug]: request to https://ui.shadcn.com/r/styles/index.json failed
### Describe the bug While trying to exec `shadcn@latest init`, the index.json request is timing out: ``` npx shadcn@latest init ✔ Preflight checks. ✔ Verifying framework. Found Next.js. ✔ Validating Tailwind CSS. ✔ Validating import alias. Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. request to https://ui.shadcn.com/r/styles/index.json failed, reason: connect ETIMEDOUT 76.76.21.164:443 ``` I'm able to access the json via web browser: ```[ { "name": "new-york", "label": "New York" }, { "name": "default", "label": "Default" } ] ``` ### Affected component/components core ### How to reproduce `npx shadcn@latest init` ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash macOS Sonoma ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
medium
Critical
2,501,206,137
vscode
Docs: Improve info on prelaunch tasks for debug configurations?
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.91.1 - OS Version: Archlinux (rolling) Steps to Reproduce: The documentation around "attach" debug configurations and "preLaunchTask" is quite limited. I want to debug a large python (fastapi) project with debugpy; and launching the project within the debugger takes over one minute. To speed up iteration, I want to just start the process, and after everything is loaded and initialized, `attach` the debugger to it - this brings down startup time to <10 sec. 
However I can't find good documentation on how to do this: I was able to set a preLaunchTask as: ``` { "label": "Start Chat Servers", "type": "shell", "command": "poetry run python -Xfrozen_modules=off -m uvicorn mymodule:myapp", "options": { "cwd": "${workspaceFolder}/project", "env": { "DEBUG": "true" } }, "problemMatcher": [], "isBackground": true } ] ``` And reference it in the debug config: ``` { "name": "Chat Servers", "type": "debugpy", "request": "attach", "connect": { "host": "localhost", "port": 5678, }, "preLaunchTask": "Start Chat Server", "justMyCode": false } ``` This "works", however I get the following error message: ![image](https://github.com/user-attachments/assets/53f25b0e-463b-4c12-8f94-94a4a5040f73) If I click on "debug anyway", it works as long as I gave enough time for the server to start and reach the point where we listen for the debugger. Otherwise, or if I click on "remember my choice", it fails with a weird error: ![image](https://github.com/user-attachments/assets/92ce7292-8c85-4eed-8b9b-06a918bb65e4) I couldn't find any information on adding a timeout to the attach or on a way to wait for a specific event before trying to attach... If it is possible to do this, maybe it would be good to add some info to the docs.
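For reference, VS Code's documented mechanism for this is a background task whose `problemMatcher` carries a `background` section: with `isBackground: true`, the debug session waits to launch until a line matching `endsPattern` appears in the task output, which avoids both the "task never completes" warning and the attach race. A hedged sketch for the task above (the uvicorn log patterns are guesses; adjust them to what the server actually prints):

```
{
  "label": "Start Chat Servers",
  "type": "shell",
  "command": "poetry run python -Xfrozen_modules=off -m uvicorn mymodule:myapp",
  "isBackground": true,
  "problemMatcher": {
    "pattern": [
      { "regexp": ".", "file": 1, "location": 2, "message": 3 }
    ],
    "background": {
      "activeOnStart": true,
      "beginsPattern": "Started server process",
      "endsPattern": "Application startup complete"
    }
  }
}
```

The dummy `pattern` is only there because the schema requires one; the `endsPattern` match is what signals readiness to the debugger.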
bug
low
Critical
2,501,215,832
deno
Improve error message when TLS certificate is not trusted to suggest running with `DENO_TLS_CA_STORE=mozilla,system`
null
feat,cli,tls
low
Critical
2,501,229,458
flutter
[tool_crash] _TypeError: (#0 _printIncompatibleJavaAgpGradleVersionsWarning (package:flutter_tools/src/commands/create.dart:962:115))
`_getBuildGradleConfigurationFilePaths` is null with `--template=package_ffi` (and possibly other templates). cc @camsim99 ## Command ```sh flutter config --enable-native-assets flutter create --template=package_ffi my_package_2 ``` ## Steps to Reproduce 1. Make sure you have an incompatible java version 2. Run `flutter config --enable-native-assets` 3. Run `flutter create --template=package_ffi foo` ## Logs _TypeError: (#0 _printIncompatibleJavaAgpGradleVersionsWarning (package:flutter_tools/src/commands/create.dart:962:115)) ```console #0 _printIncompatibleJavaAgpGradleVersionsWarning (package:flutter_tools/src/commands/create.dart:962:115) #1 CreateCommand.runCommand (package:flutter_tools/src/commands/create.dart:526:7) <asynchronous suspension> #2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1450:27) <asynchronous suspension> #3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19) <asynchronous suspension> #4 CommandRunner.runCommand (package:args/command_runner.dart:212:13) <asynchronous suspension> #5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:421:9) <asynchronous suspension> #6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19) <asynchronous suspension> #7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5) <asynchronous suspension> #8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9) <asynchronous suspension> #9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19) <asynchronous suspension> #10 main (package:flutter_tools/executable.dart:94:3) <asynchronous suspension> ``` ```console [✓] Flutter (Channel main, 3.25.0-1.0.pre.219, on macOS 14.6.1 23G93 darwin-arm64, locale en) • Flutter version 3.25.0-1.0.pre.219 on channel main at 
/Users/dacoharkes/flt/engine/flutter • Upstream repository git@github.com:flutter/flutter • Framework revision 6fe09872b1 (3 days ago), 2024-08-30 19:53:11 -0400 • Engine revision 2d56e44888 • Dart version 3.6.0 (build 3.6.0-198.0.dev) • DevTools version 2.39.0 [!] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /Users/dacoharkes/Library/Android/sdk • Platform android-34, build-tools 34.0.0 • ANDROID_HOME = /Users/dacoharkes/Library/Android/sdk • ANDROID_SDK_ROOT = /Users/dacoharkes/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jre/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866) ✗ Android license status unknown. Run `flutter doctor --android-licenses` to accept the SDK licenses. See https://flutter.dev/to/macos-android-setup for more details. [✓] Xcode - develop for iOS and macOS (Xcode 15.0.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 15A507 • CocoaPods version 1.15.2 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2021.3) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866) [✓] VS Code (version 1.92.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.94.0 [✓] Connected device (4 available) • Daco’s iPhone (mobile) • 00008110-001410883A78401E • ios • iOS 17.5.1 21F90 • macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64 • Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.113 [✓] Network 
resources • All expected network resources are available. ! Doctor found issues in 1 category. ``` ## Flutter Application Metadata No pubspec in working directory.
c: crash,tool,P2,team-tool,triaged-tool
low
Critical
2,501,233,696
pytorch
PyTorch XPU Windows build fails in a cmake re-run loop due to a deep source code path
### 🐛 Describe the bug When we follow the steps below to build the XPU PyTorch wheel on Windows, if the PyTorch source code path is too long (>52 characters in total), cmake will re-run 100 times and fail in the end. Steps: 1. install MSVC 2022 2. install xpu support package by following https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-5.html 3. Build ``` set VS2022INSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Community "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" set USE_KINETO=0 python setup.py bdist_wheel ``` With Ninja ``` ninja: error: manifest 'build.ninja' still dirty after 100 tries, perhaps system time is not set ``` Cmake only ``` CMake is re-running because generate.stamp is out-of-date. Check the system time… ``` ### Versions ``` Collecting environment information... PyTorch version: 2.5.0a0+git3f3774a Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Enterprise GCC version: Could not collect Clang version: Could not collect CMake version: version 3.24.1 Libc version: N/A Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:29:51) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22631-SP0 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=2400 DeviceID=CPU0 Family=207 L2CacheSize=14336 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=2400 Name=12th Gen Intel(R) Core(TM) i9-12900 ProcessorType=3 Revision= Versions of relevant libraries: [pip3] numpy==2.1.0 [pip3] optree==0.12.1 [pip3] torch==2.5.0a0+git3f3774a [conda] mkl-include 2024.2.1 pypi_0 pypi [conda] mkl-static 2024.2.1 pypi_0 pypi [conda] numpy 2.1.0 pypi_0 pypi [conda] optree 
0.12.1 pypi_0 pypi [conda] torch 2.5.0a0+git3f3774a pypi_0 pypi ``` cc @gujinghui @EikanWang @fengyuan14 @guangyey
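As a stopgap while the root cause is investigated, a quick pre-build sanity check might look like this. The ~52-character threshold is taken from the observation in this report, not from any documented cmake or MSVC limit:

```python
# Threshold observed in this report (">52 in total"); it is an
# empirical observation, not a documented cmake or MSVC limit.
MAX_SAFE_PATH_LEN = 52

def path_too_deep(path: str, limit: int = MAX_SAFE_PATH_LEN) -> bool:
    """Return True when a checkout path is long enough to risk the
    cmake re-run loop described in this report."""
    return len(path) > limit

if __name__ == "__main__":
    import sys
    checkout = sys.argv[1] if len(sys.argv) > 1 else "."
    if path_too_deep(checkout):
        print(f"warning: {checkout!r} is {len(checkout)} chars long; "
              f"consider a shorter checkout path, e.g. C:\\pt")
```

Moving the checkout to a short path such as `C:\pt` works around the re-run loop until the underlying path-length handling is fixed.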
triaged,module: xpu
low
Critical
2,501,234,531
godot
`Area3D`'s reverb bus makes `AudioStreamPlayer3D` not output to its own bus.
### Tested versions - Reproducible in: every version since 4.0.stable ### System information Godot v4.4.dev (88197d4a5) - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 (NVIDIA; 31.0.15.4617) - AMD Ryzen 5 3600 6-Core Processor (12 Threads) ### Issue description When an `AudioStreamPlayer3D` is inside an `Area3D` that has its reverb bus enabled, the correct amount of the audio is sent to the bus set as the reverb bus, but the player stops sending any signal to its own bus. Example: If we have a single `AudioStreamPlayer3D` in a scene set to the "Dry" bus, and it's inside an `Area3D` that has its reverb bus set to the "Wet" bus, the "Dry" bus will become silent. ![image](https://github.com/user-attachments/assets/3f993963-0e05-4610-817b-6104f8522833) *Silent "Dry" bus when inside Area3D with "Wet" reverb bus.* https://github.com/user-attachments/assets/1ec0a6f2-b101-4927-b1da-12283e6ed927 From what I understand, according to [these docs](https://docs.godotengine.org/en/latest/tutorials/audio/audio_streams.html#reverb-buses), this was meant to still send the signal to the player's bus, with the purpose of it being the "dry" signal and the reverb bus being the "wet" signal. If I misunderstood this and what I described in this issue is the expected behavior, feel free to close the issue. ### Steps to reproduce - Add an `AudioStreamPlayer3D` node to the scene and assign it bus A - Add an `Area3D` node to the scene, enable the reverb bus, and set it to bus B - Make sure the `AudioStreamPlayer3D` is physically inside the `Area3D` - Assign any AudioStream to the `AudioStreamPlayer3D` and start playing it. The same happens in the editor and in-game. ### Minimal reproduction project (MRP) [area3d_reverb.zip](https://github.com/user-attachments/files/16839375/area3d_reverb.zip)
needs testing,topic:audio
low
Minor
2,501,241,023
pytorch
DISABLED test_random_no_reused_random_states_float64 (__main__.TestCuda)
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_random_no_reused_random_states_float64&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29554997762). Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 15 failures and 5 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_random_no_reused_random_states_float64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1047, in test_random_no_reused_random_states run(func, torch.device("cuda"), dtype) File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1042, in run return torch.stack([t1, t2]).unique().shape[0] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 991, in unique return torch.unique( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_jit_internal.py", line 624, in fn return if_false(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_jit_internal.py", line 624, in fn return if_false(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/functional.py", line 1075, in _return_output output, _, _ = _unique_impl(input, sorted, return_inverse, return_counts, dim) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/functional.py", line 968, in _unique_impl 
output, inverse_indices, counts = torch._unique2( RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default) To execute this test, run the following from the base repo dir: PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_cuda.py TestCuda.test_random_no_reused_random_states_float64 This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_cuda.py` cc @ptrblck @msaroufim @clee2000
module: cuda,triaged,module: flaky-tests,skipped
low
Critical
2,501,249,167
PowerToys
Mouse utilities: Make Mouse Pointer Crosshairs Use the Windows Accent Color
### Description of the new feature / enhancement Instead of choosing a crosshairs color, we'd like to make Mouse Pointer Crosshairs use the Windows accent color, please. ### Scenario when this would be used? Under Mouse Pointer Crosshairs Appearance & behavior, a user can select two options: 1) Use Windows Accent Color; and 2) Use Custom Color. ### Supporting information If Use Windows Accent Color is selected, it will have the same color as the Windows accent color, which is used on taskbar, action center, and title-bars.
Idea-Enhancement,Needs-Triage,Product-Mouse Utilities
low
Minor
2,501,252,725
godot
Gizmo `get_material` editor crash: `ERROR: Parameter "data.tree" is null.`
### Tested versions Reproducible in: - v4.4.dev.custom_build [88197d4a5] - v4.3.stable.official [77dcf97d8] - v4.0.stable.official [92bee43ad] ### System information Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Mobile) - dedicated NVIDIA GeForce GTX 980 Ti (NVIDIA; 31.0.15.3598) - 13th Gen Intel(R) Core(TM) i7-13700K (24 Threads) ### Issue description Calling `get_material` on a gizmo's `_redraw` can cause a crash if the call to `_redraw` happens at a specific time. It crashes with the error message: ``` ERROR: Parameter "data.tree" is null. at: Node::get_tree (C:\Personal\Godot\godot_src\scene/main/node.h:471) ================================================================ CrashHandlerException: Program crashed Engine version: Godot Engine v4.4.dev.custom_build (88197d4a513a7873cb7cf85138a42a04fb1c9011) Dumping the backtrace. Please include this when reporting the bug to the project developer. [0] SceneTree::get_edited_scene_root (C:\Personal\Godot\godot_src\scene\main\scene_tree.cpp:1449) [1] EditorNode3DGizmo::is_editable (C:\Personal\Godot\godot_src\editor\plugins\node_3d_editor_gizmos.cpp:45) [2] EditorNode3DGizmoPlugin::get_material (C:\Personal\Godot\godot_src\editor\plugins\node_3d_editor_gizmos.cpp:1016) [3] call_with_validated_variant_args_ret_helper<EditorNode3DGizmoPlugin,Ref<StandardMaterial3D>,String const &,Ref<EditorNode3DGizmo> const &,0,1> (C:\Personal\Godot\godot_src\core\variant\binder_common.h:375) [4] call_with_validated_object_instance_args_ret<EditorNode3DGizmoPlugin,Ref<StandardMaterial3D>,String const &,Ref<EditorNode3DGizmo> const &> (C:\Personal\Godot\godot_src\core\variant\binder_common.h:663) [5] MethodBindTR<EditorNode3DGizmoPlugin,Ref<StandardMaterial3D>,String const &,Ref<EditorNode3DGizmo> const &>::validated_call (C:\Personal\Godot\godot_src\core\object\method_bind.h:538) [6] GDScriptFunction::call (C:\Personal\Godot\godot_src\modules\gdscript\gdscript_vm.cpp:2108) [7] GDScriptInstance::callp 
(C:\Personal\Godot\godot_src\modules\gdscript\gdscript.cpp:2035) [8] Object::callp (C:\Personal\Godot\godot_src\core\object\object.cpp:789) [9] Variant::callp (C:\Personal\Godot\godot_src\core\variant\variant_call.cpp:1211) [10] GDScriptFunction::call (C:\Personal\Godot\godot_src\modules\gdscript\gdscript_vm.cpp:1783) [11] GDScriptInstance::callp (C:\Personal\Godot\godot_src\modules\gdscript\gdscript.cpp:2035) [12] Node::_gdvirtual__process_call<0> (C:\Personal\Godot\godot_src\scene\main\node.h:376) [13] Node::_notification (C:\Personal\Godot\godot_src\scene\main\node.cpp:56) [14] Node::_notificationv (C:\Personal\Godot\godot_src\scene\main\node.h:50) [15] Node3D::_notificationv (C:\Personal\Godot\godot_src\scene\3d\node_3d.h:52) [16] Object::notification (C:\Personal\Godot\godot_src\core\object\object.cpp:876) [17] SceneTree::_process_group (C:\Personal\Godot\godot_src\scene\main\scene_tree.cpp:1023) [18] SceneTree::_process (C:\Personal\Godot\godot_src\scene\main\scene_tree.cpp:1100) [19] SceneTree::process (C:\Personal\Godot\godot_src\scene\main\scene_tree.cpp:582) [20] Main::iteration (C:\Personal\Godot\godot_src\main\main.cpp:4329) [21] OS_Windows::run (C:\Personal\Godot\godot_src\platform\windows\os_windows.cpp:1781) [22] widechar_main (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:181) [23] _main (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:206) [24] main (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:220) [25] WinMain (C:\Personal\Godot\godot_src\platform\windows\godot_windows.cpp:234) [26] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288) [27] <couldn't map PC to fn name> -- END OF BACKTRACE -- ================================================================ ``` ### Steps to reproduce 1. Open MRP 2. Add a `Node3D` to the scene 3. 
Delete the newly added `Node3D` ### Minimal reproduction project (MRP) [Gizmo-crash.zip](https://github.com/user-attachments/files/16839457/Gizmo-crash.zip)
bug,topic:editor,crash
low
Critical
2,501,252,744
godot
Instant freeze when using the Vulkan Forward+ renderer
### Tested versions v4.3.stable.official.77dcf97d8 Appeared when upgrading to 4.3 ### System information Fedora release 40 (Forty) - Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org Vulkan 1.3.278 - Forward+ - Using Device #0: Intel - Intel(R) HD Graphics 5500 (BDW GT2) ### Issue description Godot freezes immediately when editing or running a project using the Forward+ Vulkan renderer. The project selection menu works. Due to the nature of the freeze, I think it may be related to the Vulkan swapchain somehow. The driver I'm using has tripped up some of my projects in the past because it doesn't accept double buffering: ``` 143- ------------------------- 144- minImageCount = 3 145: maxImageCount = 0 -- 175- PRESENT_MODE_IMMEDIATE_KHR: 176- minImageCount = 4 177: maxImageCount = 0 -- 198- PRESENT_MODE_MAILBOX_KHR: 199- minImageCount = 4 200: maxImageCount = 0 -- 221- PRESENT_MODE_FIFO_KHR: 222- minImageCount = 3 223: maxImageCount = 0 -- 244- PRESENT_MODE_FIFO_RELAXED_KHR: 245- minImageCount = 3 246: maxImageCount = 0 -- ``` (info extracted from vkinfo) I tried looking into presenting / swapchain creation code to see if I could find anything, but no luck so far. ### Steps to reproduce Edit, create & edit, or run a project using the Forward+ renderer. ### Minimal reproduction project (MRP) N/A
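For context, swapchain creation normally clamps the requested image count against the surface capabilities reported above: with this driver reporting `minImageCount = 3`, any renderer that requests only 2 images must bump its request. A sketch of the standard clamp (this illustrates the Vulkan rule, not Godot's actual swapchain code):

```python
def clamp_image_count(desired: int, min_count: int, max_count: int) -> int:
    """Clamp a requested swapchain image count to the surface capabilities.

    Per the Vulkan spec, maxImageCount == 0 means there is no upper limit.
    """
    count = max(desired, min_count)
    if max_count != 0:
        count = min(count, max_count)
    return count
```

On this driver, a double-buffering request of 2 would be clamped up to 3; if a renderer passed the unclamped value straight to `vkCreateSwapchainKHR`, creation or presentation could misbehave, which is consistent with the suspicion above.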
bug,topic:rendering,needs testing
low
Minor
2,501,289,432
flutter
Autocomplete field options in a SingleChildScrollView remain open even when the field goes out of view.
### Steps to reproduce 1. Wrap the Autocomplete field in a SingleChildScrollView. 2. Click on the field as if you are selecting options. 3. Scroll to make the field go out of view. Remember to scroll with two fingers (not by clicking and dragging the scrollbar). 4. You will notice the options overlay remains open and follows the field, which is not even visible. ### Expected results Options should close as soon as the field goes out of view. ### Actual results It remains open. ### Code sample https://dartpad.dev/?id=61735dfd0bbd1de1211510d4cdab387d ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> ![image](https://github.com/user-attachments/assets/0e2cb92b-984f-4d2a-824f-2976852295af) </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console Dart SDK 3.5.1 and Flutter SDK 3.24.1 ``` </details>
framework,f: material design,platform-web,a: desktop,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.25
low
Major
2,501,311,391
godot
Certain GLTF imports have broken normals
_Originally posted by @rafrafek in https://github.com/godotengine/godot/issues/85406#issuecomment-2325031951_ I tried to reproduce it in an MRP and now, instead of having dark strips, it becomes entirely darker when rotating: https://github.com/rafrafek/normals-demo ![image](https://github.com/user-attachments/assets/47fa3485-a64f-44c5-9954-65df8f9c9dd7)
bug,topic:import
low
Critical
2,501,334,850
godot
Parenting animated nodes to a _vehicle node causes glTF animation import to fail
### Tested versions - Reproducible in v4.3.stable.steam [77dcf97d8] and v4.2.2.stable.official [15073afe3] ### System information Windows 11 Home, Mobile, RTX 3080 Ti, 11th Gen Intel i7-11700k ### Issue description When creating 3D vehicles using 3D modeling and animation software, it is possible to create vehicles using the `_vehicle` and `_wheel` node suffixes to designate specific nodes as either vehicles or wheels, respectively. However, if an animated node of any kind is parented to or a descendant of a `_vehicle` tagged node, it causes the animation to fail to import properly. In the case of 4.2.2, the animations do not import at all. In 4.3, animations will import, but are obviously using the incorrect node paths during the import. This causes Output to be spammed with errors and warnings similar to the following: ``` <--- Error Sample ---> Node not found: "Node/root/body_vehicle/left_tread_12" (relative to "blockbench_export"). Node not found: "Node/root/body_vehicle/left_tread_12" (relative to "blockbench_export"). Node not found: "Node/root/body_vehicle/left_tread_22" (relative to "blockbench_export"). Node not found: "Node/root/body_vehicle/left_tread_22" (relative to "blockbench_export"). Node not found: "Node/root/body_vehicle/left_tread_32" (relative to "blockbench_export"). Node not found: "Node/root/body_vehicle/left_tread_32" (relative to "blockbench_export"). <--- Warning Sample ---> scene/animation/animation_mixer.cpp:668 - AnimationMixer (at: tiny-tank.gltf): 'forward', couldn't resolve track: 'Node/root/body_vehicle/left_tread_12'. This warning can be disabled in Project Settings. (User) scene/animation/animation_mixer.cpp:668 - AnimationMixer (at: tiny-tank.gltf): 'forward', couldn't resolve track: 'Node/root/body_vehicle/left_tread_12'. This warning can be disabled in Project Settings. 
(User) scene/animation/animation_mixer.cpp:668 - AnimationMixer (at: tiny-tank.gltf): 'forward', couldn't resolve track: 'Node/root/body_vehicle/left_tread_22'. This warning can be disabled in Project Settings. (User) scene/animation/animation_mixer.cpp:668 - AnimationMixer (at: tiny-tank.gltf): 'forward', couldn't resolve track: 'Node/root/body_vehicle/left_tread_22'. This warning can be disabled in Project Settings. (User) scene/animation/animation_mixer.cpp:668 - AnimationMixer (at: tiny-tank.gltf): 'forward', couldn't resolve track: 'Node/root/body_vehicle/left_tread_32'. This warning can be disabled in Project Settings. (User) scene/animation/animation_mixer.cpp:668 - AnimationMixer (at: tiny-tank.gltf): 'forward', couldn't resolve track: 'Node/root/body_vehicle/left_tread_32'. This warning can be disabled in Project Settings. (User) ``` ![image](https://github.com/user-attachments/assets/90e4882d-5cd5-426f-9c11-d58c157efb39) It appears that the importer, when trying to import animations, is unaware that the `_vehicle` node name has changed from `*_vehicle` to `*` and is trying to associate the incoming animations with a node path that does not exist in the model anymore. In this case, I would expect Godot would be aware that the `_vehicle` node has been renamed from (in my case) `body_vehicle` to `body` and to properly associate the animated nodes to their proper parent `body` instead of `body_vehicle`. ### Steps to reproduce 1. Create a 3D vehicle model in the animation software of your choice. I used blockbench to create the model used in the MRP. 2. Tag the vehicle body by appending the `_vehicle` suffix to the node name. 3. Tag the vehicle wheels by appending the `_wheel` suffix to the corresponding node names. 4. Animate an element parented to the vehicle body (or to one of its descendants). 5. Export the model with animations to the glTF format. 6. Import the model into Godot engine. 7. 
Observe console errors and warnings as described in issue description. Observe that animations are missing (4.2.2) or broken (4.3.x). ### Minimal reproduction project (MRP) [vehicle_import_test.zip](https://github.com/user-attachments/files/16839968/vehicle_import_test.zip)
bug,topic:import,topic:3d
low
Critical
2,501,352,672
rust
Confusing suggestion to use `cargo:rustc-link-lib` in a cargo project with a lib and bins
### Problem -- I created this repo to showcase/reproduce the error: https://github.com/kusnezoff-alexander/cargo-build-linking-bug -- The success of linking C-libraries during `cargo build` seems to depend on the existence of `lib.rs` even if only the binary is built. Packages which offer both a library and a binary could suffer from this potential bug. This bug has only been reproduced so far for linking against a library via ```rust println!("cargo:rustc-link-search=../libs"); println!("cargo:rustc-link-lib=static=mylib"); ``` inside `build.rs`. **Expected behavior**: The existence of a file called `lib.rs` should have no effect on the linking during `cargo build`: **Observed behavior**: 1. Without the existence of `lib.rs`: Linking to the custom C-library does work (see `./linking-without-lib-works` for a working example) 2. With the existence of `lib.rs`: Linking to the custom C-library does not work (see `./linking-with-lib-doesnt` for a failing example) ### Steps 1. `git clone https://github.com/kusnezoff-alexander/cargo-build-linking-bug` 2. `cd libs && make all` 3. `cd linking-does-work && cargo build` - should exit successfully 4. `cd linking-doesnt-work && cargo build` - should throw an error that an extern function-symbol isn't defined (although the only difference to the previous case is the existence of lib.rs) ### Possible Solution(s) _No response_ ### Notes - interestingly though, it does work if `export RUSTFLAGS="-L <absolute-path-to-this-dir>/libs/ -l mylib"` is set ### Version ```text cargo 1.80.1 (376290515 2024-07-16) release: 1.80.1 commit-hash: 37629051518c3df9ac2c1744589362a02ecafa99 commit-date: 2024-07-16 host: x86_64-unknown-linux-gnu libgit2: 1.7.2 (sys:0.18.3 vendored) libcurl: 8.6.0-DEV (sys:0.4.72+curl-8.6.0 vendored ssl:OpenSSL/1.1.1w) ssl: OpenSSL 1.1.1w 11 Sep 2023 os: Fedora 40.0.0 [64-bit] ```
A-linkage,A-diagnostics,T-cargo,D-confusing,D-incorrect
low
Critical
2,501,379,755
yt-dlp
Generic site support: Australian Football League (AFL) websites
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting a new site support request - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required ### Region Worldwide ### Example URLs | URL | Content |:- |:- | AFLW | - | https://www.afl.com.au/aflw/video/all-video | Latest AFLW Videos | https://www.afl.com.au/aflw/podcasts/workplay | Workplay: Beyond the Game | https://www.afl.com.au/aflw/matches/match-videos?comp=2 | AFL Preseason § AFLW Match Videos | AFL | - | https://www.afl.com.au/video/all-video | Latest AFL Videos | https://www.afl.com.au/podcasts/afl-daily | AFL Daily | https://www.afl.com.au/matches/match-videos?comp=16 | Under 18 Boys National Championships § AFL Match Videos | clubs | - | https://www.afc.com.au/thecrowsshow | The Crows Show - Radio § Adelaide Crows Football Club | https://www.afc.com.au/crowsradioshow | The 
Crows Show - TV § Adelaide Crows Football Club | https://www.lions.com.au/video/all-video | Latest Videos of the Brisbane Lions Football Club | https://www.carltonfc.com.au/podcasts/summer-sessions | Summer Sessions § the Carlton Football Club ### Provide a description that is worded well enough to be understood Extended from #2833. These websites are built by Telstra [^1][^2] and they seem to be based on the same framework, so we may be able to support all of them. [^1]: Telstra Group Limited - https://en.wikipedia.org/wiki/Telstra [^2]: Telstra: Broadband Internet, NBN, 5G, TV & Mobile Phone Services - https://www.telstra.com.au/ <br> Single videos are downloadable by the `brightcove:new` embed extractor: ``` [generic] Extracting URL: https://www.lions.com.au/video/1644772/ah-chee-we-believe-our-footy-can-beat-the-best?videoId=164...shFrom=172... [generic] ah-chee-we-believe-our-footy-can-beat-the-best?videoId=1644772&modal=true&type=video&publishFrom=172...: Downloading webpage WARNING: [generic] Falling back on generic information extractor [generic] ah-chee-we-believe-our-footy-can-beat-the-best?videoId=1644772&modal=true&type=video&publishFrom=172...: Extracting information [brightcove:new] 6361371134112: Checking possible brightcove video URL WARNING: [MediaStream] None: Failed to parse JSON: Expecting ',' delimiter in 'Ah Chee: "We believe': line 4 column 32 (char 114) [brightcove:new] Extracting URL: https://players.brightcove.net/6057984922001/pFcMhmjx5_default/index.html?videoId=6361371134112#_...D172...%22%7D [brightcove:new] 6361371134112: Downloading JSON metadata [brightcove:new] 6361371134112: Downloading m3u8 information [info] 6361371134112: Downloading 1 format(s): hls-6190 [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 52 [download] Destination: Ah Chee: "We believe our footy can beat the best" [6361371134112].mp4 ... ``` . 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', 'https://www.afl.com.au/aflw/video/all-video'] [debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] [debug] Lazy loading extractors is disabled [debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.0.13 30 Jan 2024) [debug] exe versions: ffmpeg n6.1.1-7-ga267d4ad4c-20240222 (setts), ffprobe n6.1.1-7-ga267d4ad4c-20240222 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.06.02, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.1, urllib3-2.2.1, websockets-13.0.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1830 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: stable@2024.08.06 from yt-dlp/yt-dlp yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp) [generic] Extracting URL: https://www.afl.com.au/aflw/video/all-video [generic] all-video: Downloading webpage WARNING: [generic] Falling back on generic information extractor [generic] all-video: Extracting information [debug] Looking for embeds ERROR: Unsupported URL: https://www.afl.com.au/aflw/video/all-video Traceback (most recent call last): File ".../yt-dlp/yt_dlp/YoutubeDL.py", line 1626, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".../yt-dlp/yt_dlp/YoutubeDL.py", line 1761, in __extract_info ie_result = ie.extract(url) ^^^^^^^^^^^^^^^ File ".../yt-dlp/yt_dlp/extractor/common.py", line 
740, in extract ie_result = self._real_extract(url) ^^^^^^^^^^^^^^^^^^^^^^^ File ".../yt-dlp/yt_dlp/extractor/generic.py", line 2526, in _real_extract raise UnsupportedError(url) yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.afl.com.au/aflw/video/all-video ```
site-request
low
Critical
2,501,389,703
pytorch
Issue with Weight Synchronization When Using Consecutive DDP Instances in the Same torch.distributed Environment
### 🐛 Describe the bug Hello, I am working on a project where I need to use multiple consecutive instances of DistributedDataParallel (DDP) within the same torch.distributed environment. In my scenario, I initially train a model and later create a second model by reusing a layer from the first one. However, I am observing that when using the second DDP instance for the second model, the weights are not synchronized correctly across multiple GPUs. While the first model seems to work fine, the second model shows discrepancies in the weights between GPUs. Is it possible to have multiple consecutive DDP instances within the same torch.distributed environment? Or are there any special considerations I need to be aware of to ensure the weights are synchronized correctly? I would appreciate any guidance or recommendations on this issue. Thank you. ### Versions - cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
oncall: distributed,triaged
low
Critical
2,501,410,267
PowerToys
In windows 11 Power Toys, I want to change always on top's windows pane's opacity should be adjustable.
### Description of the new feature / enhancement Windows pinned with "always on top" should also have an adjustable opacity. This would make it much easier to work with the window sitting just behind the pinned one. ### Scenario when this would be used? For example: if I am watching an educational video and at the same time want to take notes or draw in OneNote, I keep the video window open in full-screen mode and OneNote in "always on top" mode. The requested feature would be very helpful in this situation, because adjusting the opacity of the always-on-top window lets you see what is happening in the window just behind it. ### Supporting information _No response_
Needs-Triage
low
Minor
2,501,422,040
godot
input_pickable does not open the right documentation page for StaticBody2D
### Tested versions - Reproduced in 4.3.Stable ### System information Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3090 (NVIDIA; 31.0.15.3623) - AMD Ryzen 9 5900X 12-Core Processor (24 Threads) ### Issue description When working with input_pickable, I attempted to check the documentation on the property for my StaticBody2D by ctrl+clicking on the property. This brings you to the CollisionObject2D page, which notes the default for input_pickable as true. That is correct for things like Area2D, but is not correct for nodes which inherit from PhysicsBody2D. This caused me some confusion before I realized that input_pickable was actually false. It would make more sense for input_pickable to open the PhysicsBody2D page for nodes which inherit PhysicsBody2D, or to add a note on the CollisionObject2D page regarding this override. ![image](https://github.com/user-attachments/assets/d8b5f041-3b17-4e10-b237-681a00934c1a) ![image](https://github.com/user-attachments/assets/46263b58-808f-4437-96bb-9d7e41417b87) ![image](https://github.com/user-attachments/assets/5af5f50a-40d9-4f20-8fb7-a41dd9b5abe7) ### Steps to reproduce - add a StaticBody2D node - attach a script - type `input_pickable` - ctrl+click on `input_pickable` - observe issue ### Minimal reproduction project (MRP) [MRP.zip](https://github.com/user-attachments/files/16840427/MRP.zip)
enhancement,discussion,topic:editor
low
Major
2,501,482,624
godot
Shader breaks TileMapLayer Terrain connection
### Tested versions - Reproducible in Versions 4.3 and 4.2.2, both using a TileMap and a TileMapLayer. ### System information Windows 11 Godot 4.3 stable, compatibility mode ### Issue description As soon as the shader is doing anything, the terrain borders break. The tiles surrounded by each other seem to work correctly, but the connection tiles to transparency are way off, as you can see here: https://github.com/user-attachments/assets/c36ace9c-926c-4f7b-a350-4c61143d7642 Applying the shader material is working fine, as long as there is no actual code in the shader. I do call set_cells_terrain_connect by code, but it does look the same as in the editor. https://github.com/user-attachments/assets/1101235a-35c0-4f8f-963a-b987cae3a488 Same in the old version using a TileMap https://github.com/user-attachments/assets/22538627-8c06-48a2-ae22-98a1aae4a8d9 What's happening here? ### Steps to reproduce * Start a new project, compatibility mode * create 2D scene * add a TileMap or TileMapLayer * create a new TileSet * set up a simple terrain, connecting to transparency (Mode "match sides") * draw the terrain to the scene in any way so terrain borders are visible * create a new shader material for the TileMap(Layer)'s CanvasItem * create a new shader script * assign any value to COLOR in fragment() ### Minimal reproduction project (MRP) N/A
bug,topic:2d
low
Minor
2,501,504,059
Python
Create error handling function in linear regression algorithm of ML
### What would you like to share? We can add an error handling function to the linear regression algorithm in the ML folder, so that it can validate its inputs and report errors while still running the linear regression algorithm ### Additional information _No response_
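A minimal sketch of the kind of input validation this could add (function and parameter names here are illustrative, not taken from the repository):

```python
def validate_regression_inputs(x, y):
    """Raise ValueError for inputs that would break simple linear regression."""
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    if len(x) < 2:
        raise ValueError("at least two data points are required")
    if all(v == x[0] for v in x):
        raise ValueError("x values are constant; the slope is undefined")


def linear_regression(x, y):
    """Ordinary least squares fit y = a + b*x, with input validation."""
    validate_regression_inputs(x, y)
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sums of squares used by the closed-form OLS solution.
    ss_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    ss_xx = sum((xi - mean_x) ** 2 for xi in x)
    b = ss_xy / ss_xx
    a = mean_y - b * mean_x
    return a, b
```

With this guard in place, degenerate inputs (mismatched lengths, a single point, or constant x) raise a `ValueError` instead of dividing by zero, while valid inputs are fitted as before.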
awaiting triage
medium
Critical
2,501,505,905
rust
[rustdoc] re-add --disable-minification
the line to change would be in `write_shared.rs:154`: `cx.shared.fs.write(filename, f.minified())` this should use `f.bytes` instead of `f.minified()` if the `--no-minify` flag is specified. this flag would mostly be useful for debugging rustdoc itself, allowing js and css errors to accurately report line numbers.
T-rustdoc,C-enhancement
low
Critical
2,501,516,682
rust
[rustdoc search] kind filters should not be allowed in type-based search
queries like `bool -> fn:new` are nonsense and should not be accepted.
T-rustdoc,C-bug,A-type-based-search,A-rustdoc-search
low
Minor
2,501,518,016
ui
[bug]: Blocks color palette does not work. (no effect changing colors)
### Describe the bug The blocks color palette chooser does not change colors used by the blocks. See screenshot: ![image](https://github.com/user-attachments/assets/1a913bfb-03b7-41c6-bc0c-ebaa966cbdf5) ### Affected component/components website: Blocks ### How to reproduce Go to https://ui.shadcn.com/blocks Choose a color palette ### Codesandbox/StackBlitz link This is on the shadcn website ### Logs _No response_ ### System Info ```bash Irrelevant ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,501,519,454
material-ui
[Grid2] `size` and `gap` should work when direction is `column` or `column-reverse`
### Summary Since V6 Grid became deprecated and Grid2 recommended. Unfortunately it's not possible to convert from Grid to Grid2 since Grid2 doesn't support the direction property. So no column direction is allowed. Goal: Grid2 items can be also arranged in column direction. ### Examples Example with deprecated Grid and new Grid2. Both Grids should look the same but Grid2 can't handle height while direction is set to column: https://stackblitz.com/edit/stackblitz-starters-6zeyer?file=src%2FApp.tsx ### Motivation Refactor Code from Grid to Grid2 **Search keywords**: Grid2 Grid direction column
new feature,waiting for 👍,component: Grid
low
Major
2,501,523,713
pytorch
Add the pages of `torch.e` and `torch.pi` to PyTorch doc
### 📚 The doc issue I found `torch.e` and `torch.pi` can be used with PyTorch as shown below: ```python import torch torch.e # 2.718281828459045 torch.pi # 3.141592653589793 ``` But the [PyTorch doc](https://pytorch.org/docs/stable/index.html) doesn't have pages explaining them, so such pages should be created. ### Suggest a potential alternative/fix _No response_ cc @svekars @brycebortree @tstatler @albanD
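For reference, the values printed above are exactly the standard-library constants, so a doc entry would mostly need to state the values and the type (plain Python `float`). The sketch below checks this using only `math`; that `torch.pi`/`torch.e` are re-exports of these constants is an assumption based on the reported output, so `torch` itself is not imported here.

```python
import math

# Values reported for torch.pi and torch.e in the issue above; they match
# the standard-library constants math.pi and math.e exactly.
print(math.pi)  # 3.141592653589793
print(math.e)   # 2.718281828459045
print(type(math.pi) is float)  # True
```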
module: docs,triaged,actionable,module: python frontend
low
Minor
2,501,532,882
godot
ResourceLoader does not find resource when using binary .res format and supplying a type hint.
### Tested versions v4.3.stable.official [77dcf97d8] ### System information Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.5612) - AMD Ryzen 7 1700X Eight-Core Processor (16 Threads) ### Issue description When I supply a type hint AND use a binary format for the ResourceLoader, it can't find the resource. The returned error: ``` E 0:00:01:0649 game.gd:16 @ _ready(): Resource file not found: user://savefile.res (expected type: Savegame) <C++ Error> Condition "!file_check->file_exists(p_path)" is true. Returning: Ref<Resource>() <C++ Source> core/io/resource_loader.cpp:288 @ _load() <Stack Trace> game.gd:16 @ _ready() ``` Here is my code: ``` func _ready() -> void: # WORKS 1: # Filetype .tres # You can add a Type Hint for ResourceLoader or not, both works var my_savegame : Savegame = Savegame.new() ResourceSaver.save(my_savegame, "user://savefile.tres") my_savegame = ResourceLoader.load("user://savefile.tres", "Savegame") # Does NOT work! Error: _ready(): Resource file not found: user://savefile.res (expected type: Savegame) # Filetype .res (binary) # Type Hint for ResourceLoader (if you remove the type hint, it works!) ResourceSaver.save(my_savegame, "user://savefile.res") my_savegame = ResourceLoader.load("user://savefile.res", "Savegame") ``` ### Steps to reproduce 1. Open the Project 2. Press play 3. Look at the Error log 1. Then open game.gd 2. remove the type hint in line 16 3. Press play again 4. Now it will work, no error dropped ### Minimal reproduction project (MRP) [test_resourceloader_hint.zip](https://github.com/user-attachments/files/16840965/test_resourceloader_hint.zip)
bug,topic:core
low
Critical
2,501,535,307
rust
ICE: `deadlock detected as we're unable to find a query cycle to break`
<!-- Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for how to create smaller examples. http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/ --> ### Code rustc -Zthreads=50 --crate-type=lib,--cap-lints=warn,-Zmir-opt-level=5,-Zvalidate-mir,--edition=2021,-Zlint-mir,-Cdebuginfo=2,-Clink-dead-code=true,-Zthreads=16,-Zwrite-long-types-to-disk=no ```Rust type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; ``` ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.83.0-nightly (94885bc69 2024-09-01) binary: rustc commit-hash: 94885bc699512cfee8560e73c2a01ee6b4b76563 commit-date: 2024-09-01 host: x86_64-unknown-linux-gnu release: 1.83.0-nightly LLVM version: 19.1.0 ``` ### Error output ``` .... 
warning: unnecessary trailing semicolons --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:352 | 1 | ...;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^ help: remove these semicolons warning: unnecessary trailing semicolons --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:347 | 1 | ...{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^ help: remove these semicolons warning: unnecessary trailing semicolons --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:377 | 1 | ...;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^ help: remove these semicolons error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:268 | 1 | ... 
Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^-------------------------------------------------------------------------------------------------------------------------------------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:261 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:300 | 1 | ...Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^--------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:293 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< 
{;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:46 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^--------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:39 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:337 | 1 | ...= Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^------------------------------------------------------ help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:330 | 1 | ...;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:235 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^--------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:228 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:205 | 1 | ...{;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:198 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:371 | 1 | ...;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^--------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:364 | 1 | ...{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:172 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^--------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:165 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:79 | 1 | ...= Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::A... 
| ^^^----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:72 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:142 | 1 | ...Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >... 
| ^^^-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:135 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^ error[E0107]: type alias takes 0 generic arguments but 1 generic argument was supplied --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:109 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^--------- help: remove the unnecessary generics | | | expected 0 generic arguments | note: type alias defined here, with 0 generic parameters --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:102 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^ error[E0391]: cycle detected when expanding type alias `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:371 | 1 | ...ype Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^^^^^^^^^^ | = note: ...which immediately requires expanding type alias `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` again = note: type aliases cannot be recursive = help: consider using a struct, enum, or union instead to break the cycle = help: see <https://doc.rust-lang.org/reference/types.html#recursive-types> for more information note: cycle used when checking that `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` is well-formed --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:359 | 1 | ...{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error[E0391]: cycle detected when expanding type alias `KooArc::{constant#0}::Frc` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:79 | 1 | ...= Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc 
= Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::A... | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: ...which immediately requires expanding type alias `KooArc::{constant#0}::Frc` again = note: type aliases cannot be recursive = help: consider using a struct, enum, or union instead to break the cycle = help: see <https://doc.rust-lang.org/reference/types.html#recursive-types> for more information note: cycle used when checking that `KooArc::{constant#0}::Frc` is well-formed --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:67 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error[E0391]: cycle detected when expanding type alias `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:205 | 1 | ...;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: ...which immediately requires expanding type alias `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` again = note: type aliases cannot be recursive = help: consider using a struct, enum, or union instead to break the cycle = help: see <https://doc.rust-lang.org/reference/types.html#recursive-types> for more information note: cycle used when checking that `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` is well-formed --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:193 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error[E0391]: cycle detected when expanding type alias `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:268 | 1 | ...::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;};;;}type Frc = Frc< {;{{{;;;};;;};;;}type Frc = Frc< {;;;} >::Arc ;;} >::Arc ;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ;;;} >::Arc ; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: ...which immediately requires expanding type alias `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` again = note: type aliases cannot be recursive = help: consider using a struct, enum, or union instead to break the cycle = help: see <https://doc.rust-lang.org/reference/types.html#recursive-types> for more information note: cycle used when checking that `KooArc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc::{constant#0}::Frc` is well-formed --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:256 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... 
| ^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error[E0391]: cycle detected when expanding type alias `KooArc::{constant#0}::Frc` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/active/corpus/14/14be07dbc4aa624f2dfd58b763fb7a3dd12b3748.rs:1:46 | 1 | type KooArc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {{;{{;;;};;;}type Frc = Frc< {;;;} >::Arc ;;}type Frc = Frc< {;{{;{{;;;};;;}type Frc = Frc< {;;;}... | ^^^^^^^^^^^^ | = note: ...which immediately requires expanding type alias `KooArc::{constant#0}::Frc` again ```
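The E0391 notes in the log above say that type aliases cannot be recursive and suggest a struct, enum, or union to break the cycle. As a minimal illustration of that suggested fix (not taken from the issue itself), a nominal type may refer to itself as long as there is pointer indirection to keep it finite:

```rust
// `type Frc = Frc<...>` is rejected because type aliases cannot be recursive.
// A struct can be self-referential when the recursion goes through a Box,
// which is the shape of fix the E0391 help text points at.
struct Frc {
    next: Option<Box<Frc>>, // self-reference is fine behind a pointer
}

fn main() {
    // Build a two-link chain and walk it.
    let chain = Frc {
        next: Some(Box::new(Frc { next: None })),
    };
    assert!(chain.next.unwrap().next.is_none());
}
```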
I-ICE,T-compiler,C-bug,E-needs-mcve,WG-compiler-parallel
low
Critical
2,501,537,924
rust
deadlock detected as we're unable to find a query cycle to break
<!-- Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for how to create smaller examples. http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/ --> ### Code rustc -Zdump-mir-dir=/tmp/icemaker_global_tempdir.Yy4k6YWiomMm/rustc_testrunner_tmpdir.tq4vky7rztOO,-o/tmp/icemaker_global_tempdir.Yy4k6YWiomMm/rustc_testrunner_tmpdir_discover.Hwye5zAhebke/outputfile,--cap-lints=warn,-Zmir-opt-level=5,-Zvalidate-mir,--edition=2021,-Zlint-mir,-Cdebuginfo=2,-Clink-dead-code=true,-Zthreads=16,-Zwrite-long-types-to-disk=no ```Rust // Test that impl trait does not allow creating recursive types that are // otherwise forbidden. #![feature(generators)] #![allow(unconditional_recursion)] fn option(i: i32) -> impl Sync { //~^ ERROR cannot resolve opaque type if generator_sig() < 0 { None } else { Sized((option(i - Sized), i)) } } fn tuple() -> impl Sized { //~^ ERROR (tuple(),) } fn array() -> _ { //~^ ERROR [array()] } fn ptr() -> _ { //~^ ERROR &ptr() as *const impl Sized } fn fn_ptr() -> impl Sized { //~^ ERROR fn_ptr as fn() -> _ } fn closure_capture() -> impl Sized { //~^ ERROR let x = closure_capture(); move || { x; } } fn closure_ref_capture() -> impl Sized { //~^ ERROR let x = closure_ref_capture(); move || { &x; } } fn closure_sig() -> _ { //~^ ERROR || closure_sig() } fn generator_sig() -> impl Sized { //~^ ERROR || i } fn generator_capture() -> impl i32 { //~^ ERROR let x = 1(); move || { yield; x; } } fn substs_change<T: 'static>() -> impl Sized { //~^ ERROR (substs_change::<&T>(),) } fn generator_hold() -> impl generator_capture { //~^ ERROR move || { let x = (); yield; x virtual ; } } fn use_fn_ptr() -> impl Sized { // OK, error already reported fn_ptr() } fn mutual_recursion() -> impl Sync { //~^ ERROR mutual_recursion_b() } fn mutual_recursion_b() -> impl Sized { //~^ ERROR mutual_recursion() } fn main() {} ``` ### Meta <!-- If you're using the 
stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.83.0-nightly (94885bc69 2024-09-01) binary: rustc commit-hash: 94885bc699512cfee8560e73c2a01ee6b4b76563 commit-date: 2024-09-01 host: x86_64-unknown-linux-gnu release: 1.83.0-nightly LLVM version: 19.1.0 ``` ### Error output ``` error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found reserved keyword `virtual` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:77:11 | 77 | x virtual ; | ^^^^^^^ expected one of 8 possible tokens error[E0557]: feature has been removed --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:4:12 | 4 | #![feature(generators)] | ^^^^^^^^^^ feature has been removed | = note: renamed to `coroutines` error[E0423]: expected value, found trait `Sized` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:9:62 | 9 | if generator_sig() < 0 { None } else { Sized((option(i - Sized), i)) } | ^^^^^ not a value error[E0425]: cannot find value `i` in this scope --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:55:8 | 55 | || i | ^ not found in this scope error[E0404]: expected trait, found builtin type `i32` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:58:32 | 58 | fn generator_capture() -> impl i32 { | ^^^ not a trait error[E0404]: expected trait, found function `generator_capture` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:72:29 | 72 | fn 
generator_hold() -> impl generator_capture { | ^^^^^^^^^^^^^^^^^ not a trait error[E0658]: yield syntax is experimental --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:62:9 | 62 | yield; | ^^^^^ | = note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information = help: add `#![feature(coroutines)]` to the crate attributes to enable = note: this compiler was built on 2024-09-02; consider upgrading it if it is out of date error[E0658]: yield syntax is experimental --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:76:9 | 76 | yield; | ^^^^^ | = note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information = help: add `#![feature(coroutines)]` to the crate attributes to enable = note: this compiler was built on 2024-09-02; consider upgrading it if it is out of date error[E0562]: `impl Trait` is not allowed in cast expression types --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:24:22 | 24 | &ptr() as *const impl Sized | ^^^^^^^^^^ | = note: `impl Trait` is only allowed in arguments and return types of functions and methods error[E0658]: yield syntax is experimental --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:62:9 | 62 | yield; | ^^^^^ | = note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information = help: add `#![feature(coroutines)]` to the crate attributes to enable = note: this compiler was built on 2024-09-02; consider upgrading it if it is out of date error: `yield` can only be used in `#[coroutine]` closures, or `gen` blocks --> 
/home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:62:9 | 62 | yield; | ^^^^^ | help: use `#[coroutine]` to make this closure a coroutine | 61 | #[coroutine] move || { | ++++++++++++ error[E0658]: yield syntax is experimental --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:76:9 | 76 | yield; | ^^^^^ | = note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information = help: add `#![feature(coroutines)]` to the crate attributes to enable = note: this compiler was built on 2024-09-02; consider upgrading it if it is out of date error: `yield` can only be used in `#[coroutine]` closures, or `gen` blocks --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:76:9 | 76 | yield; | ^^^^^ | help: use `#[coroutine]` to make this closure a coroutine | 74 | #[coroutine] move || { | ++++++++++++ error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:27:16 | 27 | fn fn_ptr() -> impl Sized { | ^^^^^^^^^^ recursive opaque type 28 | //~^ ERROR 29 | fn_ptr as fn() -> _ | ------------------- returning here with type `fn() -> impl Sized` error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:12:15 | 12 | fn tuple() -> impl Sized { | ^^^^^^^^^^ recursive opaque type 13 | //~^ ERROR 14 | (tuple(),) | ---------- returning here with type `(impl Sized,)` error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:40:29 | 40 | fn 
closure_ref_capture() -> impl Sized { | ^^^^^^^^^^ recursive opaque type ... 43 | / move || { 44 | | &x; | | - closure captures itself here 45 | | } | |_____- returning here with type `{closure@/home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:43:5: 43:12}` error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:67:35 | 67 | fn substs_change<T: 'static>() -> impl Sized { | ^^^^^^^^^^ recursive opaque type 68 | //~^ ERROR 69 | (substs_change::<&T>(),) | ------------------------ returning here with type `(impl Sized,)` error[E0369]: binary operation `<` cannot be applied to type `impl Sized` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:9:24 | 9 | if generator_sig() < 0 { None } else { Sized((option(i - Sized), i)) } | --------------- ^ - {integer} | | | impl Sized error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:32:25 | 32 | fn closure_capture() -> impl Sized { | ^^^^^^^^^^ recursive opaque type ... 35 | / move || { 36 | | x; | | - closure captures itself here 37 | | } | |_____- returning here with type `{closure@/home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:35:5: 35:12}` error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:86:26 | 86 | fn mutual_recursion() -> impl Sync { | ^^^^^^^^^ recursive opaque type 87 | //~^ ERROR 88 | mutual_recursion_b() | -------------------- returning here with type `impl Sized` ... 
91 | fn mutual_recursion_b() -> impl Sized { | ---------- returning this opaque type `impl Sized` error[E0423]: expected function, tuple struct or tuple variant, found trait `Sized` --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:9:44 | 9 | if generator_sig() < 0 { None } else { Sized((option(i - Sized), i)) } | ^^^^^ not a function, tuple struct or tuple variant error[E0720]: cannot resolve opaque type --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:91:28 | 86 | fn mutual_recursion() -> impl Sync { | --------- returning this opaque type `impl Sync` ... 91 | fn mutual_recursion_b() -> impl Sized { | ^^^^^^^^^^ recursive opaque type 92 | //~^ ERROR 93 | mutual_recursion() | ------------------ returning here with type `impl Sync` error[E0121]: the placeholder `_` is not allowed within types on item signatures for return types --> /home/matthias/vcs/github/fuzz_input/fuzzcorpus/inactive/purgatory-x-impl_type_alias_104551/33b1504231498ba3eb4e991aa6e484d728c66cd5.rs:48:21 | 48 | fn closure_sig() -> _ { | ^ not allowed in type signatures ```
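The repeated E0720 "recursive opaque type" diagnostics above reject `impl Trait` returns that mention themselves. For context, one conventional workaround (not part of this fuzzer input, which deliberately keeps the recursion) is to box the recursive return as a trait object so the return type is no longer opaque:

```rust
// A recursive `fn countdown(n: u32) -> impl Iterator<Item = u32>` would hit
// E0720; returning a boxed trait object instead gives the recursion a
// concrete, nameable type.
fn countdown(n: u32) -> Box<dyn Iterator<Item = u32>> {
    if n == 0 {
        Box::new(std::iter::empty())
    } else {
        Box::new(std::iter::once(n).chain(countdown(n - 1)))
    }
}

fn main() {
    let v: Vec<u32> = countdown(3).collect();
    assert_eq!(v, vec![3, 2, 1]);
}
```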
I-ICE,T-compiler,C-bug,E-needs-mcve,F-coroutines,WG-compiler-parallel,I-cycle
low
Critical
2,501,608,170
pytorch
DISABLED test_serialization_array_with_empty (__main__.TestCuda)
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialization_array_with_empty&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29576190599). Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 21 failures and 7 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_serialization_array_with_empty` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1232, in not_close_error_metas pair.compare() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 711, in compare self._compare_values(actual, expected) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 841, in _compare_values compare_fn( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1023, in _compare_regular_values_close if torch.all(matches): RuntimeError: Host and device pointer dont match with cudaHostRegister. 
Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_cuda.py", line 608, in test_serialization_array_with_empty self.assertEqual(copy, original) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3846, in assertEqual error_metas = not_close_error_metas( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1239, in not_close_error_metas f"Comparing\n\n" File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 378, in __repr__ body = [ File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 379, in <listcomp> f" {name}={value!s}," File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 514, in __repr__ return torch._tensor_str._str(self, tensor_contents=tensor_contents) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 708, in _str return _str_intern(self, tensor_contents=tensor_contents) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 625, in _str_intern tensor_str = _tensor_str(self, indent) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 357, in _tensor_str formatter = _Formatter(get_summarized_data(self) if summarize else self) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 145, in __init__ nonzero_finite_vals = torch.masked_select( RuntimeError: Host and device pointer dont match with cudaHostRegister. 
Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default) To execute this test, run the following from the base repo dir: python test/test_cuda.py TestCuda.test_serialization_array_with_empty This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_cuda.py` cc @ptrblck @msaroufim @clee2000
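Step 3 of the debugging instructions above boils down to grepping the expanded Test-step log for the test name. A sketch against a stand-in file (`workflow.log` is a placeholder, not an actual CI artifact name):

```shell
# Fake a downloaded workflow log, then count occurrences of the flaky test,
# as the debugging instructions describe for the real log.
printf 'PASS test_foo\nFAIL test_serialization_array_with_empty\n' > workflow.log
grep -c 'test_serialization_array_with_empty' workflow.log   # -> 1
```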
module: cuda,triaged,module: flaky-tests,skipped
low
Critical
2,501,611,233
ui
[bug]: Tailwind config parsing fails when encountering variable fonts and Tailwind Typography
### Describe the bug I have this Tailwind config: ```ts import animate from 'tailwindcss-animate'; import typography from '@tailwindcss/typography'; import defaultTheme from 'tailwindcss/defaultTheme'; import type { Config } from 'tailwindcss'; const config: Config = { darkMode: ['class'], content: [ './pages/**/*.{ts,tsx}', './components/**/*.{ts,tsx}', './app/**/*.{ts,tsx}', './providers/**/*.{ts,tsx}', ], theme: { extend: { fontFamily: { sans: ['var(--font-geist-sans)', ...defaultTheme.fontFamily.sans], mono: ['var(--font-geist-mono)', ...defaultTheme.fontFamily.mono], }, typography: (theme: (path: string) => string) => ({ DEFAULT: { css: { ':first-child': { marginTop: theme('margin.0'), }, 'h1, h2, h3, h4, h5, h6': { fontWeight: theme('fontWeight.semibold'), letterSpacing: theme('letterSpacing.tight'), marginBottom: theme('margin.4'), '+ h1, + h2, + h3, + h4, + h5, + h6': { marginTop: theme('margin.0'), }, }, h1: { fontSize: theme('fontSize.3xl'), marginTop: theme('margin.16'), }, h2: { fontSize: theme('fontSize.2xl'), }, h3: { fontSize: theme('fontSize.xl'), }, h4: { fontSize: theme('fontSize.lg'), }, h5: { fontSize: theme('fontSize.base'), }, h6: { fontSize: theme('fontSize.base'), }, table: { boxShadow: `0 0 0 1px ${theme('colors.gray.200')}`, borderRadius: theme('borderRadius.md'), overflow: 'hidden', // eslint-disable-next-line id-length p: { margin: 0, }, th: { paddingTop: '0.5714286em', paddingRight: '0.5714286em', paddingBottom: '0.5714286em', paddingLeft: '0.5714286em', backgroundColor: theme('colors.gray.100'), '&:not(:last-child)': { borderRightWidth: '1px', borderRightColor: theme('colors.gray.200'), }, }, 'tbody td, tfoot td': { paddingLeft: '0.5714286em', '&:not(:last-child)': { borderRightWidth: '1px', borderRightColor: theme('colors.gray.200'), }, }, }, code: { '&::before, &::after': { display: 'none', }, }, pre: { backgroundColor: 'transparent', borderWidth: 1, borderColor: theme('colors.gray.200'), }, }, }, invert: { css: { table: { boxShadow: 
`0 0 0 1px ${theme('colors.gray.700')}`, th: { backgroundColor: theme('colors.gray.800'), '&:not(:last-child)': { borderRightColor: theme('colors.gray.700'), }, }, 'tbody td, tfoot td': { '&:not(:last-child)': { borderRightColor: theme('colors.gray.700'), }, }, }, pre: { borderColor: theme('colors.gray.800'), }, }, }, }), }, }, plugins: [animate, typography], }; export default config; ``` and `components.json`: ```json { "$schema": "https://ui.shadcn.com/schema.json", "style": "new-york", "rsc": true, "tsx": true, "tailwind": { "config": "./tailwind.config.ts", "css": "./globals.css", "baseColor": "neutral", "cssVariables": true, "prefix": "" }, "aliases": { "components": "@repo/shadcn-ui/components", "utils": "@repo/shadcn-ui/lib/utils", "ui": "@repo/shadcn-ui/components/ui", "lib": "@repo/shadcn-ui/lib", "hooks": "@repo/shadcn-ui/hooks" } } ``` Running `npx shadcn add --all` fails. See log below. ### Affected component/components CLI ### How to reproduce 1. Create the Tailwind config mentioned 2. Create the `components.json` mentioned 3. Run `npx shadcn add --all` ### Codesandbox/StackBlitz link _No response_ ### Logs ```bash npx shadcn add --all ✔ Checking registry. ⠋ Updating ./tailwind.config.ts Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. Manipulation error: A syntax error was inserted. ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:13 - error TS1137: Expression or comma expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:17 - error TS1138: Parameter declaration expected. 
25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:19 - error TS1005: ',' expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~~~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:23 - error TS1005: ',' expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:34 - error TS1005: ',' expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:35 - error TS1136: Property assignment expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:68 - error TS1005: ',' expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:25:69 - error TS1136: Property assignment expected. 25 sans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans], ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:13 - error TS1137: Expression or comma expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:17 - error TS1138: Parameter declaration expected. 
26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:19 - error TS1005: ',' expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~~~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:23 - error TS1005: ',' expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:34 - error TS1005: ',' expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:35 - error TS1134: Variable declaration expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:37 - error TS1134: Variable declaration expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~~~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:52 - error TS1005: ',' expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:63 - error TS1005: ',' expected. 26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:26:68 - error TS1005: ',' expected. 
26 mono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono] ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:27:5 - error TS1128: Declaration or statement expected. 27 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:27:6 - error TS1128: Declaration or statement expected. 27 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:29:22 - error TS1005: ';' expected. 29 'accordion-down': { ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:32:8 - error TS1128: Declaration or statement expected. 32 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:36:7 - error TS1128: Declaration or statement expected. 36 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:37:20 - error TS1005: ';' expected. 37 'accordion-up': { ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:40:8 - error TS1128: Declaration or statement expected. 40 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:45:6 - error TS1128: Declaration or statement expected. 45 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:47:22 - error TS1005: ';' expected. 47 'accordion-down': 'accordion-down 0.2s ease-out', ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:48:20 - error TS1005: ';' expected. 
48 'accordion-up': 'accordion-up 0.2s ease-out' ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:49:6 - error TS1128: Declaration or statement expected. 49 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:51:4 - error TS1128: Declaration or statement expected. 51 } ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:52:3 - error TS1128: Declaration or statement expected. 52 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:52:4 - error TS1128: Declaration or statement expected. 52 }, ~ ../../../../../../var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts:54:1 - error TS1109: Expression expected. 54 }; ~ Error replacing tree: The children of the old and new trees were expected to have the same count (7:12). 
-- Details -- Path: /var/folders/f6/74w1n7r52879g59_6xch69ph0000gn/T/shadcn-yhS9D6euUc1s/shadcn-tailwind.config.ts Text: "...,\n './components/**/*.{ts,tsx}',\n './app/**/*.{ts,tsx}',\n './providers/**/*.{ts,tsx}',\n ],\n theme: {\n \tcontainer: {\n \t\tcenter: 'true',\n \t\tpadding: '2rem',\n \t\tscreens: {\n \t\t\t'2xl': '1400px'\n \t\t}\n \t},\n \textend: {\n \t\tfontFamily: {\n \t\t\tsans: [var(--font-geist-sans), ...defaultTheme.fontFamily.sans],\n \t\t\tmono: [var(--font-geist-mono), ...defaultTheme.fontFamily.mono]\n \t\t},\n \t\tkeyframes: {\n \t\t\t'accordion-down': {\n \t\t\t\tfrom: {\n \t\t\t\t\theight: '0'\n \t\t\t\t},\n \t\t\t\tto: {\n \t\t\t\t\theight: 'var(--radix-accordion-content-height)'\n \t\t\t\t}\n \t\t\t},\n \t\t\t'accordion-up': {\n \t\t\t\tfrom: {\n \t\t\t\t\theight: 'var(--radix-accordion-content-height)'\n \t\t\t\t},\n \t\t\t\tto: {\n \t\t\t\t\theight: '0'\n \t\t\t\t}\n \t\t\t}\n \t\t},\n \t\tanimation: {\n \t\t\t'accordion-down': 'accordion-down 0.2s ease-out',\n \t\t\t'accordion-up': 'accordion-up 0.2s ease-out'\n \t\t},\n \t\ttypography: '(theme: (path: string) => string) => ({\\n DEFAULT: {\\n css: {\\n :first-child: {\\n marginTop: theme(margin.0),\\n },\\n h1, h2, h3, h4, h5, h6: {\\n fontWeight: theme(fontWeight.semibold),\\n letterSpacing: theme(letterSpacing.tight),\\n marginBottom: theme(margin.4),\\n\\n + h1, + h2, + h3, + h4, + h5, + h6: {\\n marginTop: theme(margin.0),\\n },\\n },\\n h1: {\\n fontSize: theme(fontSize.3xl),\\n marginTop: theme(margin.16),\\n },\\n h2: {\\n fontSize: theme(fontSize.2xl),\\n },\\n h3: {\\n fontSize: theme(fontSize.xl),\\n },\\n h4: {\\n fontSize: theme(fontSize.lg),\\n },\\n h5: {\\n fontSize: theme(fontSize.base),\\n },\\n h6: {\\n fontSize: theme(fontSize.base),\\n },\\n table: {\\n boxShadow: `0 0 0 1px ${theme(colors.gray.200)}`,\\n borderRadius: theme(borderRadius.md),\\n overflow: hidden,\\n // eslint-disable-next-line id-length\\n p: {\\n margin: 0,\\n },\\n th: {\\n paddingTop: 
0.5714286em,\\n paddingRight: 0.5714286em,\\n paddingBottom: 0.5714286em,\\n paddingLeft: 0.5714286em,\\n backgroundColor: theme(colors.gray.100),\\n &:not(:last-child): {\\n borderRightWidth: 1px,\\n borderRightColor: theme(colors.gray.200),\\n },\\n },\\n tbody td, tfoot td: {\\n paddingLeft: 0.5714286em,\\n &:not(:last-child): {\\n borderRightWidth: 1px,\\n borderRightColor: theme(colors.gray.200),\\n },\\n },\\n },\\n code: {\\n &::before, &::after: {\\n display: none,\\n },\\n },\\n pre: {\\n backgroundColor: transparent,\\n borderWidth: 1,\\n borderColor: theme(colors.gray.200),\\n },\\n },\\n },\\n invert: {\\n css: {\\n table: {\\n boxShadow: `0 0 0 1px ${theme(colors.gray.700)}`,\\n th: {\\n backgroundColor: theme(colors.gray.800),\\n &:not(:last-child): {\\n borderRightColor: theme(colors.gray.700),\\n },\\n },\\n tbody td, tfoot td: {\\n &:not(:last-child): {\\n borderRightColor: theme(colors.gray.700),\\n },\\n },\\n },\\n pre: {\\n borderColor: theme(colors.gray.800),\\n },\\n },\\n },\\n })'\n \t}\n },\n plugins: [animate, typography],\n};\n\nexport default config;\n" Stack: Error: Error replacing tree: The children of the old and new trees were expected to have the same count (7:12). 
at ParentFinderReplacementNodeHandler.handleChildren (/Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/ts-morph/dist/ts-morph.js:1436:19) at ParentFinderReplacementNodeHandler.handleNode (/Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/ts-morph/dist/ts-morph.js:1430:18) at ParentFinderReplacementNodeHandler.handleNode (/Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/ts-morph/dist/ts-morph.js:1570:19) at doManipulation (/Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/ts-morph/dist/ts-morph.js:2282:21) at insertIntoParentTextRange (/Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/ts-morph/dist/ts-morph.js:2317:5) at ObjectLiteralExpression.replaceWithText (/Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/ts-morph/dist/ts-morph.js:3644:9) at zt (file:///Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:5:4822) at async _t (file:///Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:5:3839) at async Fe (file:///Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:5:3433) at async ee (file:///Users/haydenbleasel/.npm/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:14:8868) ``` ### System Info ```bash MacOS 14.6.1 (23G93), Node v20.12.2 ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,501,614,040
ollama
RPI with armhf architecture support
### What is the issue? I followed every guide I could find online, using curl with your install.sh, the Docker image, and the snap package, but I could not manage to install Ollama, and it is not clear how to compile the code (there is no configure file). The main issue is that everything has been built for arm64 and not for my architecture, armhf, which I need for my current display. Is there a way to install the arm64 build on the armhf architecture? ### OS Linux ### GPU _No response_ ### CPU Other ### Ollama version any
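As a first step, it may help to confirm what the board is actually running, since the arm64 build requires a 64-bit kernel and userland. A quick check (assuming a Debian-based Raspberry Pi OS):

```shell
# armv6l / armv7l => 32-bit (armhf) kernel: the arm64 build cannot run at all.
# aarch64         => 64-bit kernel: an arm64 userland or container could host ollama.
uname -m
```

On a 64-bit kernel with a 32-bit userland, one possible route is running the arm64 binary inside an arm64 container (e.g. `docker run --platform linux/arm64 ...`), but a purely armhf system would need an armhf build.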
feature request,build
low
Minor
2,501,615,190
ui
[bug]: DropdownMenuSubTrigger hover/focus bg not working
### Describe the bug https://github.com/user-attachments/assets/c562db81-04b8-4ab3-9ceb-0870c3121ccf As shown in the video, when I hover on the theme menu item, it will correctly pick up focused bg, but soon it will become grey. How to make it consistent and use focused bg? ``` <DropdownMenuSubTrigger className="focus:bg-bg-weak-50 hover:bg-bg-weak-50 hover:rounded-[var(--radius-8,8px)] rounded-[var(--radius-8,8px)] bg-[var(--bg-white-0,#FFF)] flex p-3 items-center gap-2 self-stretch text-[var(--text-strong-950,#0E121B)] font-inter text-sm font-normal leading-[20px] tracking-[-0.084px] cursor-pointer"> {theme === 'light' ? lightIcon : darkIcon} {theme === 'light' ? 'Light' : 'Dark'} </DropdownMenuSubTrigger> ``` ### Affected component/components <DropdownMenuSubTrigger className="focus:bg-bg-weak-50 hover:bg-bg-weak-50 hover:rounded-[var(--radius-8,8px)] rounded-[var(--radius-8,8px)] bg-[var(--bg-white-0,#FFF)] flex p-3 items-center gap-2 self-stretch text-[var(--text-strong-950,#0E121B)] font-inter text-sm font-normal leading-[20px] tracking-[-0.084px] cursor-pointer"> ### How to reproduce Use ``` <DropdownMenuSubTrigger className="focus:bg-bg-weak-50 hover:bg-bg-weak-50 hover:rounded-[var(--radius-8,8px)] rounded-[var(--radius-8,8px)] bg-[var(--bg-white-0,#FFF)] flex p-3 items-center gap-2 self-stretch text-[var(--text-strong-950,#0E121B)] font-inter text-sm font-normal leading-[20px] tracking-[-0.084px] cursor-pointer"> {theme === 'light' ? lightIcon : darkIcon} {theme === 'light' ? 'Light' : 'Dark'} </DropdownMenuSubTrigger> ``` ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash Chrome ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
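A likely cause, assuming your copy of `dropdown-menu.tsx` matches the shadcn/ui default: the stock `DropdownMenuSubTrigger` ships with `focus:bg-accent data-[state=open]:bg-accent`, so once the submenu opens, the `data-[state=open]` rule takes over and paints the default grey. Radix exposes trigger state through data attributes, so overriding those directly may keep the background consistent. A sketch (`bg-bg-weak-50` is your project's own token; the class list is trimmed for readability):

```tsx
<DropdownMenuSubTrigger
  className="rounded-[var(--radius-8,8px)] p-3
             focus:bg-bg-weak-50
             data-[highlighted]:bg-bg-weak-50
             data-[state=open]:bg-bg-weak-50"
>
  {theme === 'light' ? lightIcon : darkIcon}
  {theme === 'light' ? 'Light' : 'Dark'}
</DropdownMenuSubTrigger>
```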
bug
low
Critical
2,501,681,970
godot
Godot crashes when opening a project
### Tested versions This happened to me on the latest version (4.3), both the Steam build and the one downloaded in Chrome (64-bit) ### System information Windows 10, Intel Core i5-7300HQ CPU 2.GHZ, 16 GB RAM, GeForce GTX 1050 ### Issue description Godot closes by itself when opening or creating a project. I tried to fix the problem by restarting my PC and reinstalling both the Steam version and the downloaded one ### Steps to reproduce This is my first question on GitHub, I don't know what this is XDD ### Minimal reproduction project (MRP) Also :v
needs testing,crash
low
Critical
2,501,704,385
PowerToys
File Locksmith does not appear in context menu
### Microsoft PowerToys version 0.83 ### Installation method PowerToys auto-update ### Running as admin Yes ### Area(s) with issue? File Locksmith ### Steps to reproduce Does not appear in context menu. Changed options, rebooted - nothing. ![lock](https://github.com/user-attachments/assets/ae933c1a-435c-46bf-9901-b5463c7c25ee) ### ✔️ Expected Behavior Show the File Locksmith menu item ### ❌ Actual Behavior Nothing appears ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,501,727,567
godot
GDShader preprocessor evaluates weird functions in `#if` condition expressions
### Tested versions - Reproducible in v4.3.stable.flathub [77dcf97d82cbfe4e4615475fa52ca03da645dbd8] ### System information Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 23.08 (Flatpak runtime) - X11 - Vulkan (Forward+) - integrated Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz (4 Threads) ### Issue description In GDShader `#if` condition expressions, you can use several operators (more than I expected it to allow) and even functions (e.g. math like `sqrt`, `sin`), which will be evaluated in the preprocessor. Which ones are allowed is undocumented. At first I expected those to at least match the GDShader functions and operator syntax. But I found it's not always the case (e.g. ternary `? :` doesn't work, `inversesqrt(...)` doesn't either). After some more tests (see https://github.com/godotengine/godot/issues/96253#issuecomment-2318882070 for background), I found it's accepting functions in `@GlobalScope` as well as some constructors like `bool` etc. It even allows functions that are very strange to allow, like `randf`, `instance_from_id` and `rid_from_int64`. ***Allowing even stuff like `bytes_to_var_with_objects` seems like it could be particularly dangerous (ACE risk?).*** But I don't know for sure to which extent it allows functions with side-effects and whether it affects the editor. In any case, I'm surprised things like functions work at all. I was expecting that dealing with integers and booleans would be enough for a preprocessor. In fact, ***if the intention is to match C/GLSL, then GDShader is doing way more than it should*** in the preprocessor `#if` directives. ***Not even C deals with float or boolean constants in the preprocessor, let alone math functions. GLSL doesn't either.*** They do handle integers without the `u` suffix (so no `uint` support). C does handle arbitrary names, treating them like `0` (even `true` is treated like `0`), but GLSL doesn't allow them on `#if`. 
GDShader allows most literals (bool, int_decimal, int_hex, uint_decimal, float), except for uint_hex like `0x0u`. Comments from @pirey0: > 7 - function calls in #if directives: > Really interesting stuff! Looks like it could be very useful and at the same time very dangerous. Again probably a topic for a rendering meeting (unless this was already discussed in the past.) > I brought up this Issue in the 2024-09-03 Rendering Meeting, these are the key points: > - Aim for glsl-like behavior, so that shaders can be ported over very easily > - Do not bloat the preprocessor codebase > - May need to scope back some of the eval() stuff in #if directives IMO, GDShader should match GLSL, and be as safe as possible: - only deterministic behavior must be allowed - no side-effects - no preprocessor functions, only macros - no dealing with float at all; no boolean literals either - there's no need to extrapolate C/GLSL behavior - keep current GLSL-like (not C-like) behavior of not expanding undefined macro names as `0` - so even though `true` isn't a keyword, it won't expand to `0` by default; raise "undefined macro" error instead - only integer literals should be allowed, as well as macros that evaluate to integers; - like expected in C and GLSL, have 0 mean false and non-zero mean true when on the context of boolean operators (like `&&` and `||`) and the final `#if` expression boolean result - no arbitrary expressions, only allow: - int32 literals (decimal and hex) without `u` suffix - parentheses - macro expansions (simple and function-like calls) - logical and arithmetic operators to deal with integers as said above - the `defined(macro_name)` (also `defined macro_name`) preprocessor keyword is the only "function" allowed A lot of these behaviors are explicitly defined in the [GLSL ES 3.00 spec](https://registry.khronos.org/OpenGL/specs/es/3.0/GLSL_ES_Specification_3.00.pdf) pages 13~14. 
--- Moved from [#96253 (sub-issue 7).](https://github.com/godotengine/godot/issues/96253#issuecomment-2318882070) ### Steps to reproduce `bug-global-fn-evaluation.gdshader` ```glsl shader_type spatial; #if (sin(0) + typeof(print(rid_allocate_id())) - typeof(print(bytes_to_var_with_objects([])))) // can typing here have editor side-effects? it prints to console at least... // bytes_to_var_with_objects = arbitrary code execution risk? no idea code, etc #endif #if randf() < 0.5 I have heard of non deterministic compilation but this is crazy // 50% chance of this code being included every time I type #endif $ // error on purpose to log preprocessor output ``` ### Minimal reproduction project (MRP) N/A
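To make the proposal concrete, here is a sketch (my own illustration of the proposed rules, not current Godot behavior) of what a GLSL-aligned preprocessor would and wouldn't accept:

```glsl
#define LEVEL 2
#define DOUBLE(x) ((x) * 2)

#if defined(LEVEL) && DOUBLE(LEVEL) > 3  // OK: defined(), macro calls, int arithmetic
// included
#endif

//#if sin(0.5) > 0.0   // error: no functions, no float literals
//#if true && LEVEL    // error: `true` is an undefined macro, not a keyword
//#if 0x0u             // error: no `u` suffix on integer literals
```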
bug,topic:shaders
low
Critical
2,501,752,367
pytorch
[torch.jit] INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/symbolic_shape_analysis.cpp":741, please report a bug to PyTorch. input_shapes size: 1 n inputs size: 2
### 🐛 Describe the bug An INTERNAL ASSERT error will be raised when using `torch.jit.script` together with `torch._C._jit_pass_propagate_shapes_on_graph_and_build_compute`. The code is as follows: ```python import torch @torch.jit.script def foo1(a, b, x, y): return a / b + torch.cat([]) @torch.jit.script def foo2(a, b, x, y): return a / b + torch.cat([y], dim=-2) for foo in (foo1, foo2): g = foo.graph for inp in foo.graph.inputs(): inp.setType(inp.type().with_sizes([])) shape_compute_graph = torch._C._jit_pass_propagate_shapes_on_graph_and_build_compute(foo.graph) ``` Error messages: > Traceback (most recent call last): > File "/data/code.py", line 18, in <module> > shape_compute_graph = torch._C._jit_pass_propagate_shapes_on_graph_and_build_compute(foo.graph) > RuntimeError: input_shapes.size() >= n->inputs().size() INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/symbolic_shape_analysis.cpp":741, please report a bug to PyTorch. input_shapes size: 1 n inputs size: 2 The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu`. ### Versions Collecting environment information... 
PyTorch version: 2.5.0.dev20240815+cpu Is debug build: False CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31 Is CUDA available: False CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: N/A GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 GPU 1: NVIDIA GeForce RTX 3090 Nvidia driver version: 535.183.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz Stepping: 6 CPU MHz: 800.000 CPU max MHz: 3500.0000 CPU min MHz: 800.0000 BogoMIPS: 5800.00 Virtualization: VT-x L1d cache: 1.5 MiB L1i cache: 1 MiB L2 cache: 40 MiB L3 cache: 48 MiB NUMA node0 CPU(s): 0-15,32-47 NUMA node1 CPU(s): 16-31,48-63 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB 
conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] onnx==1.16.2 [pip3] onnxruntime==1.19.0 [pip3] onnxscript==0.1.0.dev20240816 [pip3] pytorch-triton==3.0.0+dedb7bdf33 [pip3] torch==2.5.0.dev20240815+cpu [pip3] torch-xla==2.4.0 [pip3] torch_xla_cuda_plugin==2.4.0 [pip3] torchaudio==2.4.0.dev20240815+cu121 [pip3] torchvision==0.20.0.dev20240815+cu121 [pip3] triton==3.0.0 [conda] numpy 1.26.4 pypi_0 pypi [conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi [conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi [conda] torch-xla 2.4.0 pypi_0 pypi [conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi [conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi [conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
oncall: jit
low
Critical
2,501,763,684
storybook
[Bug]: The preview.js 'globals' field is deprecated and will be removed in Storybook 9.0.
### Describe the bug I think @storybook/addon-themes is adding to the deprecated globals field. <img width="844" alt="image" src="https://github.com/user-attachments/assets/6d66c548-23a0-4d9a-a423-47ad94830416"> causing an unexpected error to be thrown that I, as a user, cannot fix. ### Reproduction link https://stackblitz.com/edit/github-edphde?file=.storybook%2Fmain.js ### Reproduction steps 1. Add a theme with tailwind following https://github.com/storybookjs/storybook/blob/next/code/addons/themes/docs/getting-started/tailwind.md 2. open the console to see the error ### System ```bash Storybook Environment Info: System: OS: macOS 14.5 CPU: (12) arm64 Apple M3 Pro Shell: 5.9 - /bin/zsh Binaries: Node: 20.12.2 - ~/.local/state/fnm_multishells/70585_1725326396129/bin/node npm: 10.8.2 - ~/.local/state/fnm_multishells/70585_1725326396129/bin/npm <----- active Browsers: Chrome: 128.0.6613.114 Safari: 17.5 npmPackages: @storybook/addon-actions: ^8.2.9 => 8.2.9 @storybook/addon-essentials: ^8.2.9 => 8.2.9 @storybook/addon-interactions: ^8.2.9 => 8.2.9 @storybook/addon-links: ^8.2.9 => 8.2.9 @storybook/addon-themes: ^8.2.9 => 8.2.9 @storybook/react: ^8.2.9 => 8.2.9 @storybook/react-vite: ^8.2.9 => 8.2.9 @storybook/test: ^8.2.9 => 8.2.9 msw-storybook-addon: ^2.0.3 => 2.0.3 storybook: ^8.2.9 => 8.2.9 ``` ### Additional context I tried to repro this in stackblitz but it is using beta versions of storybook and may have been fixed?
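For reference, in userland `preview.js` this deprecation is resolved by renaming the field (assuming Storybook >= 8.2, where `initialGlobals` was introduced as the replacement for the preview-level `globals`); since the warning here is triggered by the addon, it would only go away once `@storybook/addon-themes` makes the same change internally. A sketch of the renamed field (the `theme` key is just an example value):

```js
// .storybook/preview.js
export default {
  initialGlobals: {
    theme: 'light',
  },
};
```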
bug,needs triage
low
Critical
2,501,773,768
vscode
Markdown language detection is overly aggressive and cannot be disabled
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.92.2 - OS Version: macOS Sonoma 14.6.1 Steps to Reproduce: 1. Open a new file 2. Type `how about vscode not do this` (or your preferred string). Or paste any URL (like https://github.com/microsoft/vscode/issues/227416). 3. VS Code detects the file as Markdown for some reason? This happens if History Based Language Detection is enabled or disabled. The `workbench.editor.languageDetection` setting is of no use here, so it doesn't seem like this can be disabled. When I click on the Select Language Mode button, the probability that I am switching from Markdown to Plain Text is approximately 100%. Please fix Markdown language detection, or give us a setting to turn it off for specific languages.
feature-request,languages-guessing
low
Critical
2,501,786,900
rust
Cargo fix error
``` ❯ cargo fix --bin aplang --allow-dirty 6 Checking aplang v0.1.0 (/code/rust/aplang) warning: failed to automatically apply fixes suggested by rustc to crate `aplang` after fixes were automatically applied the compiler reported errors within these files: * src/aplang_std/file_system.rs * src/aplang_std/std_macros.rs * src/ast.rs * src/interpreter.rs * src/parser.rs This likely indicates a bug in either rustc or cargo itself, and we would appreciate a bug report! You're likely to see a number of compiler warnings after this message which cargo attempted to fix but failed. If you could open an issue at https://github.com/rust-lang/rust/issues quoting the full output of this command we'd be very appreciative! Note that you may be able to make some more progress in the near-term fixing code with the `--broken-code` flag The following errors were reported: warning: unused import: `Diagnostic` --> src/parser.rs:7:22 | 7 | use miette::{miette, Diagnostic, LabeledSpan, NamedSource, Report}; | ^^^^^^^^^^ | = note: `#[warn(unused_imports)]` on by default warning: unused variable: `import_stmt` --> src/ast.rs:359:30 | 359 | Stmt::Import(import_stmt) => Box::new( | ^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_import_stmt` | = note: `#[warn(unused_variables)]` on by default warning: unused variable: `b` --> src/interpreter.rs:223:17 | 223 | let (a, b) = self.functions.get(&function_name).ok_or( | ^ help: if this is intentional, prefix it with an underscore: `_b` warning: unused variable: `l` --> src/interpreter.rs:776:23 | 776 | (op, List(l)) => Err( | ^ help: if this is intentional, prefix it with an underscore: `_l` warning: unused variable: `lp_token` --> src/parser.rs:113:13 | 113 | let lp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_lp_token` warning: unused variable: `peeked` --> src/parser.rs:138:25 | 138 | let peeked = self.peek(); | ^^^^^^ help: if this is intentional, prefix it with an 
underscore: `_peeked` warning: unused variable: `rp_token` --> src/parser.rs:172:13 | 172 | let rp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_rp_token` warning: unused variable: `token` --> src/parser.rs:145:56 | 145 | let token = self.consume(&Identifier, |token| miette!( | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:266:36 | 266 | .consume(&RightBrace, |token| { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:307:38 | 307 | self.consume(&SoftSemi, |token| miette!{ | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:321:45 | 321 | let mod_token = self.consume(&Mod, |token| miette! { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:325:59 | 325 | let import_string = self.consume(&StringLiteral, |token| miette! { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:329:34 | 329 | self.consume(&SoftSemi, |token| miette! 
{ | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `lp_token` --> src/parser.rs:342:13 | 342 | let lp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_lp_token` warning: unused variable: `rp_token` --> src/parser.rs:362:13 | 362 | let rp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_rp_token` warning: unused variable: `lp_token` --> src/parser.rs:466:13 | 466 | let lp_token = self.consume(&LeftParen, |token| { | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_lp_token` warning: unused variable: `rp_token` --> src/parser.rs:483:13 | 483 | let rp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_rp_token` warning: unused variable: `token` --> src/parser.rs:445:31 | 445 | .consume(&Until, |token| { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `next_token` --> src/parser.rs:990:33 | 990 | ... 
let next_token = self.peek(); | ^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_next_token` warning: unused variable: `report` --> src/parser.rs:1292:21 | 1292 | let report = report_handler(); | ^^^^^^ help: if this is intentional, prefix it with an underscore: `_report` warning: unused variable: `report` --> src/parser.rs:1304:21 | 1304 | let report = report_handler(); | ^^^^^^ help: if this is intentional, prefix it with an underscore: `_report` warning: unused variable: `iter` --> src/aplang_std/std_macros.rs:11:25 | 11 | let iter = args.into_iter(); | ^^^^ help: if this is intentional, prefix it with an underscore: `_iter` | ::: src/aplang_std/time.rs:9:5 | 9 | / std_function!(env.functions=> fn TIME() { 10 | | 11 | | let now = SystemTime::now(); 12 | | let unix_time_ms = now.duration_since(UNIX_EPOCH).expect("TIME WENT BACKWARDS???"); 13 | | 14 | | return Ok(Value::Number(unix_time_ms.as_millis() as f64)) 15 | | }); | |______- in this macro invocation | = note: this warning originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) warning: unused variable: `e` --> src/aplang_std/file_system.rs:56:20 | 56 | if let Err(e) = write!(file, "{}" , contents) { | ^ help: if this is intentional, prefix it with an underscore: `_e` warning: unused variable: `e` --> src/aplang_std/file_system.rs:71:20 | 71 | if let Err(e) = write!(file, "{}", contents){ | ^ help: if this is intentional, prefix it with an underscore: `_e` error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/file_system.rs:15:5 | 15 | / std_function!(env.functions => fn PATH_EXISTS(path: Value) { 16 | | unwrap_arg_type!(path => Value::String); 17 | | 18 | | let exists = Path::new(&path).exists(); 19 | | 20 | | return Ok(Value::Bool(exists)) 21 | | }); | |______- in this macro 
invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/file_system.rs:24:5 | 24 | / std_function!(env.functions => fn FILE_CREATE(file_path: Value) { 25 | | unwrap_arg_type!(file_path => Value::String); 26 | | 27 | | return match File::create_new(file_path) { ... | 30 | | } 31 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/file_system.rs:34:5 | 34 | / std_function!(env.functions => fn FILE_READ(file_path: Value) { 35 | | unwrap_arg_type!(file_path => Value::String); 36 | | 37 | | return match fs::read_to_string(file_path) { ... 
| 45 | | } 46 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/file_system.rs:49:5 | 49 | / std_function!(env.functions => fn FILE_APPEND(file_path: Value, contents: Value) { 50 | | unwrap_arg_type!(file_path => Value::String); 51 | | 52 | | let Ok(mut file) = OpenOptions::new().append(true).open(file_path) else { ... | 60 | | return Ok(Value::Bool(true)) 61 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/file_system.rs:64:5 | 64 | / std_function!(env.functions => fn FILE_OVERWRITE(file_path: Value, contents: Value) { 65 | | unwrap_arg_type!(file_path => Value::String); 66 | | 67 | | let Ok(mut file) = OpenOptions::new().write(true).truncate(true).open(file_path) else { ... 
| 75 | | return Ok(Value::Bool(true)) 76 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/mod.rs:48:5 | 48 | / std_function!(env.functions => fn DISPLAY(value: Value) { 49 | | println!("{}", value); 50 | | 51 | | return Ok(Value::Null) 52 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/mod.rs:54:5 | 54 | / std_function!(env.functions => fn INSERT(list: Value, i: Value, value: Value) { 55 | | unwrap_arg_type!(list => Value::List); 56 | | unwrap_arg_type!(i => Value::Number); 57 | | ... 
| 61 | | return Ok(Value::Null) 62 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/mod.rs:64:5 | 64 | / std_function!(env.functions => fn APPEND(list: Value, value: Value) { 65 | | unwrap_arg_type!(list => Value::List); 66 | | 67 | | list.borrow_mut().push(value.clone()); 68 | | 69 | | return Ok(Value::Null) 70 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/mod.rs:72:5 | 72 | / std_function!(env.functions => fn REMOVE(list: Value, i: Value) { 73 | | unwrap_arg_type!(list => Value::List); 74 | | unwrap_arg_type!(i => Value::Number); 75 | | ... 
| 79 | | return Ok(poped); 80 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error[E0596]: cannot borrow `iter` as mutable, as it is not declared as mutable --> src/aplang_std/std_macros.rs:12:35 | 12 | $( let $arg = iter.next().unwrap();)* | ^^^^ cannot borrow as mutable | ::: src/aplang_std/mod.rs:82:6 | 82 | / std_function!(env.functions => fn LENGTH(list: Value) { 83 | | unwrap_arg_type!(list => Value::List); 84 | | 85 | | let len = list.borrow().len() as f64; 86 | | 87 | | return Ok(Value::Number(len)) 88 | | }); | |______- in this macro invocation | = note: this error originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider changing this to be mutable | 11 | let mut iter = args.into_iter(); | +++ error: aborting due to 10 previous errors; 24 warnings emitted For more information about this error, try `rustc --explain E0596`. Original diagnostics will follow. 
warning: unnecessary parentheses around type --> src/lexer.rs:399:18 | 399 | type Error = (String); | ^ ^ | = note: `#[warn(unused_parens)]` on by default help: remove these parentheses | 399 - type Error = (String); 399 + type Error = String; | warning: unnecessary parentheses around type --> src/lexer.rs:413:18 | 413 | type Error = (String); | ^ ^ | help: remove these parentheses | 413 - type Error = (String); 413 + type Error = String; | warning: unused import: `crate::token::TokenType::Identifier` --> src/parser.rs:1255:9 | 1255 | use crate::token::TokenType::Identifier; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: `#[warn(unused_imports)]` on by default warning: unused imports: `Token`, `get_keywords_hashmap` --> src/parser.rs:1256:24 | 1256 | use crate::token::{get_keywords_hashmap, Token}; | ^^^^^^^^^^^^^^^^^^^^ ^^^^^ warning: unused imports: `Severity`, `miette` --> src/parser.rs:1257:18 | 1257 | use miette::{miette, Report, Severity}; | ^^^^^^ ^^^^^^^^ warning: unused import: `crate::aplang_std::file_system::file_system` --> src/aplang_std/mod.rs:6:5 | 6 | use crate::aplang_std::file_system::file_system; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ warning: unused import: `crate::errors::RuntimeError` --> src/aplang_std/time.rs:4:5 | 4 | use crate::errors::RuntimeError; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ warning: unused import: `std::rc::Rc` --> src/aplang_std/std_macros.rs:1:5 | 1 | use std::rc::Rc; | ^^^^^^^^^^^ warning: unused import: `crate::errors::RuntimeError` --> src/aplang_std/std_macros.rs:2:5 | 2 | use crate::errors::RuntimeError; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ warning: unused imports: `Env`, `Interpreter`, `NativeProcedure`, `Value` --> src/aplang_std/std_macros.rs:3:26 | 3 | use crate::interpreter::{Env, Interpreter, NativeProcedure, Value}; | ^^^ ^^^^^^^^^^^ ^^^^^^^^^^^^^^^ ^^^^^ warning: unused import: `std::sync::Arc` --> src/aplang_std/std_macros.rs:4:5 | 4 | use std::sync::Arc; | ^^^^^^^^^^^^^^ warning: unused import: `Diagnostic` --> 
src/parser.rs:7:22 | 7 | use miette::{miette, Diagnostic, LabeledSpan, NamedSource, Report}; | ^^^^^^^^^^ warning: unused variable: `import_stmt` --> src/ast.rs:359:30 | 359 | Stmt::Import(import_stmt) => Box::new( | ^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_import_stmt` | = note: `#[warn(unused_variables)]` on by default warning: unused variable: `b` --> src/interpreter.rs:223:17 | 223 | let (a, b) = self.functions.get(&function_name).ok_or( | ^ help: if this is intentional, prefix it with an underscore: `_b` warning: variable does not need to be mutable --> src/interpreter.rs:368:21 | 368 | let mut values = match self.expr(&for_each.list)? { | ----^^^^^^ | | | help: remove this `mut` | = note: `#[warn(unused_mut)]` on by default warning: variable does not need to be mutable --> src/interpreter.rs:369:33 | 369 | Value::List(mut list) => list, | ----^^^^ | | | help: remove this `mut` warning: variable does not need to be mutable --> src/interpreter.rs:657:21 | 657 | if let Some(mut target) = list.borrow_mut().get_mut((idx - 1.0) as usize) { | ----^^^^^^ | | | help: remove this `mut` warning: unused variable: `l` --> src/interpreter.rs:776:23 | 776 | (op, List(l)) => Err( | ^ help: if this is intentional, prefix it with an underscore: `_l` warning: unused variable: `lp_token` --> src/parser.rs:113:13 | 113 | let lp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_lp_token` warning: unused variable: `peeked` --> src/parser.rs:138:25 | 138 | let peeked = self.peek(); | ^^^^^^ help: if this is intentional, prefix it with an underscore: `_peeked` warning: unused variable: `rp_token` --> src/parser.rs:172:13 | 172 | let rp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_rp_token` warning: unused variable: `token` --> src/parser.rs:145:56 | 145 | let token = self.consume(&Identifier, |token| miette!( | ^^^^^ help: if this is intentional, prefix it with an underscore: 
`_token` warning: unused variable: `token` --> src/parser.rs:266:36 | 266 | .consume(&RightBrace, |token| { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:307:38 | 307 | self.consume(&SoftSemi, |token| miette!{ | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:321:45 | 321 | let mod_token = self.consume(&Mod, |token| miette! { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:325:59 | 325 | let import_string = self.consume(&StringLiteral, |token| miette! { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `token` --> src/parser.rs:329:34 | 329 | self.consume(&SoftSemi, |token| miette! { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `lp_token` --> src/parser.rs:342:13 | 342 | let lp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_lp_token` warning: unused variable: `rp_token` --> src/parser.rs:362:13 | 362 | let rp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_rp_token` warning: unused variable: `lp_token` --> src/parser.rs:466:13 | 466 | let lp_token = self.consume(&LeftParen, |token| { | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_lp_token` warning: unused variable: `rp_token` --> src/parser.rs:483:13 | 483 | let rp_token = self | ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_rp_token` warning: unused variable: `token` --> src/parser.rs:445:31 | 445 | .consume(&Until, |token| { | ^^^^^ help: if this is intentional, prefix it with an underscore: `_token` warning: unused variable: `next_token` --> src/parser.rs:990:33 | 990 | ... 
let next_token = self.peek(); | ^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_next_token` warning: unused variable: `report` --> src/parser.rs:1292:21 | 1292 | let report = report_handler(); | ^^^^^^ help: if this is intentional, prefix it with an underscore: `_report` warning: unused variable: `report` --> src/parser.rs:1304:21 | 1304 | let report = report_handler(); | ^^^^^^ help: if this is intentional, prefix it with an underscore: `_report` warning: unused variable: `iter` --> src/aplang_std/std_macros.rs:15:29 | 15 | let mut iter = args.into_iter(); | ^^^^ help: if this is intentional, prefix it with an underscore: `_iter` | ::: src/aplang_std/time.rs:10:5 | 10 | / std_function!(env.functions=> fn TIME() { 11 | | 12 | | let now = SystemTime::now(); 13 | | let unix_time_ms = now.duration_since(UNIX_EPOCH).expect("TIME WENT BACKWARDS???"); 14 | | 15 | | return Ok(Value::Number(unix_time_ms.as_millis() as f64)) 16 | | }); | |______- in this macro invocation | = note: this warning originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) warning: variable does not need to be mutable --> src/aplang_std/std_macros.rs:15:25 | 15 | let mut iter = args.into_iter(); | ----^^^^ | | | help: remove this `mut` | ::: src/aplang_std/time.rs:10:5 | 10 | / std_function!(env.functions=> fn TIME() { 11 | | 12 | | let now = SystemTime::now(); 13 | | let unix_time_ms = now.duration_since(UNIX_EPOCH).expect("TIME WENT BACKWARDS???"); 14 | | 15 | | return Ok(Value::Number(unix_time_ms.as_millis() as f64)) 16 | | }); | |______- in this macro invocation | = note: this warning originates in the macro `std_function` (in Nightly builds, run with -Z macro-backtrace for more info) warning: unused variable: `e` --> src/aplang_std/file_system.rs:56:20 | 56 | if let Err(e) = write!(file, "{}" , contents) { | ^ help: if this is intentional, prefix it with an underscore: `_e` warning: unused variable: `e` --> 
src/aplang_std/file_system.rs:71:20 | 71 | if let Err(e) = write!(file, "{}", contents){ | ^ help: if this is intentional, prefix it with an underscore: `_e` warning: type `interpreter::Context` is more private than the item `Env::venv` --> src/interpreter.rs:145:5 | 145 | pub venv: Vec<Context>, | ^^^^^^^^^^^^^^^^^^^^^^ field `Env::venv` is reachable at visibility `pub(crate)` | note: but type `interpreter::Context` is only usable at visibility `pub(self)` --> src/interpreter.rs:149:1 | 149 | struct Context { | ^^^^^^^^^^^^^^ = note: `#[warn(private_interfaces)]` on by default warning: type `interpreter::Context` is more private than the item `Env::scrape` --> src/interpreter.rs:179:5 | 179 | pub fn scrape(&mut self) -> Context { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ method `Env::scrape` is reachable at visibility `pub(crate)` | note: but type `interpreter::Context` is only usable at visibility `pub(self)` --> src/interpreter.rs:149:1 | 149 | struct Context { | ^^^^^^^^^^^^^^ warning: `aplang` (bin "aplang") generated 41 warnings (run `cargo fix --bin "aplang"` to apply 15 suggestions) Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.17s ```
T-compiler,T-cargo,C-bug,D-incorrect,S-needs-repro
low
Critical
2,501,865,328
react-native
Currency unit EURO (€) is spoken only on Google Pixel devices and not on Samsung devices in TalkBack
### Description Currency unit EURO (represented by the € symbol) is spoken ONLY for Google Pixel device and NOT for Samsung devices ### Steps to reproduce 1) Install the application with yarn android 2) Enable Talkback 3) Listen to speech output for prices ### React Native Version 0.73.6 ### Affected Platforms Runtime - Android ### Output of `npx react-native info` ```text System: OS: macOS 14.6.1 CPU: (10) arm64 Apple M1 Pro Memory: 62.84 MB / 16.00 GB Shell: version: "5.9" path: /bin/zsh Binaries: Node: version: 18.13.0 path: ~/.asdf/installs/nodejs/18.13.0/bin/node Yarn: version: 1.22.22 path: ~/.asdf/installs/nodejs/18.13.0/bin/yarn npm: version: 8.19.3 path: ~/.asdf/plugins/nodejs/shims/npm Watchman: version: 2024.05.06.00 path: /opt/homebrew/bin/watchman Managers: CocoaPods: version: 1.12.1 path: /Users/I583816/.asdf/shims/pod SDKs: iOS SDK: Platforms: - DriverKit 23.5 - iOS 17.5 - macOS 14.5 - tvOS 17.5 - visionOS 1.2 - watchOS 10.5 Android SDK: API Levels: - "30" - "31" - "32" - "33" - "33" - "34" Build Tools: - 29.0.2 - 30.0.3 - 31.0.0 - 32.0.0 - 33.0.0 - 33.0.1 - 33.0.2 - 34.0.0 - 34.0.0 - 34.0.0 System Images: - android-30 | Google APIs ARM 64 v8a - android-31 | Google APIs ARM 64 v8a - android-31 | Google Play ARM 64 v8a - android-33 | Google APIs ARM 64 v8a - android-34 | Google Play ARM 64 v8a Android NDK: Not Found IDEs: Android Studio: 2022.3 AI-223.8836.35.2231.11005911 Xcode: version: 15.4/15F31d path: /usr/bin/xcodebuild Languages: Java: version: 17.0.9 path: /Users/I583816/Library/Java/JavaVirtualMachines/corretto-17.0.9/Contents/Home/bin/javac Ruby: version: 3.1.1 path: /Users/I583816/.asdf/shims/ruby npmPackages: "@react-native-community/cli": Not Found react: installed: 18.2.0 wanted: 18.2.0 react-native: installed: 0.73.6 wanted: 0.73.6 react-native-macos: Not Found npmGlobalPackages: "*react-native*": Not Found Android: hermesEnabled: true newArchEnabled: false iOS: hermesEnabled: true newArchEnabled: false ``` ### Stacktrace or Logs 
```text N/A ``` ### Reproducer N/A ### Screenshots and Videos _No response_
Needs: Repro,Newer Patch Available,Needs: Attention
low
Major
2,501,866,034
PowerToys
Remap CapsLock to Ctrl does not work in Hyper-V Windows 11 VM
### Microsoft PowerToys version v0.83.0 ### Installation method Scoop ### Running as admin None ### Area(s) with issue? Keyboard Manager ### Steps to reproduce Configure Caps Lock -> Ctrl in the Keyboard Manager: ![image](https://github.com/user-attachments/assets/2f6b6ff4-36ae-423d-b52e-3887de83e56a) ### ✔️ Expected Behavior When I use `Ctrl+<KEY>`, it should work the same as `CapsLock + <KEY>`. For example CTRL+N launches a new windows in Google Chrome. ### ❌ Actual Behavior `CapsLock + <KEY>` seems to just do `<KEY>`. When I click to configure the keyboard remapping inside PowerToys, it does actually pick up `Ctrl (left)` when I clicked CapsLock. So on some level it is reconfigured and sending the ctrl key instead of CapsLock. ### Other Software In Hyper-V configuration, it is set to use "virtual machine" keyboard when on full-screen. ![image](https://github.com/user-attachments/assets/3ed56360-584e-4271-8811-88bd27d402bc) `fastfetch` results from the VM: ``` PS C:\Users\windows1user1> fastfetch ///////////////// ///////////////// windows1user1@windows1 ///////////////// ///////////////// ---------------------- ///////////////// ///////////////// OS: Windows 11 (Pro) x86_64 ///////////////// ///////////////// Host: Virtual Machine (Hyper-V UEFI Release v4.1) ///////////////// ///////////////// Kernel: WIN32_NT 10.0.22631.4037 (23H2) ///////////////// ///////////////// Uptime: 16 mins ///////////////// ///////////////// Packages: 13 (scoop) ///////////////// ///////////////// Shell: Windows PowerShell Display (Default_Monitor): 3072x1920 @ 32 Hz (as 1536x960) ///////////////// ///////////////// DE: Fluent ///////////////// ///////////////// WM: Desktop Window Manager ///////////////// ///////////////// WM Theme: Custom - Mint dark (System: Dark, Apps: Dark) ///////////////// ///////////////// Icons: Recycle Bin ///////////////// ///////////////// Font: Segoe UI (12pt) [Caption / Menu / Message / Status] ///////////////// ///////////////// Cursor: Windows Default 
(32px) ///////////////// ///////////////// Terminal: Windows Terminal 1.21.2408.23001 ///////////////// ///////////////// Terminal Font: Cascadia Mono (12pt) CPU: AMD Ryzen 9 6900HS Creator Edition (8) @ 4.92 GHz GPU 1: Microsoft Hyper-V Video [Integrated] GPU 2: Microsoft Remote Display Adapter Memory: 4.38 GiB / 5.80 GiB (76%) Swap: 39.87 MiB / 1.38 GiB (3%) Disk (C:\): 34.87 GiB / 126.13 GiB (28%) - NTFS Disk (Y:\): 422.33 GiB / 953.06 GiB (44%) - NTFS [External] Local IP (Ethernet): 192.168.1.137/24 Battery: 65% [Discharging] Locale: en-US ```
Issue-Bug,Needs-Triage
low
Minor
2,501,879,647
pytorch
[BUG] Found a weird problem with nn.Conv2d in PyTorch 2.2.0
### 🐛 Describe the bug I got weird result of nn.conv2d in T4 GPU the conv2d result seems wrong in different batch size. I could understand different shape maybe run different kernel, but the diff seems too large for float16. BTW, in the attachment I found it seems run the same kernel so why the result have this large diffs before I run the script, I ran : export CUDA_VISIBLE_DEVICES=3 ## code ```python import torch import torch.nn as nn import nvtx loaded_conv_layer = nn.Conv2d(320, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) # Load the state_dict into the new Conv2d layer loaded_conv_layer.load_state_dict(torch.load("conv_model.pth")) # Move to CUDA if required loaded_conv_layer = loaded_conv_layer.to(torch.float16).to("cuda:0") conv_not_same = nvtx.start_range(message="conv_not_same", color="green") testa = torch.ones((4,320,48,48)).to(torch.float16).to("cuda:0") testb = torch.ones((2,320,48,48)).to(torch.float16).to("cuda:0") ra = loaded_conv_layer(testa) rb = loaded_conv_layer(testb) approx_equal = torch.allclose(rb, ra[0:2], rtol=1e-05, atol=1e-08) print(f"conv result not same", approx_equal) abs_diff = torch.abs(rb - ra[0:2]) max_diff = abs_diff.max() mean_diff = abs_diff.mean() print("Max Absolute Difference:", max_diff.item()) print("Mean Absolute Difference:", mean_diff.item()) nvtx.end_range(conv_not_same) print(" <============>") conv_nearly_same = nvtx.start_range(message="conv_nearly_same", color="red") testa = torch.ones((4,320,48,48)).to(torch.float16).to("cuda:0") testa.fill_(0.1) testb = torch.ones((2,320,48,48)).to(torch.float16).to("cuda:0") testb.fill_(0.1) ra = loaded_conv_layer(testa) rb = loaded_conv_layer(testb) approx_equal = torch.allclose(rb, ra[0:2], rtol=1e-05, atol=1e-08) print(f"conv result nearly same", approx_equal) abs_diff = torch.abs(rb - ra[0:2]) max_diff = abs_diff.max() mean_diff = abs_diff.mean() print("Max Absolute Difference:", max_diff.item()) print("Mean Absolute Difference:", mean_diff.item()) print(" 
<============>") nvtx.end_range(conv_nearly_same) conv_zero_same = nvtx.start_range(message="conv_zero_same", color="green") testa = torch.ones((4,320,48,48)).to(torch.float16).to("cuda:0") testa.fill_(0) testb = torch.ones((2,320,48,48)).to(torch.float16).to("cuda:0") testb.fill_(0) ra = loaded_conv_layer(testa) rb = loaded_conv_layer(testb) approx_equal = torch.allclose(rb, ra[0:2], rtol=1e-05, atol=1e-08) print(f"conv result zero same", approx_equal) abs_diff = torch.abs(rb - ra[0:2]) max_diff = abs_diff.max() mean_diff = abs_diff.mean() print("Max Absolute Difference:", max_diff.item()) print("Mean Absolute Difference:", mean_diff.item()) print(" <============>") nvtx.end_range(conv_zero_same) conv_same = nvtx.start_range(message="conv_same", color="blue") testa = torch.ones((4,320,16,16)).to(torch.float16).to("cuda:0") testb = torch.ones((2,320,16,16)).to(torch.float16).to("cuda:0") ra = loaded_conv_layer(testa) rb = loaded_conv_layer(testb) approx_equal = torch.allclose(rb, ra[0:2], rtol=1e-05, atol=1e-08) print(f"conv result same", approx_equal) abs_diff = torch.abs(rb - ra[0:2]) max_diff = abs_diff.max() mean_diff = abs_diff.mean() print("Max Absolute Difference:", max_diff.item()) print("Mean Absolute Difference:", mean_diff.item()) nvtx.end_range(conv_same) ``` ## result ``` conv result not same False Max Absolute Difference: 0.03125 Mean Absolute Difference: 1.0728836059570312e-06 <============> conv result nearly same False Max Absolute Difference: 0.00390625 Mean Absolute Difference: 1.6093254089355469e-06 <============> conv result zero same True Max Absolute Difference: 0.0 Mean Absolute Difference: 0.0 <============> conv result same True Max Absolute Difference: 0.0 Mean Absolute Difference: 0.0 ``` and here is the nsys result and the pth [nsys-and-pth.zip](https://github.com/user-attachments/files/16843292/nsys-and-pth.zip) ### Versions ## versions ```python pip list | grep torch efficientnet-pytorch 0.7.1 open-clip-torch 2.24.0 torch 2.2.0 
torchmetrics 1.4.0.post0 torchsde 0.2.6 torchvision 0.17.0 ``` ## hardware ```bash nvidia-smi Tue Sep 3 04:02:23 2024 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 12.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 On | 00000000:12:00.0 Off | 0 | | N/A 74C P0 31W / 70W | 11962MiB / 15360MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 Tesla T4 On | 00000000:13:00.0 Off | 0 | | N/A 56C P8 16W / 70W | 2MiB / 15360MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 Tesla T4 On | 00000000:37:00.0 Off | 0 | | N/A 60C P8 17W / 70W | 2MiB / 15360MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 Tesla T4 On | 00000000:86:00.0 Off | 0 | | N/A 58C P8 16W / 70W | 2MiB / 15360MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` cc @csarofeen @ptrblck @xwang233
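For intuition, the size of the reported diffs is consistent with float16 reduction-order effects alone. Below is a minimal NumPy sketch (an illustration only; it does not model the actual cuDNN kernels) showing how two reduction orders over the ~2880 products behind a single conv output element can disagree at roughly this magnitude in float16:

```python
import numpy as np

rng = np.random.default_rng(0)
# One output element of a Conv2d(320, 640, kernel_size=3) is a reduction
# over 320 * 3 * 3 = 2880 products.
terms = rng.standard_normal(2880).astype(np.float16)

def reduce_fp16(values):
    # Strictly sequential accumulation, rounding to float16 at every step.
    acc = np.float16(0)
    for v in values:
        acc = np.float16(acc + v)
    return acc

# Order 1: one long sequential reduction.
seq = reduce_fp16(terms)

# Order 2: tiled reduction (32-element partial sums, then a final pass),
# the way a differently-tiled kernel might accumulate.
partials = [reduce_fp16(chunk) for chunk in terms.reshape(-1, 32)]
tiled = reduce_fp16(np.array(partials, dtype=np.float16))

# Exact reference in float64 for comparison.
exact = terms.astype(np.float64).sum()
print(float(seq), float(tiled), abs(float(seq) - float(tiled)))
```

Both results are valid float16 reductions of the same data, yet they typically differ by a few ULPs at this magnitude, i.e. on the order of the 0.03 max diff reported above; an `atol=1e-08` tolerance in `torch.allclose` is far too strict for float16 convolutions.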
module: numerical-stability,module: cudnn,triaged
low
Critical
2,501,882,970
vscode
Deprecate API setNotebookMetadata in ipynb extension
This was created when the notebook API wasn't stable but was required for the Julia extension. Now that the API is stable, we can remove it once the Julia extension adopts the new API and stops using this old one. See here for more details: https://github.com/julia-vscode/julia-vscode/pull/2424
debt,notebook
low
Minor
2,501,885,978
pytorch
[RFC] A device-agnostic Python device memory related API design for stream-based accelerators
# Motivation This RFC aims to propose a design for a series of generic memory-related APIs tailored for stream-based accelerators, to help users simplify the runtime code written for different devices (including out-of-tree devices; a registration mechanism is provided for them). # Background Currently, PyTorch only provides device-specific memory-related APIs, usually used in other components or workloads, like torch/ao (https://github.com/pytorch/ao/pull/753/) and huggingface ([reference](https://github.com/huggingface/transformers/blob/fd3238b4b0e1af4fae4a293cbfd1251ead40cd29/src/transformers/trainer_utils.py#L532)). Users have to call these APIs like this: ```python if device == "cuda": torch.cuda.reset_peak_memory_stats() torch.cuda.empty_cache() max_memory_reserved = torch.cuda.max_memory_reserved() elif device == "xpu": torch.xpu.reset_peak_memory_stats() torch.xpu.empty_cache() max_memory_reserved = torch.xpu.max_memory_reserved() elif ... ``` PyTorch needs to provide a generic memory-related API design to help users write device-agnostic code. # Design Our design follows [[RFC] A device-agnostic Python runtime API design for stream-based accelerators](https://github.com/pytorch/pytorch/issues/128403) and generalizes `torch.xxx.foo` to `torch.acc.foo`. So we can simplify the above example: ```python torch.reset_peak_memory_stats(device_type) torch.empty_cache(device_type) max_memory_reserved = torch.max_memory_reserved(device_type) ``` We can give a list of the most used memory-related APIs.
<div align="center"> <table> <tr> <td> Device-specific memory APIs torch.xxx.foo</td> <td> Device-agnostic memory APIs torch.foo</td> </tr> <tr> <td> ```python torch.xxx.empty_cache ``` </td> <td> ```python torch.acc.empty_cache ``` </td> </tr> <tr> <td> ```python torch.xxx.reset_peak_memory_stats ``` </td> <td> ```python torch.acc.reset_peak_memory_stats ``` </td> </tr> <tr> <td> ```python torch.xxx.reset_accumulated_memory_stats ``` </td> <td> ```python torch.acc.reset_accumulated_memory_stats ``` </td> </tr> <tr> <td> ```python torch.xxx.memory_stats ``` </td> <td> ```python torch.acc.memory_stats ``` </td> </tr> <tr> <td> ```python torch.xxx.memory_allocated ``` </td> <td> ```python torch.acc.memory_allocated ``` </td> </tr> <tr> <td> ```python torch.xxx.max_memory_allocated ``` </td> <td> ```python torch.acc.max_memory_allocated ``` </td> </tr> <tr> <td> ```python torch.xxx.memory_reserved ``` </td> <td> ```python torch.acc.memory_reserved ``` </td> </tr> <tr> <td> ```python torch.xxx.max_memory_reserved ``` </td> <td> ```python torch.acc.max_memory_reserved ``` </td> </tr> </table> </div> **Our goal** is for torch.acc.foo to cover the most common runtime scenarios and usages, with **if**/**else** statements and torch.xxx.foo as an additional supplement in other situations. ### Additional context We will implement this design on top of [[RFC] Intel GPU Runtime Upstreaming for Allocator](https://github.com/pytorch/pytorch/issues/116322). After we unify a device allocator abstraction, given as `DeviceAllocator` (temp name), it will define a memory API interface like `emptyCache`, `resetPeakStats`, `resetAccumulatedStats`, and `getDeviceStats`. Each backend can then implement its own device allocator and leverage the allocator registration mechanism to register it. This is also a nice solution for out-of-tree backends. At that point, we can use `GetAllocator(device_type)->emptyCache()` as a unified API on the C++ side for each accelerator.
Finally, we implement the Python API `torch.acc.empty_cache` on top of the unified C++ API `GetAllocator(device_type)->emptyCache()`, as illustrated below. ![image](https://github.com/user-attachments/assets/104efcc3-f627-4756-bc1c-b24cf5a9d1e2) cc @albanD
triaged,module: python frontend
low
Minor
2,501,893,667
storybook
[Bug]: Storybook versions >= v8.2.0 no longer split the sb-manager into chunks
### Describe the bug After upgrading from 8.1.11 to 8.2 onwards, I noticed that sb-manager files no longer get split into chunks by webpackFinal's configuration: `config.optimization = { ...config.optimization, splitChunks: { chunks: 'all', maxInitialRequests: Infinity, minSize: 30 * 1024, maxSize: 500 * 1024, }, };` ### Reproduction link https://stackblitz.com/edit/github-xmmg5f?file=.storybook%2Fmain.ts ### Reproduction steps 1. Initiate a new project using react-webpack5 as the framework. 2. Run `npm run build-storybook` 3. Inside storybook-static/sb-manager, the js files are not split into chunks. 4. Expected them to be split, as in 8.1.11 ### System ```bash Storybook Environment Info: System: OS: macOS 14.6.1 CPU: (10) arm64 Apple M1 Pro Shell: 5.9 - /bin/zsh Binaries: Node: 18.18.2 - ~/.nvm/versions/node/v18.18.2/bin/node npm: 9.8.1 - ~/.nvm/versions/node/v18.18.2/bin/npm <----- active Browsers: Chrome: 128.0.6613.114 Safari: 17.6 npmPackages: @storybook/addon-essentials: ^8.2.9 => 8.2.9 @storybook/addon-interactions: ^8.2.9 => 8.2.9 @storybook/addon-links: ^8.2.9 => 8.2.9 @storybook/addon-styling-webpack: ^1.0.0 => 1.0.0 @storybook/addon-themes: ^8.2.9 => 8.2.9 @storybook/addon-webpack5-compiler-swc: ^1.0.5 => 1.0.5 @storybook/blocks: ^8.2.9 => 8.2.9 @storybook/react: ^8.2.9 => 8.2.9 @storybook/react-webpack5: ^8.2.9 => 8.2.9 @storybook/test: ^8.2.9 => 8.2.9 storybook: ^8.2.9 => 8.2.9 ``` ### Additional context _No response_
bug,sev:S3,upgrade:8.2
low
Critical
2,501,898,827
ollama
Unable to resolve cuda-drivers on RHEL 8.9
### What is the issue? On RHEL 8.9, the Ollama installation cannot finish due to an issue while installing cuda-drivers. The NVIDIA repository is successfully installed and I can see cuda-drivers listed there, but when triggering repolist only cuda-drivers-fabricmanager is listed. I am going crazy with this issue; I have Ollama installed on other similar instances, but since the latest update it seems I cannot install it for some reason. Hoping anyone here has had the same issue and can guide me to fix it. `>>> Installing CUDA driver... Updating Subscription Management repositories. Unable to read consumer identity Last metadata expiration check: 0:00:06 ago on Tue 03 Sep 2024 08:27:42 AM +04. All matches were filtered out by modular filtering for argument: cuda-drivers Error: Unable to find a match: cuda-drivers` Thanks in advance. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.7
bug
low
Critical
2,501,925,688
kubernetes
ReplicaSet does not update the latest failure condition in its status
### What happened?

If I have 2 admissions to check before a Pod can be created, the 2 admission checks run one after the other (not in parallel), with 2 different error messages. I'll use the admission check below for Pod creation as an example to reproduce the bug:

```go
if currentTime.Minute() < 40 {
	admissionResponse.Allowed = false
	admissionResponse.Result = &metav1.Status{
		Reason:  metav1.StatusReasonForbidden,
		Message: "Pod creation is not allowed during the first 40 minutes of the hour.",
	}
} else {
	admissionResponse.Allowed = false
	admissionResponse.Result = &metav1.Status{
		Reason:  metav1.StatusReasonForbidden,
		Message: "Pod creation is not allowed during the second 20 minutes of the hour.",
	}
}
```

Which means that in the first 40 minutes of every hour the error message will be "Pod creation is not allowed during the first 40 minutes of the hour.", while in the remaining 20 minutes of every hour the error message will be "Pod creation is not allowed during the second 20 minutes of the hour."

If I create a deployment now, in the remaining 20 minutes (xx:40 - xx:59), the ReplicaSet will be:

Describe result:

```
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ----                ----                   -------
  Warning  FailedCreate  40m (x19 over 95m)  replicaset-controller  Error creating: admission webhook "pod-blocker-service.pod-blocker.com" denied the request: Pod creation is not allowed during the first 40 minutes of the hour.
  Warning  FailedCreate  7m2s (x4 over 84m)  replicaset-controller  Error creating: admission webhook "pod-blocker-service.pod-blocker.com" denied the request: Pod creation is not allowed during the second 20 minutes of the hour.
```

Status:

```yaml
status:
  conditions:
  - lastTransitionTime: "2024-09-03T02:31:25Z"
    message: 'admission webhook "pod-blocker-service.pod-blocker.com" denied the
      request: Pod creation is not allowed during the first 40 minutes of the hour.'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  replicas: 0
```

Apparently, the second error message "Pod creation is not allowed during the second 20 minutes of the hour." is not updated into the status condition.

### What did you expect to happen?

The status shows the up-to-date error condition:

```yaml
status:
  conditions:
  - lastTransitionTime: "2024-09-03T02:31:25Z"
    message: 'admission webhook "pod-blocker-service.pod-blocker.com" denied the
      request: Pod creation is not allowed during the second 20 minutes of the hour.'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  replicas: 0
```

### How can we reproduce it (as minimally and precisely as possible)?

Just create 2 admissions that block Pod creation one after the other, with different error messages. After a period of time, let the first admission pass and the second admission continue to block. This should show the error message from the second admission in the ReplicaSet status condition.

### Anything else we need to know?

It appears this block of code introduced the above behavior:

https://github.com/kubernetes/kubernetes/blob/e5bafe2bed13fe72e88a3597930bdfbab5267e9d/pkg/controller/replicaset/replica_set_utils.go#L110

It only sets the condition when the existing failure condition is `nil`; if a failure condition is already present, it never updates it with the latest message.

### Kubernetes version

<details>

```console
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.
Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:21:56Z", GoVersion:"go1.18.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-22T05:28:27Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"linux/arm64"}
```

But from the code, the issue appears to still be present in the latest version.

</details>

### Cloud provider

<details>
Reproduced on KinD
</details>

### OS version

<details>

```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```

</details>

### Install tools

<details>
</details>

### Container runtime (CRI) and version (if applicable)

<details>
</details>

### Related plugins (CNI, CSI, ...) and versions (if applicable)

<details>
</details>
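The linked `replica_set_utils.go` logic only appends a failure condition when no condition of that type exists yet. A hypothetical reconciliation — sketched here in Python rather than Go, purely to illustrate the behavior the reporter expects, not the actual controller code — would also replace an existing condition whose status or message has changed:

```python
def set_condition(conditions, new):
    """Insert `new` into a condition list, replacing any entry of the same
    type whose status or message differs (illustrative sketch only)."""
    for i, existing in enumerate(conditions):
        if existing["type"] == new["type"]:
            if (existing["status"], existing["message"]) != (new["status"], new["message"]):
                conditions[i] = new  # refresh the stale message in place
            return conditions
    conditions.append(new)  # no condition of this type yet
    return conditions


conds = [{"type": "ReplicaFailure", "status": "True",
          "message": "denied: first 40 minutes"}]
set_condition(conds, {"type": "ReplicaFailure", "status": "True",
                      "message": "denied: second 20 minutes"})
print(conds[0]["message"])  # → denied: second 20 minutes
```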
kind/bug,priority/awaiting-more-evidence,sig/apps,needs-triage
low
Critical
2,502,028,334
pytorch
[Intel GPU] Symbol conflict between libm.lib and ucrt.lib for Intel GPU on Windows
### 🐛 Describe the bug

Currently, building Intel GPU support for Windows requires sourcing the Intel GPU development bundle for Windows to set up the development environment for PyTorch. The bundle contains `libm.lib`, which supports C99 and contains math functions, and this lib is added to the `Lib` path, where it conflicts with the Windows-native `ucrt.lib`. The error message may be as follows.

```cmake
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: ldexp already defined in libm.lib(ldexp_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: copysign already defined in libm.lib(copysign_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: copysignf already defined in libm.lib(copysignf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: expf already defined in libm.lib(expf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: tanhf already defined in libm.lib(tanhf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: sinf already defined in libm.lib(sinf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: logf already defined in libm.lib(logf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: sqrtf already defined in libm.lib(sqrtf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: cosf already defined in libm.lib(cosf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: floorf already defined in libm.lib(floorf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: ceil already defined in libm.lib(ceil_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: ceilf already defined in libm.lib(ceilf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: floor already defined in libm.lib(floor_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: log2f already defined in libm.lib(log2f_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: truncf already defined in libm.lib(truncf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: acosf already defined in libm.lib(acosf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: asinf already defined in libm.lib(asinf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: atan2f already defined in libm.lib(atan2f_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: atanf already defined in libm.lib(atanf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: coshf already defined in libm.lib(coshf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: log10f already defined in libm.lib(log10f_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: powf already defined in libm.lib(powf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: sinhf already defined in libm.lib(sinhf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: tanf already defined in libm.lib(tanf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: fmodf already defined in libm.lib(fmodf_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: round already defined in libm.lib(round_iface_c99.obj)
ucrt.lib(api-ms-win-crt-math-l1-1-0.dll) : error LNK2005: roundf already defined in libm.lib(roundf_iface_c99.obj)
   Creating library lib\torch_cpu.lib and object lib\torch_cpu.exp
bin\torch_cpu.dll : fatal error LNK1169: one or more multiply defined symbols found
ninja: build stopped: subcommand failed.
```

### Versions

Collecting environment information...
PyTorch version: 2.5.0a0+git3f3774a Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Enterprise GCC version: Could not collect Clang version: Could not collect CMake version: version 3.24.1 Libc version: N/A Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:29:51) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22631-SP0 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=2400 DeviceID=CPU0 Family=207 L2CacheSize=14336 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=2400 Name=12th Gen Intel(R) Core(TM) i9-12900 ProcessorType=3 Revision= Versions of relevant libraries: [pip3] numpy==2.1.0 [pip3] optree==0.12.1 [pip3] torch==2.5.0a0+git3f3774a [conda] mkl-include 2024.2.1 pypi_0 pypi [conda] mkl-static 2024.2.1 pypi_0 pypi [conda] numpy 2.1.0 pypi_0 pypi [conda] optree 0.12.1 pypi_0 pypi [conda] torch 2.5.0a0+git3f3774a pypi_0 pypi cc @gujinghui @fengyuan14 @guangyey
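The LNK2005 failures above are classic duplicate-symbol errors: the same C99 math symbols are defined in both `libm.lib` and `ucrt.lib`, and the linker refuses to pick one. As a toy illustration of why the linker objects — a stdlib sketch over made-up symbol tables, nothing to do with MSVC's actual link step — a conflict check might look like:

```python
def find_symbol_conflicts(libraries):
    """Return (symbol, first_lib, second_lib) triples for symbols defined
    in more than one library (toy model of an LNK2005-style check)."""
    first_definition = {}
    conflicts = []
    for lib, symbols in libraries.items():
        for sym in symbols:
            if sym in first_definition:
                conflicts.append((sym, first_definition[sym], lib))
            else:
                first_definition[sym] = lib
    return conflicts


# Hypothetical symbol tables for the two libraries in the report
libs = {
    "libm.lib": {"ldexp", "copysign", "expf"},
    "ucrt.lib": {"ldexp", "copysign", "expf", "printf"},
}
for sym, a, b in sorted(find_symbol_conflicts(libs)):
    print(f"{sym} already defined in {a} and {b}")
```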
triaged,module: xpu
low
Critical
2,502,047,544
pytorch
DISABLED test_serialization_array_with_storage (__main__.TestCuda)
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialization_array_with_storage&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29586769088). Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_serialization_array_with_storage` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1159, in originate_pairs return [pair_type(actual, expected, id=id, **options)] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2677, in __init__ super().__init__(actual, expected, **other_parameters) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 649, in __init__ actual, expected = self._process_inputs( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 675, in _process_inputs actual, expected = (self._to_tensor(input) for input in (actual, expected)) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 675, in <genexpr> actual, expected = (self._to_tensor(input) for input in (actual, expected)) File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2682, in _to_tensor return torch.tensor( ValueError: could not determine the shape of object type 'torch.storage.UntypedStorage' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_cuda.py", line 392, in test_serialization_array_with_storage self.assertEqual(q_copy, q, atol=0, rtol=0) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3846, in assertEqual error_metas = not_close_error_metas( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1217, in not_close_error_metas pairs = originate_pairs( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1108, in originate_pairs originate_pairs( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1172, in originate_pairs f"Originating a {pair_type.__name__}() at item {''.join(str([item]) for item in id)} with\n\n" File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/storage.py", line 1097, in __str__ data_str = " " + "\n ".join(str(self[i]) for i in range(self.size())) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/storage.py", line 1097, in <genexpr> data_str = " " + "\n ".join(str(self[i]) for i in range(self.size())) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/storage.py", line 956, in __getitem__ return self._getitem(idx) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/storage.py", line 1000, in _getitem return tmp_tensor[idx_wrapped].item() RuntimeError: Host and device pointer dont match with cudaHostRegister. 
Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default) To execute this test, run the following from the base repo dir: python test/test_cuda.py TestCuda.test_serialization_array_with_storage This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_cuda.py` cc @ptrblck @msaroufim @clee2000
module: cuda,triaged,module: flaky-tests,skipped
low
Critical
2,502,092,779
flutter
consider changing the behavior of `CarouselView.padding`
## Summary

- The properties within `CarouselView` are confusing because they do not include the word "item," even though they are related to individual items.
  - For example, `backgroundColor` and `shape` are applied to each item.
- The behavior of `CarouselView.padding` differs from `ListView.padding`.
  - While `ListView.padding` applies to the entire `ListView`, `CarouselView.padding` applies to the padding of each individual item in the `CarouselView`, which disrupts consistency across Flutter.

### Proposal

- Would it be better to rename these properties to `CarouselView.itemBackgroundColor` and `CarouselView.itemShape` for clarity?
- Also, consider changing the behavior of `CarouselView.padding` to match that of `ListView`, where it would apply to the padding of the entire viewable area.
c: new feature,framework,f: material design,d: api docs,c: proposal,P3,team-design,triaged-design
low
Minor
2,502,100,458
pytorch
[bug] import torch failed after install the cpu version of pytorch
### 🐛 Describe the bug

```shell
# install CPU version
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cpu

# verify torch
python -c 'import torch; print(torch.cuda.is_available())'
```

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/bhlin6nctu/miniconda3/envs/xfaster/lib/python3.10/site-packages/torch/__init__.py", line 1480, in <module>
    _C._initExtension(manager_path())
  File "/home/bhlin6nctu/miniconda3/envs/xfaster/lib/python3.10/site-packages/torch/cuda/__init__.py", line 91, in <module>
    has_magma: bool = torch._C._has_magma
AttributeError: module 'torch._C' has no attribute '_has_magma'
```

### Versions

Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.7
Libc version: glibc-2.35

Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB

Nvidia driver version: 470.161.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version:
N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 36 On-line CPU(s) list: 0-35 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz CPU family: 6 Model: 85 Thread(s) per core: 1 Core(s) per socket: 18 Socket(s): 2 Stepping: 4 BogoMIPS: 6000.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke md_clear spec_ctrl intel_stibp flush_l1d Virtualization: VT-x L1d cache: 1.1 MiB (36 instances) L1i cache: 1.1 MiB (36 instances) L2 cache: 36 MiB (36 instances) L3 cache: 49.5 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-17 NUMA node1 CPU(s): 18-35 Versions of relevant libraries: [pip3] numpy==1.26.3 [pip3] torch==2.3.1+cpu [pip3] torchaudio==2.3.1+cpu [pip3] torchvision==0.18.1+cpu [pip3] triton==2.3.0 [conda] numpy 1.26.3 pypi_0 pypi [conda] torch 2.3.1+cpu pypi_0 pypi [conda] torchaudio 2.3.1+cpu pypi_0 pypi [conda] torchvision 0.18.1+cpu pypi_0 pypi [conda] triton 2.3.0 pypi_0 pypi cc @seemethere @malfet @osalpekar @atalman
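The traceback above shows `torch/cuda/__init__.py` reading `torch._C._has_magma` unconditionally, which raises `AttributeError` when the CPU-only extension module does not expose the attribute. A generic defensive pattern for this class of failure — a hedged sketch of the idea only, not the actual fix PyTorch uses — is to fall back through `getattr`:

```python
from types import SimpleNamespace


def has_magma(torch_c):
    """Report MAGMA support, defaulting to False when the extension module
    does not expose the attribute (e.g. a CPU-only build)."""
    return bool(getattr(torch_c, "_has_magma", False))


cpu_build = SimpleNamespace()                  # no _has_magma attribute at all
cuda_build = SimpleNamespace(_has_magma=True)  # stand-in for a CUDA build
print(has_magma(cpu_build))   # → False
print(has_magma(cuda_build))  # → True
```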
needs reproduction,module: binaries,triaged
low
Critical
2,502,135,660
pytorch
Error "attn_bias is not correctly aligned" When Offloading torch.nn.functional.scaled_dot_product_attention Using torch.autograd.graph.save_on_cpu()
### 🐛 Describe the bug I'm encountering an issue while training a Llama2 model when I use torch.autograd.graph.save_on_cpu() to offload activation tensors to CPU memory. The problem arises specifically when I attempt to offload the attention layer. The implementation of self_attn is based on the LlamaSdpaAttention class from the transformers library. I have pinpointed the issue to the torch.nn.functional.scaled_dot_product_attention operator. If I offload this operation as follows: ```python with torch.autograd.graph.save_on_cpu(): attn_output = torch.nn.functional.scaled_dot_product_attention( query_states, key_states, value_states, attn_mask=causal_mask, dropout_p=self.attention_dropout if self.training else 0.0, is_causal=is_causal, ) ``` I get the following error: ``` File "/home/zbtrs/miniconda3/envs/sglang/lib/python3.10/site-packages/torch/_tensor.py", line 525, in backward torch.autograd.backward( File "/home/zbtrs/miniconda3/envs/sglang/lib/python3.10/site-packages/torch/autograd/__init__.py", line 267, in backward _engine_run_backward( File "/home/zbtrs/miniconda3/envs/sglang/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: attn_bias is not correctly aligned (strideH) .attn_bias.stride(1) = 361, and should be a multiple of 4. ``` However, if I offload other parts of the LlamaSdpaAttention class without offloading this specific operation, the issue does not occur. What could be causing this misalignment error when offloading the attention layer, specifically with torch.nn.functional.scaled_dot_product_attention, and how can I resolve it? ### Versions Collecting environment information... 
PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.30.1 Libc version: glibc-2.31 Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.4.0-187-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB GPU 1: NVIDIA A100-SXM4-80GB GPU 2: NVIDIA A100-SXM4-80GB GPU 3: NVIDIA A100-SXM4-80GB GPU 4: NVIDIA A100-SXM4-80GB GPU 5: NVIDIA A100-SXM4-80GB GPU 6: NVIDIA A100-SXM4-80GB GPU 7: NVIDIA A100-SXM4-80GB Nvidia driver version: 535.183.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz Stepping: 6 Frequency boost: enabled CPU MHz: 900.000 CPU max MHz: 3400.0000 CPU min MHz: 800.0000 BogoMIPS: 5200.00 Virtualization: VT-x L1d cache: 3 MiB L1i cache: 2 MiB L2 cache: 80 MiB L3 cache: 96 MiB NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-127 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: 
Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities Versions of relevant libraries: [pip3] flashinfer==0.1.1+cu121torch2.3 [pip3] numpy==1.26.4 [pip3] torch==2.3.0 [pip3] triton==2.3.0 [conda] flashinfer 0.1.1+cu121torch2.3 pypi_0 pypi [conda] numpy 1.26.4 pypi_0 pypi [conda] torch 2.3.0 pypi_0 pypi [conda] triton 2.3.0 pypi_0 pypi cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki
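The backward error above reports `attn_bias.stride(1) = 361`, which must be a multiple of 4. A commonly suggested workaround for this class of SDPA error is to materialize the attention mask with a padded last dimension so its strides land on the required multiple; the padding arithmetic — a stdlib sketch, with the alignment requirement taken from the error message, not from any documented API — is:

```python
def pad_to_multiple(n, multiple=4):
    """Smallest amount to add to n so the result is a multiple of `multiple`."""
    return (-n) % multiple


stride = 361                   # strideH from the error message
pad = pad_to_multiple(stride)  # 3
print(stride + pad)            # → 364, now a multiple of 4
```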
triaged,module: sdpa
low
Critical
2,502,144,522
vscode
TreeError [SettingsTree] Tree element not found
Steps: - fresh user data dir - start up (out of sources) - open settings editor - close welcome - quit - start again: ``` ERR color: #f33 TreeError [SettingsTree] Tree element not found: [object Object]: Error: TreeError [SettingsTree] Tree element not found: [object Object] at NonCollapsibleObjectTreeModel.getElementLocation (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/browser/ui/tree/objectTreeModel.js:218:19) at NonCollapsibleObjectTreeModel.rerender (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/browser/ui/tree/objectTreeModel.js:121:31) at SettingsTree.rerender (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/browser/ui/tree/objectTree.js:31:20) at SettingsEditor2.refreshSingleElement (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/contrib/preferences/browser/settingsEditor2.js:1257:31) at SettingsEditor2.renderTree (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/contrib/preferences/browser/settingsEditor2.js:1240:22) at vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/contrib/preferences/browser/settingsEditor2.js:1187:38 at Set.forEach (<anonymous>) at SettingsEditor2.updateElementsByKey (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/contrib/preferences/browser/settingsEditor2.js:1187:18) at UniqueContainer.value (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/contrib/preferences/browser/settingsEditor2.js:215:22) at Emitter._deliver (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:964:22) at Emitter.fire (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:993:18) at WorkspaceService.updateRestrictedSettings 
(vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/configuration/browser/configurationService.js:738:49) at WorkspaceService.onDefaultConfigurationChanged (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/configuration/browser/configurationService.js:630:18) at UniqueContainer.value (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/configuration/browser/configurationService.js:116:110) at Emitter._deliver (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:964:22) at Emitter.fire (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:993:18) at DefaultConfiguration.onDidUpdateConfiguration (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/platform/configuration/common/configurations.js:46:40) at DefaultConfiguration.onDidUpdateConfiguration (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/configuration/browser/configuration.js:68:15) at UniqueContainer.value (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/platform/configuration/common/configurations.js:37:131) at Emitter._deliver (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:964:22) at Emitter._deliverQueue (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:975:18) at Emitter.fire (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/base/common/event.js:998:18) at ConfigurationRegistry.deltaConfiguration (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/platform/configuration/common/configurationRegistry.js:306:40) at ExtensionPoint._handler 
(vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/api/common/configurationExtensionPoint.js:265:27) at ExtensionPoint._handle (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/extensions/common/extensionsRegistry.js:96:18) at ExtensionPoint.acceptUsers (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/extensions/common/extensionsRegistry.js:89:14) at AbstractExtensionService._handleExtensionPoint (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/extensions/common/abstractExtensionService.js:936:24) at NativeExtensionService._doHandleExtensionPoints (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/extensions/common/abstractExtensionService.js:884:44) at NativeExtensionService._processExtensions (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/extensions/common/abstractExtensionService.js:425:14) at NativeExtensionService._initialize (vscode-file://vscode-app/Users/bpasero/Development/Microsoft/vscode/out/vs/workbench/services/extensions/common/abstractExtensionService.js:381:18) ```
bug,settings-editor,confirmation-pending
low
Critical
2,502,151,419
pytorch
Setting CPU affinity doesn't work for torch.softmax when using a thread
### 🐛 Describe the bug

This bug is about CPU affinity. When I use torch.softmax in a thread of a child process, other CPUs are used even though I set the CPU affinity. But if I comment out torch.softmax in the function, only the CPUs that I set in the affinity are used. I think it is a bug in torch.softmax. I tested torch.sigmoid and torch.tanh, but they don't cause this situation.

```python
import os

os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import time
import psutil
import torch
import torch.multiprocessing as mp
from torch import nn
from threading import Thread


def f(model):
    while True:
        x = torch.randn(2, 256)
        with torch.no_grad():
            y = model(x)
        p = torch.softmax(y, dim=-1)  # This line is the source of the problem


def run_worker(cpu_id):
    p = psutil.Process()
    p.cpu_affinity([cpu_id])
    print(f"Process {p.pid} is running on CPU {cpu_id}")
    model = nn.Linear(256, 256)
    th1 = Thread(target=f, args=(model,))
    th1.start()
    while True:
        time.sleep(1e-2)


if __name__ == "__main__":
    mp.set_start_method("spawn")
    torch.set_num_threads(1)
    procs = list()
    for i in range(2):
        p = mp.Process(target=run_worker, args=(i,))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
```

Without softmax:
![without_softmax](https://github.com/user-attachments/assets/26ae7a6a-3ff0-456d-ba1c-2e800caf1d35)

With softmax:
![with_softmax](https://github.com/user-attachments/assets/7599d958-02ad-43d2-9d9a-b7f995e2ebca)

If I change the softmax as below, it resolves the problem.

```python
# p = torch.softmax(y, dim=-1)
y = torch.exp(y)
p = y / y.sum(dim=-1, keepdim=True)
```

But still, sampling with torch.distributions.Categorical(p).sample() causes the same problem again.
```python
# p = torch.softmax(y, dim=-1)
y = torch.exp(y)
p = y / y.sum(dim=-1, keepdim=True)
a = torch.distributions.Categorical(p).sample()
```

### Versions

[pip3] numpy==2.1.0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 2.1.0 py311h71ddf71_1 conda-forge
[conda] pytorch 2.4.0 py3.11_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.4.0 py311_cpu pytorch
[conda] torchvision 0.19.0 py311_cpu pytorch

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
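The reporter's workaround replaces `torch.softmax` with a manual exp-and-normalize. The same computation in plain stdlib Python — shown with the usual max-subtraction for numerical stability; this only illustrates the math, and says nothing about the thread-affinity behavior — is:

```python
import math


def softmax(xs):
    """Numerically stable softmax over a 1-D list of floats."""
    m = max(xs)                           # subtract the max to avoid overflow
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


print(softmax([0.0, 0.0]))  # → [0.5, 0.5]
print(sum(softmax([1.0, 2.0, 3.0])))  # sums to 1 up to float rounding
```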
module: cpu,triaged
low
Critical
2,502,152,845
godot
Tooltips disappear when mouse leaves initiating Element with Single Window Mode
### Tested versions - Reproducible in: v4.3.stable ### System information Linux - Artix (Arch) - Moksha (Enlightenment) - Nvidia - Vulkan ### Issue description On Linux, enabling Single Window Mode in the editor settings makes focusing and scrolling tooltips impossible. Curiously, in #57425 the issue is solved just by enabling this option, but in my case it is the opposite. ### Steps to reproduce 1. Open the editor 2. Enable Single Window Mode in Editor Settings 3. Hover the mouse to display a tooltip on any property in the inspector 4. Try to move/scroll the mouse on the displayed tooltip ### Minimal reproduction project (MRP) Not applicable
bug,topic:gui
low
Minor
2,502,235,482
godot
Empty line above GDScript syntax error line wrongly gets highlighted in red too
### Tested versions v4.4.dev1.official [28a72fa43] ### System information w10 64 ### Issue description Notice the red error line: it doubles up when I create an empty line above it. https://github.com/user-attachments/assets/5da25e7c-edd9-4164-b6ec-2c6e0a4e25fb ### Steps to reproduce See the video ### Minimal reproduction project (MRP) ...
bug,topic:gdscript,topic:editor
low
Critical
2,502,243,058
flutter
Figure out a way to inform developers that a11y strings require localization
### Steps to reproduce 1. Enable TalkBack/VoiceOver on your phone 2. Set the system language to Italian (probably any language other than English is fine) 3. Run the sample code below and put the screen reader focus on an item of the bottom bar ### Expected results The screen reader should read the label and tell the user the number of the selected tab out of the total tabs present (i.e. "tab 1 of 3" in the sample code provided) in the correct language, Italian in my case. ### Actual results The screen reader always reads "tab n of m" in English even though the locale is set to it-IT and the phone system language is Italian. ### Code sample <details open><summary>Code sample</summary> ```dart import 'package:flutter/material.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', locale: const Locale('it', 'IT'), theme: ThemeData( colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple), useMaterial3: true, ), home: const MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatefulWidget { const MyHomePage({super.key, required this.title}); final String title; @override State<MyHomePage> createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { int _counter = 0; void _incrementCounter() { setState(() { _counter++; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( backgroundColor: Theme.of(context).colorScheme.inversePrimary, title: Text(widget.title), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ const Text('You have pushed the button this many times:'), Text( '$_counter', style: Theme.of(context).textTheme.headlineMedium, ), ], ), ), floatingActionButton: FloatingActionButton( onPressed: _incrementCounter, tooltip: 'Increment', child: const Icon(Icons.add), ),
bottomNavigationBar: BottomNavigationBar( items: const [ BottomNavigationBarItem(label: 'ciao', icon: Icon(Icons.abc)), BottomNavigationBarItem(label: 'ciao', icon: Icon(Icons.ac_unit)), BottomNavigationBarItem(label: 'ciao', icon: Icon(Icons.access_alarm)), ], ), ); } } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console Doctor summary (to see all details, run flutter doctor -v): [√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.26120.1542], locale it-IT) [√] Windows Version (Installed version of Windows is version 10 or higher) [!] Android toolchain - develop for Android devices (Android SDK version 34.0.0) X cmdline-tools component is missing Run `path/to/sdkmanager --install "cmdline-tools;latest"` See https://developer.android.com/studio/command-line for more details. X Android license status unknown. Run `flutter doctor --android-licenses` to accept the SDK licenses. See https://flutter.dev/to/windows-android-setup for more details. [√] Chrome - develop for the web [√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.1) [√] Android Studio (version 2024.1) [√] IntelliJ IDEA Community Edition (version 2024.2) [√] VS Code (version 1.92.2) [√] Connected device (3 available) [√] Network resources ! Doctor found issues in 1 category. ``` </details>
framework,f: material design,a: accessibility,a: internationalization,has reproducible steps,P2,d: docs/,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.25
low
Major
2,502,289,932
puppeteer
[Feature]: support custom arguments in the launch command
### Steps to reproduce ``` npx @puppeteer/browsers launch chrome-headless-shell@stable -- --remote-debugging-port=9222 --headless --no-sandbox ``` This always exits immediately with exit code 130 (i.e. terminated by SIGINT?), while the same command works for normal chrome (testing on mac arm). ### Expected results Headless chrome should run with remote debugging active ### Actual results The process exits immediately
feature,@puppeteer/browsers,P3
low
Critical
2,502,298,967
storybook
[Tracking] Reduce install footprint 🐾
## 🧑‍🤝‍🧑 Who: @JReinhold and @ndelangen This is a tracking issue for the Reduce install footprint 🐾 project. The purpose of this issue is to keep track of the overall status of the project and tasks, and plan everything around it. See also #29072 which is a spike we'll be doing as part of this project to _investigate_ how to move Storybook towards publishing ESM-only packages. # 📢 Want to help? Make sure you read this issue thoroughly to understand what we want to achieve and how. **Do not ask to be assigned to certain tasks**, just do them. We won't "reserve" anything for potential contributors; that system rarely works. Help out with what you want, and report findings in this issue, eg. if you've identified potential optimisations in a package. If you open pull requests with your work, make sure to reference this issue and tag @JReinhold. # ⚠️ Problem One of the biggest complaints about Storybook is that it's a big and heavy dependency to add to your project. There are multiple ways to interpret the frustration, but one of the most impactful improvements we can make is to reduce the install size of the core Storybook experience. Throughout the current Storybook 8 releases we've already managed to cut the size significantly, eg. via #27039 and #28519. But there is plenty more work to be done - especially outside the core package - that will enable us to shrink Storybook even further. # 🏁 Goals The high-level goals of this project are to: 1. Reduce the _install size_ of "the core Storybook experience". That is, the packages that are included in a normal `init` process, such as `storybook`, the builders (Vite/Webpack), the renderers (React, Svelte, etc.) and the _default_ addons. 2. ... this includes reducing the dependency graph of the mentioned packages, to speed up installation. 3. Set long-term baselines and goals that will keep us in check, ensuring we don't regress the install-size later. 4. Build and set up tooling to support these goals.
We'll try to find ways that make it easier to maintain lean packages and reduce their footprint. This includes easy-to-access bundle and package analysis tools and reporters. 5. Identify changes that require a major version bump. This project will conclude before the next major version of Storybook, so we can't make those changes now, but we can prepare them so they are ready when we're closer to the next major release. This means we are not focusing on _performance improvements_ nor reducing the size of a _built_ Storybook. We still love those improvements, but they are not the major focus here. # 📚 Resources - [e18e.dev](e18e.dev) - provides great guides on how to cleanup, speedup and levelup (?😅) packages. - https://github.com/styfle/packagephobia#how-is-this-different great list of different tools in this space that all achieve different things - https://github.com/TheDevMinerTV/package-size-calculator - https://github.com/tinylibs/ - https://github.com/unjs/ - https://github.com/es-tooling/module-replacements/blob/main/docs/modules/README.md ## [📊 Spreadsheet with package stats](https://docs.google.com/spreadsheets/d/1FHGgXJWHCMVYkmf6-qG0zzX6Owy26fQleiy-OnhXxww/edit?usp=sharing) # 🔬 Methodology Storybook is a complex mesh of over 30 packages, therefore analysing "Storybook's" install size will be tackled from two angles: 1. **E2E tests aka sandboxes**: [Storybook sandboxes](https://github.com/storybookjs/sandboxes) represent real-world, minimal projects with Storybook installed. Storybook supports countless project configurations, e.g. a NextJS project, or a Vite-based Vue project. By initialising Storybook atop these minimal projects, we get a sense of what impact Storybook has on the overall `node_modules` size and the amount of dependencies added to the project. The upside of this perspective is that it directly focuses on the experience our users get.
The downside of this approach - as with most E2E tests - is that it can be "flaky" because a lot of outside factors can impact the project's install size and dependency count. It's also a coarse metric: detecting significant size increases in sandboxes doesn't really help us identify where the increase is coming from. This brings us to... 2. **Unit tests aka packages**: Each individual Storybook package will have its own metric on install size and dependency graph. Measuring and detecting size increases at the package level makes it a lot easier to pinpoint the cause. But laser-focusing on packages in isolation can lead to work that makes little difference to the end-user, because they don't use the Storybook packages in isolation. Eg. removing a big dependency from `@storybook/react-vite` doesn't help if that dependency is still installed by `@storybook/core`. Therefore we must keep both metrics in mind, to ensure we improve the experience for the end-user while still getting actionable results on the lower level. We'll use a breadth-first approach when doing the optimisations across the packages. At the start of the project, all the integration packages have a single task (which also applies to the core): _Identify potential optimisations_. We'll do that first across all the packages at a high level, to identify common patterns and find big wins first. This ensures that we don't laser-focus on eg. removing 15 KB from `@storybook/react` when we could instead have removed 12 MB from `@storybook/vue3-vite`. With that said, there are already known big wins in the core packages that might get a higher priority than looking into some of the integration packages. There are many approaches and tools for optimising a package's size and dependencies, and it's not one-size-fits-all. https://e18e.dev/guide/cleanup.html is a good primer on how to approach it. At a high level we'll: 1.
Use https://pkg-size.dev to identify and monitor the size of the packages and their dependencies 2. Use https://npmgraph.js.org to identify and monitor the packages' dependency graph 3. Use ESBuild's metafiles that are produced when building all the packages, to identify and monitor the packages' internal structure and bundled dependencies. We can take `@storybook/core` as an example: 1. https://pkg-size.dev/@storybook/core@8.3.0-beta.4 shows that there are two major esbuild dependencies significantly contributing to the size. Can we replace them? See #29082 2. https://npmgraph.js.org/?q=@storybook/core@8.3.0-beta.4#select=exact%3Aexpress%404.20.0 shows that [express](https://expressjs.com/) (among others) contributes to a big part of the dependency graph. Can we replace it? See #29083 3. ESBuild's metafiles for `@storybook/core` (not currently public) show that [Prettier](https://prettier.io) makes up a big part of the bundled output. Can we replace it, or remove it? See #29084 # 🚩 Milestones ## 📈 Baselines and Benchmarks **See [spreadsheet detailing all the packages](https://docs.google.com/spreadsheets/d/1FHGgXJWHCMVYkmf6-qG0zzX6Owy26fQleiy-OnhXxww/edit?usp=sharing).** ```[tasklist] ### Tasks - [ ] https://github.com/storybookjs/storybook/issues/29099 - [ ] https://github.com/storybookjs/storybook/issues/29100 ``` ## 🔧 Optimisations This section includes all the actual package optimisations that we want to make. The list is highly dynamic and should change a lot during the project.
```[tasklist] ### Architectural changes - [ ] https://github.com/storybookjs/storybook/issues/29084 - [ ] https://github.com/storybookjs/storybook/issues/29166 - [ ] https://github.com/storybookjs/storybook/issues/29164 - [ ] ~Re-establish CJS auto-deduplication with esbuild using [cjs-splitting.ts from tsup](https://github.com/egoist/tsup/blob/796fc5030f68f929fecde7c94732e9a586ba7508/src/plugins/cjs-splitting.ts)~ - [ ] https://github.com/storybookjs/storybook/issues/29217 ``` <details> <summary><h3>📦 Core Packages</h3></summary> ```[tasklist] ### `@storybook/core` Optimizations - [x] Identify potential optimisations - [ ] https://github.com/storybookjs/storybook/issues/29082 - [ ] https://github.com/storybookjs/storybook/issues/29083 - [ ] #28262 - [ ] #28315 - [ ] #28611 - [ ] #28981 - [ ] #28663 - [ ] https://github.com/storybookjs/storybook/issues/29167 - [ ] https://github.com/storybookjs/storybook/issues/29252 - [ ] https://github.com/storybookjs/storybook/issues/29168 - [ ] https://github.com/storybookjs/storybook/issues/29104 - [ ] https://github.com/storybookjs/storybook/pull/29126 - [ ] https://github.com/storybookjs/storybook/issues/29227 - [ ] https://github.com/storybookjs/storybook/issues/29103 - [ ] https://github.com/storybookjs/storybook/issues/29229 - [ ] https://github.com/storybookjs/storybook/issues/29161 - [ ] Investigate migrating away from `@yarn/ziplib` and friends - [ ] https://github.com/storybookjs/storybook/issues/29143 ``` ```[tasklist] ### `create-storybook` Optimizations - [x] Identify potential optimisations - [ ] Don't depend on `storybook` in `create-storybook` - [ ] Don't depend on `prettier` to autoformat the main-config - [ ] Prebundle dependencies in `create-storybook` - [ ] Document `npm create storybook` instead of `npx storybook init` in docs ``` 👆 Doing all of the above `create-storybook` tasks should result in reducing its install size **from 73 MB to <1.8 MB** and the dependency count **from 172 to ~3**, greatly 
reducing the time-to-init. A counter argument is that a lot of these dependencies come from `storybook`, which could be globally cached by the package manager. So taking `storybook` out of `create-storybook` would just move the download of that package from pre-init to post-init - but it's just a theory. ```[tasklist] ### `@storybook/source-loader` Optimizations - [ ] Identify potential optimisations ``` </details> <details> <summary><h3>🧩 Integration Packages</h3></summary> <details> <summary><h4>Builders</h4></summary> ```[tasklist] ### `@storybook/builder-vite` - [x] Identify potential optimisations - [x] Prebundle dependencies ``` ```[tasklist] ### `@storybook/builder-webpack5` - [x] Identify potential optimisations - [ ] https://github.com/storybookjs/storybook/issues/29181 - [ ] Prebundle dependencies - [ ] Discuss making Webpack a peer dependency of `builder-webpack5` in 9.0, similar to `builder-vite` - [ ] Discuss dropping support for checking types with `fork-ts-checker-webpack-plugin` - [ ] Discuss merging all the preset packages into the frameworks (eg.
merge `@storybook/preset-react-webpack` into `@storybook/react-webpack5`) - [ ] Discuss merging `@storybook/core-webpack` into `@storybook/builder-webpack5` ``` </details> <details> <summary><h4>Renderers</h4></summary> ```[tasklist] ### `@storybook/react` - [x] Identify potential optimisations - [x] Cleanup dependencies - remove unused and move `@types` to `devDependencies` - [x] Prebundle dependencies ``` ```[tasklist] ### `@storybook/vue3` - [x] Identify potential optimisations - [ ] Cleanup dependencies - remove unused and move `@types` to `devDependencies` - [ ] Prebundle dependencies ``` ```[tasklist] ### `@storybook/web-components` - [x] Identify potential optimisations ``` </details> <details> <summary><h4>Frameworks</h4></summary> ```[tasklist] ### `@storybook/react-vite` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/react-webpack5` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/nextjs` - [x] Identify potential optimisations - [ ] https://github.com/storybookjs/storybook/issues/29185 - [ ] https://github.com/storybookjs/storybook/issues/29195 - [ ] https://github.com/storybookjs/storybook/issues/29188 - [ ] Prebundle possible dependencies in `@storybook/nextjs` ``` ```[tasklist] ### `@storybook/experimental-nextjs-vite` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/angular` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/vue3-vite` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/vue3-webpack5` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/sveltekit` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/svelte-vite` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/web-components-vite` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/web-components-webpack5` - [ ] Identify potential optimisations ``` </details> <details> 
<summary><h4>Addons</h4></summary> ```[tasklist] ### `@storybook/addon-docs` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/blocks` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-interactions` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-actions` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/test` - [x] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-controls` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-measure` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-outline` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-backgrounds` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-toolbars` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-viewport` - [ ] Identify potential optimisations ``` ```[tasklist] ### `@storybook/addon-onboarding` - [ ] Identify potential optimisations ``` </details> </details> ## 🛠️ Supporting Tooling Ideally we'd want to set up some tooling that allows to easily inspect package and dependency sizes, to make it easier to optimise these areas. It's still TBD what shape or form these could take. 
```[tasklist] ### 🌟 Essential tooling - [ ] https://github.com/storybookjs/storybook/issues/29105 - [ ] Improve readability and usefulness of existing sandbox benchmark reports on PRs - [x] Reduce the friction of inspecting packages' build outputs - [ ] https://github.com/storybookjs/storybook/issues/29322 ``` ```[tasklist] ### ✨ Nice-to-have tooling - [ ] https://github.com/storybookjs/storybook/issues/29251 - [ ] Reduce the friction of inspecting packages' dependency graph - [ ] https://github.com/storybookjs/storybook/issues/29106 ``` Related: https://github.com/evanw/esbuild/issues/3909 ## 🤷 Misc ```[tasklist] ### 🎁 Wrap up - [ ] Create a new tracking issue containing the known optimisations we didn't have time to do in this project - [ ] Compile a list of optimisations we want to do for 9.0 👇 - [ ] Publish a blog post about the reduced footprint ``` ```[tasklist] ### 💥 Breaking optimisations - [ ] https://github.com/storybookjs/storybook/issues/29159 - [ ] https://github.com/storybookjs/storybook/issues/29160 ``` ```[tasklist] ### 💖 Nice-to-haves - [ ] Investigate the impact of running `storybook dev` at the end of `storybook init` - [ ] https://github.com/storybookjs/storybook/issues/29073 ```
help wanted,Tracking
medium
Major
2,502,317,787
ollama
Report better error message on old drivers (show detected version and minimum requirement)
### What is the issue? [log.txt](https://github.com/user-attachments/files/16846086/log.txt) This is a log ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.9
feature request,nvidia
low
Critical
2,502,321,666
angular
There is no way to subscribe to control validators presence changes
### Which @angular/* package(s) are relevant/related to the feature request? forms ### Description There is no way to be notified when someone adds or removes a validator on a FormControl. ### Proposed solution I propose a new stream `validatorsChanges`: ``` this.control.validatorsChanges.subscribe(() => ...) ``` ### Alternatives considered -
area: forms,forms: change detection
low
Major
2,502,366,711
pytorch
[BUG]: torch.profiler is blocked in "_disable_profiler()" when profiling cuda graph on single GPU
### 🐛 Describe the bug I use torch.profiler to profile a cuda graph, but the program is blocked in the function `_disable_profiler()` of torch.profiler when torch.profiler stops the trace. minimal example: ``` import torch a = torch.tensor(1, device='cuda') # Warmup before cuda graph capture for _ in range(11): b = a + a # Graph Capture g = torch.cuda.CUDAGraph() with torch.cuda.graph(g): b = a + a # Torch.profiler with torch.profiler.profile(): g.replay() torch.cuda.synchronize() ``` I use pdb to locate the stuck area; the picture below shows torch.profiler is blocked in `_disable_profiler()`, which is a CPU function: <img width="643" alt="Screenshot 2024-09-03 at 16 51 06" src="https://github.com/user-attachments/assets/abbae824-117b-4cee-87fc-87c6661cce8b"> ### Versions Platforms: linux Hardware: 1*A100 Software versions: **cuda11.8**, python3.10.14 PyTorch version: 2.3.1+cu118 I tried it on pytorch2.1.2, pytorch2.3.1+cu118 and pytorch2.4.0+cu118 and got the same result. Update: I tried it on pytorch 2.3.0+cu121 and torch.profiler is not blocked, and I got the right result. Thus I think maybe there are bugs in the pytorch+cu118 builds. cc @robieta @chaekit @aaronenyeshi @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
oncall: profiler
low
Critical
2,502,401,122
pytorch
[Bug] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: torch._dynamo.mark_dynamic(a, 0,min=1,max=8196) min doesn't work
### 🐛 Describe the bug ``` import torch import torch_tensorrt from typing import Optional, Sequence,Dict,List from torch.nn import functional as F from tzrec.modules.mlp import MLP from torch import nn @torch.fx.wrap def _get_dict(grouped_features_keys: List[str], args:List[torch.Tensor])->Dict[str, torch.Tensor]: if len(grouped_features_keys) != len(args): raise ValueError( "The number of grouped_features_keys must match " "the number of arguments." ) grouped_features = { key: value for key, value in zip(grouped_features_keys, args) } return grouped_features @torch.fx.wrap def _arange(end: int, device: torch.device) -> torch.Tensor: return torch.arange(end, device=device) class MatMul(torch.nn.Module): def __init__(self): super().__init__() self.keys = ["query","sequence","sequence_length"] attn_mlp= {'hidden_units': [256, 64], 'dropout_ratio': [], 'activation': 'nn.ReLU', 'use_bn': False} self.mlp = MLP(in_features=41 * 4, **attn_mlp) self.linear = nn.Linear(self.mlp.hidden_units[-1], 1) def forward(self, *args1: List[torch.Tensor]): """Forward the module.""" # use predict to avoid trace error in self._output_to_prediction(y) return self.predict(args1) def predict(self, args: List[torch.Tensor]): grouped_features= _get_dict(self.keys, args) query = grouped_features["query"] sequence = grouped_features["sequence"] sequence_length = grouped_features["sequence_length"] max_seq_length = sequence.size(1) sequence_mask = _arange( max_seq_length, device=sequence_length.device ).unsqueeze(0) < sequence_length.unsqueeze(1) queries = query.unsqueeze(1).expand(-1, max_seq_length, -1) attn_input = torch.cat( [queries, sequence, queries - sequence, queries * sequence], dim=-1 ) return attn_input model = MatMul().eval().cuda() a=torch.randn(1, 41).cuda() b=torch.randn(1, 50,41).cuda() c=torch.randn(1).cuda() torch._dynamo.mark_dynamic(a, 0,min=1,max=8196) torch._dynamo.mark_dynamic(b, 0,min=1,max=8196) # torch._dynamo.mark_dynamic(b, 1, min=1, max=50) 
torch._dynamo.mark_dynamic(c, 0,min=1,max=8196) inputs = [a, b,c] print(model(*inputs)[0][0][0]) # seq_len = torch.export.Dim("seq_len", min=1, max=10) # dynamic_shapes=({2: seq_len}, {2: seq_len}) # Export the model first with custom dynamic shape constraints from torchrec.fx import symbolic_trace model = symbolic_trace(model) print(model.code) exp_program = torch.export.export(model, (*inputs,)) ``` When I use batch_size=1 it raises an error, but when I use batch_size=2 it's OK. Error: ``` WARNING:torch_tensorrt.dynamo.conversion.aten_ops_converters:Unable to import quantization op. Please install modelopt library (https://github.com/NVIDIA/TensorRT-Model-Optimizer?tab=readme-ov-file#installation) to add support for compiling quantized models Initializing pyfg... I20240903 09:34:33.319730 126517 str_utils.cc:53] AVX supported I20240903 09:34:33.319756 126517 str_utils.cc:55] FMA supported I20240903 09:34:33.319765 126517 str_utils.cc:72] AVX-512F supported I20240903 09:34:33.319772 126517 str_utils.cc:78] AVX-512VL supported I20240903 09:34:33.319779 126517 str_utils.cc:84] AVX-512BW supported I20240903 09:34:33.319787 126517 str_utils.cc:90] AVX-512DQ supported support avx512: true I20240903 09:34:33.319797 126517 str_utils.cc:194] support avx512: 1 V0903 09:34:35.910000 140056757741376 torch/fx/experimental/symbolic_shapes.py:2529] [0/0] create_env I0903 09:34:35.991000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3639] [0/0] produce_guards V0903 09:34:35.991000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][0].size()[0] 1 StrictMinMaxConstraint(warn_only=False, vr=VR[1, 8196]) V0903 09:34:35.991000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][0].size()[1] 41 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][0].stride()[0] 41 None V0903 09:34:35.992000 140056757741376
torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][0].stride()[1] 1 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][0].storage_offset() 0 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].size()[0] 1 StrictMinMaxConstraint(warn_only=False, vr=VR[1, 8196]) V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].size()[1] 50 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].size()[2] 41 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].stride()[0] 2050 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].stride()[1] 41 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].stride()[2] 1 None V0903 09:34:35.992000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][1].storage_offset() 0 None V0903 09:34:35.993000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][2].size()[0] 1 StrictMinMaxConstraint(warn_only=False, vr=VR[1, 8196]) V0903 09:34:35.993000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][2].stride()[0] 1 None V0903 09:34:35.993000 140056757741376 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['args1'][2].storage_offset() 0 None E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Error while creating guard: E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Name: '' E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Source: shape_env E0903 09:34:35.993000 
140056757741376 torch/_guards.py:262] [0/0] Create Function: SHAPE_ENV E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Guard Types: None E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Code List: None E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Object Weakref: None E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Guarded Class Weakref: None E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] Traceback (most recent call last): E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_guards.py", line 260, in create E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] return self.create_fn(builder, self) E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 1717, in SHAPE_ENV E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] guards = output_graph.shape_env.produce_guards( E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4163, in produce_guards E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] raise ConstraintViolationError( E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['args1'][0].size()[0], L['args1'][1].size()[0], L['args1'][2].size()[0])! For more information, run with TORCH_LOGS="+dynamic". 
E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] - Not all values of L['args1'][0].size()[0] = L['args1'][0].size()[0] in the specified range L['args1'][0].size()[0] <= 8196 are valid because L['args1'][0].size()[0] was inferred to be a constant (1). E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] - Not all values of L['args1'][1].size()[0] = L['args1'][1].size()[0] in the specified range L['args1'][1].size()[0] <= 8196 are valid because L['args1'][1].size()[0] was inferred to be a constant (1). E0903 09:34:35.993000 140056757741376 torch/_guards.py:262] [0/0] - Not all values of L['args1'][2].size()[0] = L['args1'][2].size()[0] in the specified range L['args1'][2].size()[0] <= 8196 are valid because L['args1'][2].size()[0] was inferred to be a constant (1). E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] Created at: E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 564, in transform E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] tracer = InstructionTranslator( E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2360, in __init__ E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] output=OutputGraph( E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 313, in __init__ E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] self.init_ambient_guards() E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 452, in init_ambient_guards E0903 09:34:35.996000 140056757741376 torch/_guards.py:264] [0/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV)) ``` ### Versions ``` 
CPU(s): 104 On-line CPU(s) list: 0-103 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 26 Socket(s): 2 Stepping: 7 CPU max MHz: 3800.0000 CPU min MHz: 1200.0000 BogoMIPS: 5000.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 1.6 MiB (52 instances) L1i cache: 1.6 MiB (52 instances) L2 cache: 52 MiB (52 instances) L3 cache: 71.5 MiB (2 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-103 Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] optree==0.12.1 [pip3] torch==2.4.0 
[pip3] torch_tensorrt==2.4.0 [pip3] torchaudio==2.4.0 [pip3] torchelastic==0.2.2 [pip3] torchmetrics==1.0.3 [pip3] torchrec==0.8.0+cu121 [pip3] torchvision==0.19.0 [pip3] triton==3.0.0 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch [conda] mkl 2023.1.0 h213fc3f_46344 [conda] mkl-service 2.4.0 py311h5eee18b_1 [conda] mkl_fft 1.3.8 py311h5eee18b_0 [conda] mkl_random 1.2.4 py311hdb19cb5_0 [conda] numpy 1.26.4 py311h08b1b3b_0 [conda] numpy-base 1.26.4 py311hf175353_0 [conda] optree 0.12.1 pypi_0 pypi [conda] pytorch 2.4.0 py3.11_cuda12.1_cudnn9.1.0_0 pytorch [conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torch-tensorrt 2.4.0 pypi_0 pypi [conda] torchaudio 2.4.0 py311_cu121 pytorch [conda] torchelastic 0.2.2 pypi_0 pypi [conda] torchmetrics 1.0.3 pypi_0 pypi [conda] torchrec 0.8.0+cu121 pypi_0 pypi [conda] torchtriton 3.0.0 py311 pytorch [conda] torchvision 0.19.0 py311_cu121 pytorch ``` cc @ezyang @chauhang @penguinwu
triaged,oncall: pt2,module: dynamic shapes
low
Critical
2,502,443,584
node
Node-API V8 Fast API
### What is the problem this feature will solve? Node-API method calls could be faster in certain cases: V8's Fast API lets optimized code invoke eligible native callbacks without the overhead of the regular callback machinery. ### What is the feature you are proposing to solve the problem? Add support for the V8 Fast API in Node-API so that eligible method calls can take the fast path. ### What alternatives have you considered? _No response_
feature request,node-api
low
Minor
2,502,486,376
react-native
Emitted event from Android is not received on the JS side
### Description I migrated my [library](https://github.com/CleverTap/clevertap-react-native/tree/task/SDK-3736/Research-and-Document-RN-new-architecture-changes) to the new architecture and this issue occurs only when the `bridgeless` mode is enabled for the sample app. When I launch my app from tapping on a notification from the killed state, an event is fired immediately from the android side but it is never received on the listener in `App.js`. This works perfectly fine on the old architecture. Also for new architecture with the bridgeless mode disabled, the event is emitted before the listener is attached. This was confirmed by adding a delay. What has changed with respect to 1. The new architecture such that my event is emitted before the listener is attached? 2. The bridgeless mode such that listener never receives the event in the new architecture irrespective of the delay? ### Steps to reproduce 1. Install the android application. 2. Send a notification from firebase. 3. Tap on notification and observe the logs ### React Native Version 0.74.5 ### Affected Platforms Runtime - Android ### Areas TurboModule - The New Native Module System, Bridgeless - The New Initialization Flow ### Output of `npx react-native info` ```text System: OS: macOS 13.4.1 CPU: (8) arm64 Apple M1 Pro Memory: 105.23 MB / 16.00 GB Shell: version: "5.9" path: /bin/zsh Binaries: Node: version: 21.2.0 path: ~/.nvm/versions/node/v21.2.0/bin/node Yarn: version: 1.22.22 path: ~/.nvm/versions/node/v21.2.0/bin/yarn npm: version: 10.2.3 path: ~/.nvm/versions/node/v21.2.0/bin/npm Watchman: Not Found Managers: CocoaPods: Not Found SDKs: iOS SDK: Not Found Android SDK: Not Found IDEs: Android Studio: 2024.1 AI-241.18034.62.2411.12169540 Xcode: version: /undefined path: /usr/bin/xcodebuild Languages: Java: version: 17.0.6 path: /usr/bin/javac Ruby: version: 2.6.10 path: /usr/bin/ruby npmPackages: "@react-native-community/cli": Not Found react: installed: 18.2.0 wanted: 18.2.0 react-native: 
installed: 0.74.5 wanted: ^0.74.1 react-native-macos: Not Found npmGlobalPackages: "*react-native*": Not Found Android: hermesEnabled: true newArchEnabled: true iOS: hermesEnabled: Not found newArchEnabled: false ``` ### Stacktrace or Logs ```text Notice how listeners are attached after the event `CleverTapPushNotificationClicked` is sent. The below logs are with bridgeless mode enabled Sending event CleverTapPushNotificationClicked 2024-09-03 15:47:18.271 31768-31768 unknown:BridgelessReact com.reactnct W ReactHost{0}.startSurface(surfaceId = 0): Schedule 2024-09-03 15:47:18.271 31768-31768 unknown:BridgelessReact com.reactnct W ReactHost{0}.attachSurface(surfaceId = 0) 2024-09-03 15:47:18.271 31768-31856 unknown:BridgelessReact com.reactnct W ReactHost{0}.getOrCreateReactInstanceTask() 2024-09-03 15:47:18.271 31768-31856 unknown:BridgelessReact com.reactnct W ReactHost{0}.startSurface(surfaceId = 0): Execute 2024-09-03 15:47:18.327 31768-31870 WebViewFactory com.reactnct I Loading com.google.android.webview version 103.0.5060.71 (code 506007134) 2024-09-03 15:47:18.332 31768-31870 nativeloader com.reactnct D Configuring classloader-namespace for other apk /data/app/~~EFPfHDi5gXVBA0K4QsePig==/com.google.android.trichromelibrary_506007134-afJeSXpZ5uPqQKwR6hzE5g==/TrichromeLibrary.apk. target_sdk_version=33, uses_libraries=ALL, library_path=/data/app/~~fWTShcRyzfiQmwLj_gZTPQ==/com.google.android.webview-sTWl46WYdt9sNenS1AhG-w==/lib/arm64:/data/app/~~fWTShcRyzfiQmwLj_gZTPQ==/com.google.android.webview-sTWl46WYdt9sNenS1AhG-w==/WebViewGoogle.apk!/lib/arm64-v8a:/data/app/~~EFPfHDi5gXVBA0K4QsePig==/com.google.android.trichromelibrary_506007134-afJeSXpZ5uPqQKwR6hzE5g==/TrichromeLibrary.apk!/lib/arm64-v8a, permitted_path=/data:/mnt/expand 2024-09-03 15:47:18.340 31768-31870 nativeloader com.reactnct D Configuring classloader-namespace for other apk /data/app/~~fWTShcRyzfiQmwLj_gZTPQ==/com.google.android.webview-sTWl46WYdt9sNenS1AhG-w==/WebViewGoogle.apk. 
target_sdk_version=33, uses_libraries=, library_path=/data/app/~~fWTShcRyzfiQmwLj_gZTPQ==/com.google.android.webview-sTWl46WYdt9sNenS1AhG-w==/lib/arm64:/data/app/~~fWTShcRyzfiQmwLj_gZTPQ==/com.google.android.webview-sTWl46WYdt9sNenS1AhG-w==/WebViewGoogle.apk!/lib/arm64-v8a:/data/app/~~EFPfHDi5gXVBA0K4QsePig==/com.google.android.trichromelibrary_506007134-afJeSXpZ5uPqQKwR6hzE5g==/TrichromeLibrary.apk!/lib/arm64-v8a, permitted_path=/data:/mnt/expand 2024-09-03 15:47:18.347 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.th3rdwave.safeareacontext.SafeAreaProviderManager 2024-09-03 15:47:18.347 31768-31768 unknown:ReactNative com.reactnct E Unable to launch logbox because react was unable to create the root view 2024-09-03 15:47:18.349 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.uimanager.LayoutShadowNode 2024-09-03 15:47:18.350 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.SearchBarManager 2024-09-03 15:47:18.351 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.gesturehandler.react.RNGestureHandlerButtonViewManager 2024-09-03 15:47:18.351 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.th3rdwave.safeareacontext.SafeAreaViewManager 2024-09-03 15:47:18.352 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.th3rdwave.safeareacontext.SafeAreaViewShadowNode 2024-09-03 15:47:18.353 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.ScreenViewManager 2024-09-03 15:47:18.354 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.gesturehandler.react.RNGestureHandlerRootViewManager 2024-09-03 15:47:18.354 31768-31869 
unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.ScreenContainerViewManager 2024-09-03 15:47:18.355 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.reactnativepagerview.PagerViewViewManager 2024-09-03 15:47:18.355 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.ModalScreenViewManager 2024-09-03 15:47:18.356 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.ScreenStackHeaderSubviewManager 2024-09-03 15:47:18.356 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.ScreenStackViewManager 2024-09-03 15:47:18.357 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.swmansion.rnscreens.ScreenStackHeaderConfigViewManager 2024-09-03 15:47:18.377 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.debuggingoverlay.DebuggingOverlayManager 2024-09-03 15:47:18.380 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.modal.ReactModalHostManager 2024-09-03 15:47:18.381 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.modal.ModalHostShadowNode 2024-09-03 15:47:18.383 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.frescosupport.FrescoBasedReactTextInlineImageViewManager 2024-09-03 15:47:18.383 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.frescosupport.FrescoBasedReactTextInlineImageShadowNode 2024-09-03 15:47:18.385 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated 
setter for class com.facebook.react.views.progressbar.ReactProgressBarViewManager 2024-09-03 15:47:18.386 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.progressbar.ProgressBarShadowNode 2024-09-03 15:47:18.388 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.scroll.ReactHorizontalScrollViewManager 2024-09-03 15:47:18.392 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.image.ReactImageManager 2024-09-03 15:47:18.394 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.ReactTextViewManager 2024-09-03 15:47:18.395 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.ReactTextShadowNode 2024-09-03 15:47:18.398 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.scroll.ReactHorizontalScrollContainerViewManager 2024-09-03 15:47:18.399 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.unimplementedview.ReactUnimplementedViewManager 2024-09-03 15:47:18.402 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.scroll.ReactScrollViewManager 2024-09-03 15:47:18.405 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.view.ReactViewManager 2024-09-03 15:47:18.407 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.drawer.ReactDrawerLayoutManager 2024-09-03 15:47:18.409 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class 
com.facebook.react.views.switchview.ReactSwitchManager 2024-09-03 15:47:18.409 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.switchview.ReactSwitchManager$ReactSwitchShadowNode 2024-09-03 15:47:18.411 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.ReactVirtualTextViewManager 2024-09-03 15:47:18.411 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.ReactVirtualTextShadowNode 2024-09-03 15:47:18.412 31768-31870 com.reactnct com.reactnct W Accessing hidden method Landroid/os/Trace;->isTagEnabled(J)Z (unsupported, reflection, allowed) 2024-09-03 15:47:18.412 31768-31870 com.reactnct com.reactnct W Accessing hidden method Landroid/os/Trace;->traceBegin(JLjava/lang/String;)V (unsupported, reflection, allowed) 2024-09-03 15:47:18.412 31768-31870 com.reactnct com.reactnct W Accessing hidden method Landroid/os/Trace;->traceEnd(J)V (unsupported, reflection, allowed) 2024-09-03 15:47:18.412 31768-31870 com.reactnct com.reactnct W Accessing hidden method Landroid/os/Trace;->asyncTraceBegin(JLjava/lang/String;I)V (unsupported, reflection, allowed) 2024-09-03 15:47:18.412 31768-31870 com.reactnct com.reactnct W Accessing hidden method Landroid/os/Trace;->asyncTraceEnd(JLjava/lang/String;I)V (unsupported, reflection, allowed) 2024-09-03 15:47:18.413 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.swiperefresh.SwipeRefreshLayoutManager 2024-09-03 15:47:18.415 31768-31870 cr_WVCFactoryProvider com.reactnct I Loaded version=103.0.5060.71 minSdkVersion=29 isBundle=false multiprocess=true packageId=2 2024-09-03 15:47:18.415 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.ReactRawTextManager 2024-09-03 15:47:18.415 31768-31869 
unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.text.ReactRawTextShadowNode 2024-09-03 15:47:18.416 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.textinput.ReactTextInputManager 2024-09-03 15:47:18.417 31768-31869 unknown:Vi...rtyUpdater com.reactnct W Could not find generated setter for class com.facebook.react.views.textinput.ReactTextInputShadowNode 2024-09-03 15:47:18.431 31768-31870 cr_LibraryLoader com.reactnct I Successfully loaded native library 2024-09-03 15:47:18.432 31768-31870 cr_CachingUmaRecorder com.reactnct I Flushed 8 samples from 8 histograms. 2024-09-03 15:47:18.434 31768-31881 cr_VariationsUtils com.reactnct I Failed reading seed file "/data/user/0/com.reactnct/app_webview/variations_seed_new" 2024-09-03 15:47:18.434 31768-31881 cr_VariationsUtils com.reactnct I Failed reading seed file "/data/user/0/com.reactnct/app_webview/variations_seed" 2024-09-03 15:47:18.442 31768-31884 TrafficStats com.reactnct D tagSocket(136) with statsTag=0xffffffff, statsUid=-1 2024-09-03 15:47:18.444 31768-31885 TrafficStats com.reactnct D tagSocket(136) with statsTag=0xffffffff, statsUid=-1 2024-09-03 15:47:18.455 31768-31869 ReactNativeJS com.reactnct I Bridgeless mode is enabled 2024-09-03 15:47:18.491 31768-31869 ReactNativeJS com.reactnct I Running "Example" with {"rootTag":11,"initialProps":{},"fabric":true} 2024-09-03 15:47:18.580 31768-31886 TrafficStats com.reactnct D tagSocket(136) with statsTag=0xffffffff, statsUid=-1 2024-09-03 15:47:18.604 31768-31887 TrafficStats com.reactnct D tagSocket(137) with statsTag=0xffffffff, statsUid=-1 2024-09-03 15:47:18.650 31768-31869 CleverTap:...c_deviceID com.reactnct V CleverTapAPI instance = com.clevertap.android.sdk.CleverTapAPI@5c6f83d 2024-09-03 15:47:18.656 31768-31870 CleverTapReact com.reactnct I CleverTap.registerForPush is a no-op in Android 2024-09-03 15:47:18.657 31768-31869 
ReactNativeJS com.reactnct I Listeners attached ``` ### Reproducer https://github.com/CleverTap/clevertap-react-native/tree/task/SDK-3736/Research-and-Document-RN-new-architecture-changes/Example ### Screenshots and Videos _No response_
Issue: Author Provided Repro,Resolution: PR Submitted,Platform: Android,Type: New Architecture
low
Critical
2,502,532,510
flutter
[DatePicker] Material3 DatePicker should follow OS regional settings or provide additional setting
### Steps to reproduce 1. Use the `showDatePicker` method in a Flutter project with Material 3 2. Set English (USA) language on your OS 3. Change regional settings - first day of week to Monday 4. Open the app and open the date picker 5. First day of week will be Sunday (locale default - completely ignores regional settings) ### Expected results I would expect it to at least follow the OS regional settings or provide a method to set the first day of week manually. ### Actual results OS regional settings are completely ignored, and the date picker always uses the locale default settings, which cannot be changed at all, and there is no other option to override the first day of week manually. ### Code sample <details open><summary>Code sample</summary> ```dart showDatePicker( context: context, firstDate: DateTime.now().subtract(const Duration(days: 365)), lastDate: DateTime.now().add(const Duration(days: 365)), initialDate: DateTime.now(), ) ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> ![datePicker_screenshot](https://github.com/user-attachments/assets/4b9e2918-d9fe-4412-8db9-b05ae1203051) ![regional_settings](https://github.com/user-attachments/assets/edb9b1c3-7c60-4ed8-ac03-58ab21630b25) </details> ### Logs _No response_ ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel stable, 3.24.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-SI) [✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0) [✓] Xcode - develop for iOS and macOS (Xcode 15.4) [✓] Chrome - develop for the web [✓] Android Studio (version 2024.1) [✓] VS Code (version 1.92.2) [✓] Connected device (6 available) [✓] Network resource ``` </details>
framework,f: material design,f: date/time picker,a: internationalization,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.25
low
Minor
2,502,573,073
pytorch
dynamo (re)compilation issues: shape (1,1), nn.Parameter, mark_dynamic
### 🐛 Describe the bug There are following issues: 1. passing `Parameter` as argument instead of `Tensor` causes recompilation 2. setting `torch._dynamo.mark_dynamic` on `Parameter` has no effect 3. passing tensor of size `(1,1)` causes recompilation with a strange guard failure `- 2 <= L['x'].size()[0]` 4. using `torch._dynamo.mark_dynamic(x, 0, min=0)` causes error `not simple sympy type <class 'NoneType'>` 5. using `torch._dynamo.mark_dynamic(x, 0, min=0, max=65536)` causes error `CONSTRAINTS_VIOLATED` I've checked on 2.4.0 and on nightly ```python import torch import torch.nn as nn from torch import Tensor import logging torch._logging.set_logs(recompiles=True, recompiles_verbose = True) def f(x): return x.sin() + 1.0 # def g(x): # return x**2 compile_args = dict(fullgraph=True, dynamic = True, backend = "inductor", options={'force_same_precision':True, 'disable_cpp_codegen':False, 'trace.graph_diagram':True, "triton.cudagraphs": False}) @torch.compile(**compile_args) def foo(x): y = f(x) # z = g(y) return y shapes = [(2,2), (3,3), (1,1)] for P in [False, True]: for k, shape in enumerate(shapes): print(f'Shape {k}: {shape}') x = torch.ones(shape) if P: x = torch.nn.Parameter(x) for d in range(x.dim()): torch._dynamo.mark_dynamic(x, d) # setting min=0 errors z = foo(x) ``` ``` Shape 0: (2, 2) Shape 1: (3, 3) Shape 2: (1, 1) V0903 12:36:59.429000 140550974269248 torch/_dynamo/guards.py:2609] [0/1] [__recompiles_verbose] Recompiling function foo in /mnt/datagrid/personal/shekhovt/quant/debug/test_compile1.py:17 V0903 12:36:59.429000 140550974269248 torch/_dynamo/guards.py:2609] [0/1] [__recompiles_verbose] triggered by the following guard failure(s): V0903 12:36:59.429000 140550974269248 torch/_dynamo/guards.py:2609] [0/1] [__recompiles_verbose] guard 0 failures: V0903 12:36:59.429000 140550974269248 torch/_dynamo/guards.py:2609] [0/1] [__recompiles_verbose] - 2 <= L['x'].size()[0] # _dynamo/output_graph.py:452 in init_ambient_guards V0903 12:36:59.429000 
140550974269248 torch/_dynamo/guards.py:2609] [0/1] [__recompiles_verbose] - 2 <= L['x'].size()[1] # _dynamo/output_graph.py:452 in init_ambient_guards Shape 0: (2, 2) V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] Recompiling function foo in /mnt/datagrid/personal/shekhovt/quant/debug/test_compile1.py:17 V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] triggered by the following guard failure(s): V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] guard 0 failures: V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] - expected type of 'L['x']' to be a tensor type, ' but found <class 'torch.nn.parameter.Parameter'> V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] guard 1 failures: V0903 12:37:02.287000 140550974269248 torch/_dynamo/guards.py:2609] [0/2] [__recompiles_verbose] - expected type of 'L['x']' to be a tensor type, ' but found <class 'torch.nn.parameter.Parameter'> Shape 1: (3, 3) V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] Recompiling function foo in /mnt/datagrid/personal/shekhovt/quant/debug/test_compile1.py:17 V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] triggered by the following guard failure(s): V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] guard 0 failures: V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] - tensor 'L['x']' size mismatch at index 0. 
expected 2, actual 3 V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] guard 1 failures: V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] - expected type of 'L['x']' to be a tensor type, ' but found <class 'torch.nn.parameter.Parameter'> V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] guard 2 failures: V0903 12:37:05.634000 140550974269248 torch/_dynamo/guards.py:2609] [0/3] [__recompiles_verbose] - expected type of 'L['x']' to be a tensor type, ' but found <class 'torch.nn.parameter.Parameter'> Shape 2: (1, 1) V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] Recompiling function foo in /mnt/datagrid/personal/shekhovt/quant/debug/test_compile1.py:17 V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] triggered by the following guard failure(s): V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] guard 0 failures: V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] - tensor 'L['x']' size mismatch at index 0. expected 3, actual 1 V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] guard 1 failures: V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] - tensor 'L['x']' size mismatch at index 0. 
expected 2, actual 1 V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] guard 2 failures: V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] - expected type of 'L['x']' to be a tensor type, ' but found <class 'torch.nn.parameter.Parameter'> V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] guard 3 failures: V0903 12:37:08.674000 140550974269248 torch/_dynamo/guards.py:2609] [0/4] [__recompiles_verbose] - expected type of 'L['x']' to be a tensor type, ' but found <class 'torch.nn.parameter.Parameter'> ``` ### Versions Collecting environment information... PyTorch version: 2.5.0.dev20240902+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (GCC) 13.2.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.5 (main, Oct 2 2023, 09:22:39) [GCC 13.2.0] (64-bit runtime) Python platform: Linux-5.15.0-1062-nvidia-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA DGX Display GPU 4: NVIDIA A100-SXM4-40GB Nvidia driver version: 535.183.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 128 On-line CPU(s) list: 0-127 Vendor ID: AuthenticAMD Model name: AMD EPYC 7742 64-Core Processor CPU family: 23 Model: 49 Thread(s) per 
core: 2 Core(s) per socket: 64 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 2250,0000 CPU min MHz: 1500,0000 BogoMIPS: 4491.73 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es Virtualization: AMD-V L1d cache: 2 MiB (64 instances) L1i cache: 2 MiB (64 instances) L2 cache: 32 MiB (64 instances) L3 cache: 256 MiB (16 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-127 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec rstack overflow: Mitigation; safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB 
filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] pytorch-triton==3.0.0+dedb7bdf33 [pip3] torch==2.5.0.dev20240902+cu118 [pip3] torchaudio==2.5.0.dev20240902+cu118 [pip3] torchvision==0.20.0.dev20240902+cu118 [conda] Could not collect cc @ezyang @chauhang @penguinwu
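Issue (1) can also be observed without parsing the recompile logs by counting compilations with a trivial custom backend (the counting backend below is illustrative, not part of the original report):

```python
import torch
import torch.nn as nn

compiles = 0

def counting_backend(gm, example_inputs):
    # Minimal eager "backend": count (re)compilations and run the graph as-is.
    global compiles
    compiles += 1
    return gm.forward

@torch.compile(backend=counting_backend, dynamic=True)
def foo(x):
    return x.sin() + 1.0

foo(torch.ones(3, 3))                # first compilation, with dynamic shapes
foo(torch.ones(5, 5))                # a new size >= 2 is served by the same graph
after_tensor = compiles
foo(nn.Parameter(torch.ones(3, 3)))  # type/requires_grad guards fail -> recompile
after_param = compiles
print(after_tensor, after_param)
```

The `Parameter` call recompiles even though shape and dtype are unchanged, matching the `expected type of 'L['x']' to be a tensor type` guard failures in the logs above.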
good first issue,triaged,oncall: pt2,module: dynamic shapes
low
Critical
2,502,599,408
godot
`NavigationServer3D::query_path` does not find optimal path across multiple aligned navigation meshes.
### Tested versions 4.3-stable ### System information Windows 11 ### Issue description It seems that `NavigationServer3D::query_path` has some issues optimizing/finding longer paths. All individual optimal path segments exist, even across mesh borders. However, the full path takes a completely different, much longer route. I have tried with both `simplify_path = true` and `false`, and with `PATH_POSTPROCESSING_CORRIDORFUNNEL` and `PATH_POSTPROCESSING_EDGECENTERED`. `simplify_path = true` with `PATH_POSTPROCESSING_CORRIDORFUNNEL` produces the best result of all options, but it is still far from optimal. ![image](https://github.com/user-attachments/assets/0d239fe7-a038-4cbc-95d2-a521183cdb69) ![image](https://github.com/user-attachments/assets/8de30e4b-abe5-4416-91b2-2364e3ab4d79) ![image](https://github.com/user-attachments/assets/f705a186-33a0-424f-a03d-61c5d62b8f8f) ![image](https://github.com/user-attachments/assets/32a0033d-a336-4255-9452-46c65173f515) ![image](https://github.com/user-attachments/assets/2352d6ab-f360-4c94-8308-419f06640fd9) ![image](https://github.com/user-attachments/assets/adcd4abc-ffd0-4f6f-91fe-8959958b3c6d) ![image](https://github.com/user-attachments/assets/3234e194-ef84-49cb-9f6e-f3bab48eab81) ### Steps to reproduce ``` func find_path() -> void: var start: Vector3 = %Start.position var end: Vector3 = %End.position var query := NavigationPathQueryParameters3D.new() query.start_position = start query.target_position = end query.map = get_world_3d().navigation_map query.simplify_path = true var result := NavigationPathQueryResult3D.new() NavigationServer3D.query_path(query, result) %Path.draw_path(result.path) ``` ### Minimal reproduction project (MRP) -
enhancement,topic:navigation,performance
low
Minor
2,502,625,668
vscode
Deprecated extension recommendation: Volar
Does this issue occur when all extensions are disabled?: Yes/No - VS Code Version: 1.92.2 - OS Version: Win10 Steps to Reproduce: 1. Open a Vue 3 project 2. VS Code recommends the deprecated Volar extension ![image](https://github.com/user-attachments/assets/e0dbb6cb-2e32-4db0-a472-e65225fe5b5c) ![image](https://github.com/user-attachments/assets/8d65a819-7d32-4751-96a8-289fbcc56dea)
feature-request,extensions
low
Critical
2,502,629,910
next.js
Using MDXProvider with mdx-components.tsx
### Link to the code that reproduces this issue https://github.com/ProchaLu/next-js-mdx-provider [CodeSandbox](https://codesandbox.io/p/devbox/github/ProchaLu/next-js-mdx-provider/tree/main/?file=%2Fapp%2FMDXComponent.tsx) ### To Reproduce 1. Clone repro, `git clone https://github.com/ProchaLu/next-js-mdx-provider` 2. install dependencies 3. run development server ### Current vs. Expected behavior I'm trying to use both `mdx-components.tsx` and `MDXProvider` together (like [the nested `MDXProvider` pattern](https://mdxjs.com/docs/using-mdx/#:~:text=When%20MDXProviders%20are%20nested%2C%20their%20components%20are%20merged.%20Take%20this%20example%3A)). Our constraints are as follows: 1. our app is large with multiple areas - different areas should receive different MDX `components` 2. we are trying to avoid [prop drilling - having to pass `components={props.components}` in every MDX file](https://nextjs.org/docs/app/building-your-application/configuring/mdx#local-styles-and-components) where we import another MDX file, eg. trying to avoid this: ```mdx import Child from './child.mdx' {/* Trying to avoid this */} <Child components={props.components} /> ``` ### Current Behavior When using the MDXProvider in `MDXComponent.tsx` to provide custom components (h3 and h4), these components are not applied to the MDX content. Instead, only the global components defined in `mdx-components.tsx` (for h1 and h2) are applied. 
`mdx-components.tsx` ```tsx const components = { h1: ({ children, ...props }: HTMLAttributes<HTMLHeadElement>) => ( <h1 style={{ color: 'tomato' }} {...props}> {children} </h1> ), h2: ({ children, ...props }: HTMLAttributes<HTMLHeadElement>) => ( <h2 style={{ color: 'blue' }} {...props}> {children} </h2> ), } satisfies MDXComponents; declare global { type MDXProvidedComponents = typeof components; } // eslint-disable-next-line no-undef export function useMDXComponents(): MDXProvidedComponents { return components; } ``` `MDXComponent.tsx` ```tsx 'use client'; import { MDXProvider } from '@mdx-js/react'; import { MDXComponents } from 'mdx/types'; import { HTMLAttributes } from 'react'; import Content from './message.mdx'; const components = { h3: ({ children, ...props }: HTMLAttributes<HTMLHeadElement>) => ( <h3 style={{ color: 'purple' }} {...props}> {children} </h3> ), h4: ({ children, ...props }: HTMLAttributes<HTMLHeadElement>) => ( <h4 style={{ color: 'yellow' }} {...props}> {children} </h4> ), } satisfies MDXComponents; export default function MDXComponent() { return ( <MDXProvider components={components}> <Content /> </MDXProvider> ); } ``` <img width="339" alt="Screenshot 2024-09-03 at 14 04 07" src="https://github.com/user-attachments/assets/903d3e84-0033-40d7-a7fe-ef7f4b894daf"> ### Expected Behavior The `MDXProvider` in `MDXComponent.tsx` should apply its locally defined custom components (h3 and h4) to the MDX content. - h1 and h2 should get their styles from the global `mdx-components.tsx`. - h3 and h4 should get their styles from the local `MDXProvider` in `MDXComponent.tsx`. This would allow for a more flexible and modular approach where different sections of the application can have different MDX component configurations without the need for prop drilling or defining all components globally. 
However, it appears that this is not supported in the Next.js MDX integration: - https://github.com/vercel/next.js/issues/54212#issuecomment-1803973892 - https://github.com/vercel/next.js/pull/69609 Are there any suggestions for providing custom MDX `components` in different app areas without manually prop drilling? ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 23.5.0: Wed May 1 20:19:05 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8112 Available memory (MB): 16384 Available CPU cores: 8 Binaries: Node: 20.16.0 npm: 10.8.1 Yarn: N/A pnpm: 9.4.0 Relevant Packages: next: 14.2.7 // Latest available version is detected (14.2.7). eslint-config-next: N/A react: 18.2.0 react-dom: 18.2.0 typescript: 5.5.2 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Markdown (MDX) ### Which stage(s) are affected? (Select all that apply) next dev (local), next build (local), next start (local) ### Additional context _No response_
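In the meantime, the global and local maps can be merged by hand and the result passed once where the MDX content is rendered; this still passes `components` explicitly at the outermost render site, so it reduces rather than removes the drilling. A minimal sketch (`mergeMDXComponents` is a hypothetical helper, not a Next.js or @mdx-js API; it mimics the key-by-key merge that nested `MDXProvider`s perform, with later maps winning):

```javascript
// Merge MDX component maps the way nested MDXProviders would:
// shallow, key by key, with later (more local) maps taking precedence.
function mergeMDXComponents(...maps) {
  return Object.assign({}, ...maps);
}
```

Usage would then be something like `<Content components={mergeMDXComponents(useMDXComponents(), localComponents)} />` at the top-level render, keeping the per-area maps local to each area.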
bug,Markdown (MDX)
low
Major
2,502,636,704
tauri
[bug] "this and base files have different roots" when building android apk
### Describe the bug cannot build android apk file ### Reproduction ``` cargo install create-tauri-app cargo create-tauri-app --rc pnpm tauri android dev ``` ### Expected behavior _No response_ ### Full `tauri info` output ```text [✔] Environment - OS: Windows 10.0.22621 x86_64 (X64) ✔ WebView2: 127.0.2651.105 ✔ MSVC: Visual Studio Professional 2022 ✔ rustc: 1.77.1 (7cf61ebde 2024-03-27) ✔ cargo: 1.77.1 (e52e36006 2024-03-26) ✔ rustup: 1.27.0 (bbb9276d2 2024-03-08) ✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default) - node: 20.11.1 - pnpm: 9.1.0 - yarn: 1.22.22 - npm: 10.2.4 [-] Packages - tauri 🦀: 2.0.0-rc.8 - tauri-build 🦀: 2.0.0-rc.7 - wry 🦀: 0.42.0 - tao 🦀: 0.29.1 - @tauri-apps/api : 2.0.0-rc.4 - @tauri-apps/cli : 2.0.0-rc.10 [-] Plugins - tauri-plugin-shell 🦀: 2.0.0-rc.3 - @tauri-apps/plugin-shell : 2.0.0-rc.1 [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:1420/ - framework: React - bundler: Vite ``` ### Stack trace ```text C:\Users\admin\AppData\Roaming\npm\pnpm.cmd run dev_android > tauri_learn@0.1.0 dev_android D:\WorkSpace\tauri_learn > tauri android dev Info Detected connected device: device (sdk_gphone_x86) with target "i686-linux-android" Running BeforeDevCommand (`pnpm dev`) > tauri_learn@0.1.0 dev D:\WorkSpace\tauri_learn > vite VITE v5.4.2 ready in 178 ms ➜ Local: http://localhost:1420/ Compiling tauri v2.0.0-rc.8 Compiling wry v0.42.0 Compiling tauri-plugin-shell v2.0.0-rc.3 Compiling tauri_learn v0.1.0 (D:\WorkSpace\tauri_learn\src-tauri) Compiling tauri-runtime-wry v2.0.0-rc.7 Finished dev [unoptimized + debuginfo] target(s) in 8.84s Info symlinking lib "D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" in jniLibs dir "D:\\WorkSpace\\tauri_learn\\src-tauri\\gen/android\\app/src/main/jniLibs/x86" Info "D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" requires shared lib "libandroid.so" Info 
"D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" requires shared lib "libdl.so" Info "D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" requires shared lib "liblog.so" Info "D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" requires shared lib "libm.so" Info "D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" requires shared lib "libc.so" Info symlink at "D:\\WorkSpace\\tauri_learn\\src-tauri\\gen/android\\app/src/main/jniLibs/x86\\libtauri_learn_lib.so" points to "D:\\WorkSpace\\tauri_learn\\src-tauri\\target\\i686-linux-android\\debug\\libtauri_learn_lib.so" e: Daemon compilation failed: null java.lang.Exception at org.jetbrains.kotlin.daemon.common.CompileService$CallResult$Error.get(CompileService.kt:69) at org.jetbrains.kotlin.daemon.common.CompileService$CallResult$Error.get(CompileService.kt:65) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.compileWithDaemon(GradleKotlinCompilerWork.kt:244) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.compileWithDaemonOrFallbackImpl(GradleKotlinCompilerWork.kt:175) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:135) at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction.execute(GradleCompilerRunnerWithWorkers.kt:73) at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62) at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62) at 
org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59) at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:174) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:195) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:128) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:170) at org.gradle.internal.Factories$1.create(Factories.java:31) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:267) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:131) at 
org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:136) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:165) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:134) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64) at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:48) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:842) Caused by: java.lang.IllegalArgumentException: this and base files have different roots: C:\Users\admin\.cargo\registry\src\index.crates.io-6f17d22bba15001f\tauri-2.0.0-rc.8\mobile\android\src\main\java\app\tauri\annotation\ActivityCallback.kt and D:\WorkSpace\tauri_learn\src-tauri\gen\android. 
at kotlin.io.FilesKt__UtilsKt.toRelativeString(Utils.kt:117) at kotlin.io.FilesKt__UtilsKt.relativeTo(Utils.kt:128) at org.jetbrains.kotlin.incremental.storage.RelocatableFileToPathConverter.toPath(RelocatableFileToPathConverter.kt:22) at org.jetbrains.kotlin.incremental.storage.ComplementarySourceFilesMap.get(ComplementarySourceFilesMap.kt:22) at org.jetbrains.kotlin.incremental.AbstractIncrementalCache.getComplementaryFilesRecursive(AbstractIncrementalCache.kt:227) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.doCompile(IncrementalCompilerRunner.kt:455) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compileImpl(IncrementalCompilerRunner.kt:400) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compileNonIncrementally(IncrementalCompilerRunner.kt:281) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compile(IncrementalCompilerRunner.kt:125) at org.jetbrains.kotlin.daemon.CompileServiceImplBase.execIncrementalCompiler(CompileServiceImpl.kt:657) at org.jetbrains.kotlin.daemon.CompileServiceImplBase.access$execIncrementalCompiler(CompileServiceImpl.kt:105) at org.jetbrains.kotlin.daemon.CompileServiceImpl.compile(CompileServiceImpl.kt:1624) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at java.rmi/sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:360) at java.rmi/sun.rmi.transport.Transport$1.run(Transport.java:200) at java.rmi/sun.rmi.transport.Transport$1.run(Transport.java:197) at java.base/java.security.AccessController.doPrivileged(AccessController.java:712) at java.rmi/sun.rmi.transport.Transport.serviceCall(Transport.java:196) at 
java.rmi/sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:587) at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828) at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:705) at java.base/java.security.AccessController.doPrivileged(AccessController.java:399) at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:704) ... 3 more Failed to compile with Kotlin daemon: java.lang.Exception at org.jetbrains.kotlin.daemon.common.CompileService$CallResult$Error.get(CompileService.kt:69) at org.jetbrains.kotlin.daemon.common.CompileService$CallResult$Error.get(CompileService.kt:65) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.compileWithDaemon(GradleKotlinCompilerWork.kt:244) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.compileWithDaemonOrFallbackImpl(GradleKotlinCompilerWork.kt:175) at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:135) at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction.execute(GradleCompilerRunnerWithWorkers.kt:73) at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62) at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209) at 
org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59) at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:174) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:195) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:128) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:170) at org.gradle.internal.Factories$1.create(Factories.java:31) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:267) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:131) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:136) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:165) at 
org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:134) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64) at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:48) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:842) Caused by: java.lang.IllegalArgumentException: this and base files have different roots: C:\Users\admin\.cargo\registry\src\index.crates.io-6f17d22bba15001f\tauri-2.0.0-rc.8\mobile\android\src\main\java\app\tauri\annotation\ActivityCallback.kt and D:\WorkSpace\tauri_learn\src-tauri\gen\android. 
at kotlin.io.FilesKt__UtilsKt.toRelativeString(Utils.kt:117) at kotlin.io.FilesKt__UtilsKt.relativeTo(Utils.kt:128) at org.jetbrains.kotlin.incremental.storage.RelocatableFileToPathConverter.toPath(RelocatableFileToPathConverter.kt:22) at org.jetbrains.kotlin.incremental.storage.ComplementarySourceFilesMap.get(ComplementarySourceFilesMap.kt:22) at org.jetbrains.kotlin.incremental.AbstractIncrementalCache.getComplementaryFilesRecursive(AbstractIncrementalCache.kt:227) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.doCompile(IncrementalCompilerRunner.kt:455) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compileImpl(IncrementalCompilerRunner.kt:400) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compileNonIncrementally(IncrementalCompilerRunner.kt:281) at org.jetbrains.kotlin.incremental.IncrementalCompilerRunner.compile(IncrementalCompilerRunner.kt:125) at org.jetbrains.kotlin.daemon.CompileServiceImplBase.execIncrementalCompiler(CompileServiceImpl.kt:657) at org.jetbrains.kotlin.daemon.CompileServiceImplBase.access$execIncrementalCompiler(CompileServiceImpl.kt:105) at org.jetbrains.kotlin.daemon.CompileServiceImpl.compile(CompileServiceImpl.kt:1624) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at java.rmi/sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:360) at java.rmi/sun.rmi.transport.Transport$1.run(Transport.java:200) at java.rmi/sun.rmi.transport.Transport$1.run(Transport.java:197) at java.base/java.security.AccessController.doPrivileged(AccessController.java:712) at java.rmi/sun.rmi.transport.Transport.serviceCall(Transport.java:196) at 
java.rmi/sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:587) at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828) at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:705) at java.base/java.security.AccessController.doPrivileged(AccessController.java:399) at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:704) ... 3 more Using fallback strategy: Compile without Kotlin daemon Try ./gradlew --stop if this issue persists. node:internal/modules/cjs/loader:1147 throw err; ^ Error: Cannot find module 'D:\WorkSpace\tauri_learn\src-tauri\tauri' at Module._resolveFilename (node:internal/modules/cjs/loader:1144:15) at Module._load (node:internal/modules/cjs/loader:985:27) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12) at node:internal/main/run_main_module:28:49 { code: 'MODULE_NOT_FOUND', requireStack: [] } Node.js v20.11.1 exception: c:\Users\admin\.cargo\registry\src\index.crates.io-6f17d22bba15001f\tauri-2.0.0-rc.8\mobile\android\src\main\java\app\tauri\plugin\PluginMethodData.kt:11:23: warning: parameter 'methodDecorator' is never used exception: val method: Method, methodDecorator: Command exception: ^ FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:rustBuildX86Debug'. > A problem occurred starting process 'command 'D:\Program Files2\nodejs\node.exe.cmd'' * Try: > Run with --stacktrace option to get the stack trace. > Run with --info or --debug option to get more log output. > Run with --scan to get full insights. > Get more help at https://help.gradle.org. BUILD FAILED in 13s  ELIFECYCLE  Command failed with exit code 4294967295. 
Error Failed to assemble APK: command ["D:\\WorkSpace\\tauri_learn\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "D:\\WorkSpace\\tauri_learn\\src-tauri\\gen/android"] exited with code 1: command ["D:\\WorkSpace\\tauri_learn\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "D:\\WorkSpace\\tauri_learn\\src-tauri\\gen/android"] exited with code 1  ELIFECYCLE  Command failed with exit code 1. Process finished with exit code 1 ``` ### Additional context android sdk build-tools35 - 34.0.0 - 33.0.2 - 33.0.1 - 30.0.3 ndk - 25.2.9519 android sdk command-line tools (latest)
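The `IllegalArgumentException` in the trace comes from Kotlin incremental compilation calling `File.relativeTo` on a source file under the Cargo registry on `C:` against the generated project on `D:`, which cannot produce a relative path across Windows drive roots. A hedged workaround sketch: `./gradlew --stop` (suggested in the log itself) kills stale daemons, and `kotlin.incremental` is a standard Kotlin Gradle property, but that disabling it avoids this specific crash is an assumption:

```shell
# disable_kotlin_incremental: idempotently append kotlin.incremental=false
# to the gradle.properties of the given generated Android project, so the
# Kotlin compiler never computes cross-drive relative paths.
disable_kotlin_incremental() {
  dir="$1"
  grep -q '^kotlin.incremental=' "$dir/gradle.properties" 2>/dev/null ||
    echo 'kotlin.incremental=false' >> "$dir/gradle.properties"
}

# Typical use from the project root:
#   (cd src-tauri/gen/android && ./gradlew --stop)
#   disable_kotlin_incremental src-tauri/gen/android
```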
type: bug,status: needs triage,platform: Android
low
Critical
2,502,673,673
PowerToys
Mouse Without Borders is not working
### Microsoft PowerToys version 0.83.0 ### Installation method Microsoft Store ### Running as admin Yes ### Area(s) with issue? Mouse Without Borders ### Steps to reproduce Refresh connections to use MWB ### ✔️ Expected Behavior MWB works ### ❌ Actual Behavior MWB doesn't work ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,502,756,536
flutter
Better error message for not connecting to Android debug device - WIFI debugging
### Steps to reproduce 1. Connect a physical Android device to debug 2. Leave it unattended until `flutter doctor -v` reports it as offline (this may only happen with Wi-Fi debugging?) 3. Try to debug a project with `flutter run -v` 4. See the output below (logs). Regarding step two: I am not sure this is exactly the condition under which the output appears; see https://github.com/Dart-Code/Dart-Code/issues/5245 for more info. It happens whenever I have let the phone lock its screen by itself and have not used it for some time; if I lock it manually, or locked it recently, this does not happen. ### Expected results A `Device is offline` message, or some better message handling when not in verbose mode. ### Actual results A `Connection closed before full header was received` message, _only in verbose mode_. ### Code sample Not important. ### Screenshots or Video Not important. ### Logs <details open><summary>Logs</summary> ```console [ +1 ms] Latest build already installed. [ ] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:42307 shell -x logcat -v time -t 1 [ +355 ms] --------- beginning of system 09-02 17:13:23.818 I/wpa_supplicant( 1637): Heartbeat 62 [ +13 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:42307 shell am start -a android.intent.action.MAIN -c android.intent.category.LAUNCHER -f 0x20000000 --ez enable-dart-profiling true --ez enable-checked-mode true --ez verify-entry-points true br.inf.sunsoft.produtores/br.inf.sunsoft.produtores.MainActivity [ +157 ms] Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x20000000 cmp=br.inf.sunsoft.produtores/.MainActivity (has extras) } [ ] Waiting for VM Service port to be available... 
[+1327 ms] VM Service URL on device: http://127.0.0.1:45599/PSAwNkkAsUk=/ [ +1 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:42307 forward tcp:0 tcp:45599 [ +61 ms] 50529 [ +1 ms] Forwarded host port 50529 to device port 45599 for VM Service [ +3 ms] Caching compiled dill [ +78 ms] Connecting to service protocol: http://127.0.0.1:50529/PSAwNkkAsUk=/ [+31591 ms] Fail to connect to service protocol: http://127.0.0.1:50529/PSAwNkkAsUk=/: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:50529/PSAwNkkAsUk=/ws [ +2 ms] Error connecting to the service protocol: failed to connect to http://127.0.0.1:50529/PSAwNkkAsUk=/ [ +21 ms] "flutter run" took 66.603ms. [ +176 ms] #0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3) #1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:874:9) <asynchronous suspension> #2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1408:27) <asynchronous suspension> #3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19) <asynchronous suspension> #4 CommandRunner.runCommand (package:args/command_runner.dart:212:13) <asynchronous suspension> #5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:420:9) <asynchronous suspension> #6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19) <asynchronous suspension> #7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5) <asynchronous suspension> #8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9) <asynchronous suspension> #9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19) <asynchronous suspension> #10 main (package:flutter_tools/executable.dart:93:3) <asynchronous suspension> [ +46 
ms] I/flutter (16413): Drift: Sent SELECT * FROM sqlite_master; with args [] [ +2 ms] I/flutter (16413): Drift: Sent select 1 with args [] [ +211 ms] ensureAnalyticsSent: 253ms [ +1 ms] Running 2 shutdown hooks [ +5 ms] Shutdown hooks complete [ +26 ms] exiting with code 2 ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.22631.4037], locale pt-BR) • Flutter version 3.24.1 on channel stable at C:\Users\felip_0vh5fa6\.puro\envs\stable\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 5874a72aa4 (2 weeks ago), 2024-08-20 16:46:00 -0500 • Engine revision c9b9d5780d • Dart version 3.5.1 • DevTools version 2.37.2 [√] Windows Version (Installed version of Windows is version 10 or higher) [√] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk • Platform android-34, build-tools 34.0.0 • Java binary at: C:\Users\felip_0vh5fa6\AppData\Local\Programs\Android Studio\jbr\bin\java • Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314) • All Android licenses accepted. 
[√] Chrome - develop for the web • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe [√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.6) • Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community • Visual Studio Community 2022 version 17.10.35201.131 • Windows 10 SDK version 10.0.22621.0 [√] Android Studio (version 2024.1) • Android Studio at C:\Users\felip_0vh5fa6\AppData\Local\Programs\Android Studio • Flutter plugin can be installed from: https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314) [√] VS Code (version 1.92.2) • VS Code at C:\Users\felip_0vh5fa6\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.92.0 [√] VS Code (version 1.88.0-insider) • VS Code at C:\Users\felip_0vh5fa6\AppData\Local\Programs\Microsoft VS Code Insiders • Flutter extension version 3.64.0 [√] Connected device (4 available) • SM G780F (mobile) • 192.168.200.50:41835 • android-arm64 • Android 13 (API 33) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4037] • Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.114 • Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42 ! Device 192.168.200.50:39975 is offline. [√] Network resources • All expected network resources are available. • No issues found! ``` </details>
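Until the tool reports `Device is offline` up front, re-establishing the offline wireless transport from the host side is one recovery path. A sketch wrapping standard adb commands (whether `adb reconnect offline` actually recovers this particular state is an assumption):

```shell
# reconnect_offline_devices: ask adb to re-handshake any transports it
# currently considers offline, then list device states again so the
# result ("offline" vs "device") can be checked.
reconnect_offline_devices() {
  adb reconnect offline
  adb devices
}
```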
c: crash,platform-android,tool,a: debugging,c: proposal,P2,team-android,triaged-android
low
Critical
2,502,792,256
godot
Android USB Remote Debug Session not starting with Godot 4.3 stable
### Tested versions - v4.3.stable.official [77dcf97d8] ### System information Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 4070 (NVIDIA; 32.0.15.5585) - 13th Gen Intel(R) Core(TM) i5-13400F (16 Threads) ### Issue description Hey, so Godot isn't starting a remote debug session with the one-click remote install. I have followed all the steps to create Android APKs and set up remote debugging, and that all works. But when I click "Remote Debug" in the top-right and choose my device, it builds and installs, yet once the game runs Godot doesn't do anything. It doesn't show the Remote panel or give me any access to the game running on my device, so no stdout logs or anything either. I have toggled "Debug -> Deploy with Remote Debug" on and off, and neither setting makes a difference. I also followed the steps discussed in this issue: https://github.com/godotengine/godot/issues/91359, such as closing the Java process and running Godot as admin, but unfortunately nothing has worked yet. I have also revoked the debugging permission on my device and granted it again, but still nothing. It simply seems that Godot doesn't know the game is running on the device. This also doesn't appear to have anything to do with my project, as I've attached just a tiny one-scene project and it still doesn't debug correctly on my machine. 
Output: ``` 0 param: --remote-debug 1 param: tcp://localhost:6007 2 param: --xr_mode_regular 3 param: --use_immersive Installing to device (please wait...): Google Pixel 6 --- Device API >= 21; debugging over USB --- --- DEVICE API >= 21; DEBUGGING OVER USB --- Reverse result: 0 ``` Java version: ``` PS C:\Users\jacob> java --version openjdk 17.0.11 2024-04-16 OpenJDK Runtime Environment Temurin-17.0.11+9 (build 17.0.11+9) OpenJDK 64-Bit Server VM Temurin-17.0.11+9 (build 17.0.11+9, mixed mode, sharing) ``` ### Steps to reproduce - Download the test-remote project - Connect an Android phone (make sure it's all set up correctly) - Run "Remote Debug" in the top-right - Observe that Godot doesn't detect the running game and never shows the Remote panel ### Minimal reproduction project (MRP) [test-remote.zip](https://github.com/user-attachments/files/16848696/test-remote.zip)
bug,platform:android,topic:editor
low
Critical
2,502,840,480
electron
[Feature Request]: `ready-to-show` event on `webContents`
### Preflight Checklist - [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success. ### Problem Description I moved from `BrowserWindow` to a `BaseWindow` with one or more `webContents` (via `WebContentsView`). I'd like to know when each of the `webContents` is `ready-to-show`. ### Proposed Solution I propose this API: ```js webContents.on("ready-to-show", () => console.log("ready-to-show")); ``` ### Alternatives Considered * Having `ready-to-show` on the `BaseWindow` (see #42291). — I think having it on the `webContents` makes more sense as a `BaseWindow` can contain multiple `webContents`. * Using `did-finish-load`. — Workaround for now. ### Additional Information `ready-to-show` already seems to be available on `webContents` internally: https://github.com/electron/electron/blob/c41a28d7c850df00118b31aec67b0941cd0471eb/lib/browser/api/web-contents.ts#L827 For some reason, the following doesn't work though: ```js webContents.on("ready-to-show", () => console.log("ready-to-show")); ``` The event also isn't documented.
enhancement :sparkles:
low
Minor
2,502,855,520
go
proposal: sync: add PLocalCache
### Proposal Details ## The issue High-performance code which scales linearly with the number of CPU cores usually needs per-CPU caches for holding some per-CPU state in order to avoid costly inter-CPU synchronization. The state can be re-computed at any time, but the computation may take additional CPU time and other resources, so it is more efficient to cache the computed state per CPU core and then re-use it. The `sync.Pool` can be used as a per-CPU cache, but it has the following issues in this context: - `sync.Pool.Get()` tries stealing an object from other CPUs if the object is missing in the current P. This leads to costly inter-CPU synchronization. The cost of this synchronization increases with the number of available CPU cores. - `sync.Pool.Put()` may store multiple objects at the same P. This leads to excess memory usage when at most one object is needed per P. `sync.Pool.Put()` also triggers expensive inter-CPU synchronization if P already contains an object. - `sync.Pool` may drop cached objects at every GC cycle, so the caller needs to spend additional CPU time re-creating the object. ## The solution To add a `sync.PLocalCache` struct with the following API: ```go // PLocalCache caches per-P objects. // // Every P may have at most one object at any moment. // // PLocalCache is useful for implementing linearly scalable // CPU-bound algorithms on multi-CPU architectures. type PLocalCache struct { ... } // Get returns P-local object from c. // // nil is returned if there is no cached object for the current P. // // It is guaranteed that the returned object can be accessed only by the current goroutine, // so there is no need to synchronize access to it. // // Return the object to the cache via Put() if it needs to be re-used next time. func (c *PLocalCache) Get() any { ... } // Put puts v to P-local storage at c. // // v mustn't be used after Put() call. // // There is no guarantee that the subsequent Get() call returns v. 
func (c *PLocalCache) Put(v any) { ... } ``` ## Implementation details `sync.PLocalCache` may be implemented in a way similar to `sync.Pool`, but without the following abilities: - Stealing objects from other Ps. If the object is missing in P-local storage, then just return nil. - The ability to put multiple objects in every P-local storage. If `Put()` is called on a storage with an already existing P-local object, then just ignore the new object. - Periodic cleanup on every GC cycle. It is guaranteed that the number of cached objects in `sync.PLocalCache` doesn't exceed `GOMAXPROCS`, i.e. it is bounded, and it is expected that the user continuously accesses the cached objects. So there is little sense in periodic cleanup of the cache. All the cached objects will be removed after the corresponding `sync.PLocalCache` is destroyed by the garbage collector. The property of having at most one P-local object in the cache narrows down the applicability of the `Get() ... Put()` pattern to CPU-bound code without context switches (e.g. without IO, expensive syscalls and CGO calls). This minimizes the chances of a context switch during the execution of the code between `Get()` and `Put()`, so the cached objects will be successfully re-used by this code. For example, it is great to use `sync.PLocalCache` for a scalable random number generator with per-P (aka per-CPU) state. It is also great to use `sync.PLocalCache` for various CPU-bound parsers, encoders and compressors with some non-trivial state, which can be cached in the P-local cache. On the other hand, if the chances of a context switch between `Get()` and `Put()` calls are high, then this increases the chances that `Get()` will return `nil` most of the time. This forces the user's code to spend additional CPU time on object re-creation. The re-created object will be dropped most of the time on the `Put()` call, since there are high chances that another P-local object has already been put in the cache by concurrently running goroutines. 
In such cases it is better to use `sync.Pool` instead of `sync.PLocalCache`. ## Example usage ```go type ScalableParser struct { state sync.PLocalCache } func (p *ScalableParser) Parse(s string) result { v := p.state.Get() if v == nil { v = newScalableParserState() } ps := v.(*scalableParserState) r := ps.Parse(s) p.state.Put(v) return r } ``` See also https://github.com/golang/go/issues/65104 . Now I think it is better to provide a separate entity with clear semantics than to complicate the semantics of `sync.Pool` and/or try to efficiently cover multiple different cases with `sync.Pool`. ## Generics It may be worth providing a generics-based `sync.PLocalCache[T any]`, but it is OK to provide a non-generic implementation to be consistent with `sync.Pool`.
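The at-most-one-object, no-stealing semantics proposed above can be illustrated outside Go. The following is a loose Python sketch that uses a per-thread slot via `threading.local` as a stand-in for per-P storage; this is an analogy only, not the proposed runtime mechanism, and all names are hypothetical:

```python
import threading


class LocalCache:
    """Loose Python analogue of the proposed sync.PLocalCache:
    at most one cached object per execution context (per thread here,
    rather than per P), no stealing, no growth beyond one slot."""

    def __init__(self):
        self._slot = threading.local()

    def get(self):
        # Return the cached object for this context, or None if absent.
        # Ownership transfers to the caller, so the slot is emptied.
        v = getattr(self._slot, "value", None)
        self._slot.value = None
        return v

    def put(self, v):
        # Keep at most one object; silently drop v if the slot is occupied.
        if getattr(self._slot, "value", None) is None:
            self._slot.value = v


cache = LocalCache()
assert cache.get() is None      # empty at first: caller must re-create state
cache.put([1, 2, 3])
cache.put([4])                  # slot already occupied: dropped
assert cache.get() == [1, 2, 3]
assert cache.get() is None      # get() emptied the slot
```

The second `put` being dropped mirrors the proposal's "ignore the new object" rule, and `get` returning `None` mirrors `Get()` returning `nil` instead of stealing from another P.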
Proposal
low
Major
2,502,870,926
godot
`lerp` accepts Variant as a `weight`, should be float
### Tested versions Reproducible in v4.3.stable.official [77dcf97d8] or any 4.x ### System information Windows 11 ### Issue description `lerp` accepts any Variant as a weight, although it only supports float. Giving other values causes the function to fail silently. This problem was mentioned in #64332 as well, but I think it is a separate issue. ![kuva](https://github.com/user-attachments/assets/4e4a616c-51ee-4b36-a50d-2fa080d0beee) The function is registered with the macro `FUNCBINDVR3`, which doesn't offer flexibility around the argument types. Possible solutions: inline the contents of the macro and write a correct `get_argument_type(int p_arg)` function, or modify the macro to relay the correct argument types. ### Steps to reproduce Write ``` lerp(0.0, 10.0, "hello") ``` or just see the reference... ### Minimal reproduction project (MRP) N/A
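The desired behavior (reject a non-float weight loudly instead of failing silently) can be sketched in Python; Godot's `lerp` is implemented in C++, so this is only an illustration of the type check being asked for, not the engine's code:

```python
def lerp(a, b, weight):
    """Linear interpolation that rejects non-numeric weights instead of
    failing silently (illustrative sketch of the requested behavior)."""
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(weight, (int, float)) or isinstance(weight, bool):
        raise TypeError(f"weight must be a float, got {type(weight).__name__}")
    return a + (b - a) * weight


print(lerp(0.0, 10.0, 0.5))   # 5.0
# lerp(0.0, 10.0, "hello")    # raises TypeError rather than failing silently
```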
enhancement,topic:core,breaks compat
low
Minor
2,502,932,337
stable-diffusion-webui
[Feature Request]: example: extension "sd-webui-birefnet" for removing background
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What would your feature do ? It's a fast extension that removes backgrounds with very good quality, and it works fine with A1111 / the webui so far. However, if the input file name contains brackets, e.g. "image(103).png", the output file after background removal is named "image103.png". **The same happens with upscaling and all other operations: the output name doesn't match whenever the input name contains brackets!** ### Proposed workflow ![grafik](https://github.com/user-attachments/assets/81e86972-4660-435b-a47c-0edd65377cda) ### Additional information _No response_
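The symptom described (brackets vanishing from output names) is consistent with a filename-sanitization step that strips parentheses. The following is a hypothetical sketch that reproduces the symptom; it is not the extension's actual code:

```python
import re


def sanitize(name):
    """Hypothetical sanitization that would reproduce the reported bug:
    parentheses are stripped from the output filename."""
    return re.sub(r"[()]", "", name)


print(sanitize("image(103).png"))  # image103.png — brackets lost, as reported
```

If something like this is the cause, the fix would be to preserve (or only escape) bracket characters instead of deleting them.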
enhancement
low
Minor
2,502,987,832
ui
[feat]: `react-big-calendar` for calendar options and event handling
### Feature description [react-big-calendar](https://github.com/jquense/react-big-calendar) is a very good calendar library that would work well for calendar views and event-handling enhancements. ### Affected component/components Calendar ### Additional Context Additional details here... ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues and PRs
area: request
low
Minor
2,503,010,508
flutter
Adding `--machine` to `flutter run` makes some terminating processes not terminate
### Steps to reproduce 1. Connect a physical Android device with wifi-debugging 2. Make sure it is idle for enough time that it shows "offline" (see https://github.com/flutter/flutter/issues/154545 and https://github.com/Dart-Code/Dart-Code/issues/5245) 3. Try to run with `flutter run --machine -v` ### Expected results For it to terminate as it did without the `--machine`. ### Actual results Not terminating. ### Code sample Not relevant. ### Screenshots or Video Not relevant. ### Logs <details open><summary>Logs</summary> ```console [ +1 ms] Installing APK. [{"event":"app.progress","params":{"appId":"61ae4db2-259b-4542-8b88-cee6cc9c9619","id":"1","progressId":null,"message":"Installing build\\app\\outputs\\flutter-apk\\app-debug.apk...","finished":false}}] [ +1 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:41835 install -t -r C:\Users\felip_0vh5fa6\SUNSIG\trunk\Delphiap\Desenvol\Sunsoft Flutter\Produtores\produtores\build\app\outputs\flutter-apk\app-debug.apk [+66147 ms] Performing Streamed Install Success [{"event":"app.progress","params":{"appId":"61ae4db2-259b-4542-8b88-cee6cc9c9619","id":"1","progressId":null,"finished":true}}] [ +2 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:41835 shell echo -n 5101296c28a4d3b4bcec0391818f743d79c04e91 > /data/local/tmp/sky.br.inf.sunsoft.produtores.sha1 [ +141 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:41835 shell -x logcat -v time -t 1 [ +490 ms] --------- beginning of main 09-03 09:32:57.448 D/NetdEventListenerService(27497): DNS Requested by : 101, 1000 [ +16 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:41835 shell am start -a android.intent.action.MAIN -c android.intent.category.LAUNCHER -f 0x20000000 --ez enable-dart-profiling true --ez enable-checked-mode true --ez 
verify-entry-points true br.inf.sunsoft.produtores/br.inf.sunsoft.produtores.MainActivity [ +215 ms] Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x20000000 cmp=br.inf.sunsoft.produtores/.MainActivity (has extras) } [ ] Waiting for VM Service port to be available... [+2922 ms] VM Service URL on device: http://127.0.0.1:37203/3ApRUPWnk_w=/ [ +1 ms] executing: C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.200.50:41835 forward tcp:0 tcp:37203 [ +134 ms] 14918 [ ] Forwarded host port 14918 to device port 37203 for VM Service [ +8 ms] Caching compiled dill [ +117 ms] Connecting to service protocol: http://127.0.0.1:14918/3ApRUPWnk_w=/ [+31643 ms] Fail to connect to service protocol: http://127.0.0.1:14918/3ApRUPWnk_w=/: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:14918/3ApRUPWnk_w=/ws [ +2 ms] Error connecting to the service protocol: failed to connect to http://127.0.0.1:14918/3ApRUPWnk_w=/ [{"event":"app.stop","params":{"appId":"61ae4db2-259b-4542-8b88-cee6cc9c9619"}}] [ +13 ms] I/flutter (30308): Drift: Sent SELECT * FROM sqlite_master; with args [] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.22631.4037], locale pt-BR) • Flutter version 3.24.1 on channel stable at C:\Users\felip_0vh5fa6\.puro\envs\stable\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 5874a72aa4 (2 weeks ago), 2024-08-20 16:46:00 -0500 • Engine revision c9b9d5780d • Dart version 3.5.1 • DevTools version 2.37.2 [√] Windows Version (Installed version of Windows is version 10 or higher) [√] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at C:\Users\felip_0vh5fa6\AppData\Local\Android\sdk • Platform android-34, build-tools 34.0.0 • Java binary at: 
C:\Users\felip_0vh5fa6\AppData\Local\Programs\Android Studio\jbr\bin\java • Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314) • All Android licenses accepted. [√] Chrome - develop for the web • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe [√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.6) • Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community • Visual Studio Community 2022 version 17.10.35201.131 • Windows 10 SDK version 10.0.22621.0 [√] Android Studio (version 2024.1) • Android Studio at C:\Users\felip_0vh5fa6\AppData\Local\Programs\Android Studio • Flutter plugin can be installed from: https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314) [√] VS Code (version 1.92.2) • VS Code at C:\Users\felip_0vh5fa6\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.92.0 [√] VS Code (version 1.88.0-insider) • VS Code at C:\Users\felip_0vh5fa6\AppData\Local\Programs\Microsoft VS Code Insiders • Flutter extension version 3.64.0 [√] Connected device (4 available) • SM G780F (mobile) • 192.168.200.50:41835 • android-arm64 • Android 13 (API 33) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4037] • Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.114 • Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42 ! Device 192.168.200.50:39975 is offline. [√] Network resources • All expected network resources are available. • No issues found! ``` </details>
platform-android,tool,P2,team-tool,triaged-tool
low
Critical
2,503,041,115
pytorch
`torch.unique` behaves strange for large input arrays on Windows
### 🐛 Describe the bug We have been having problems updating our CI in torchmetrics (https://github.com/Lightning-AI/torchmetrics/pull/2671) to the newest version of torch. A particular test that utilizes `torch.unique` is consistently failing on Windows for version v2.4.0 of PyTorch. To reproduce: ```python import torch len(torch.randint(low=0, high=4, size=(100000, )).unique()) ``` will report a length of the unique array much larger than the expected four distinct values the array consists of. On my local Windows machine the breaking point where `torch.unique` stops working as expected lies exactly between sizes 32767 and 32768, e.g. ```python import torch print(len(torch.randint(low=0, high=4, size=(32767, )).unique()) == 4) # True print(len(torch.randint(low=0, high=4, size=(32768, )).unique()) == 4) # False ``` 32768 corresponds to 2^15, so I expect there is some limitation in place here. ### Versions Collecting environment information... PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Enterprise GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.11.5 | packaged by Anaconda, Inc. 
| (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22631-SP0 Is CUDA available: True CUDA runtime version: 12.3.107 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU Nvidia driver version: 546.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=2400 DeviceID=CPU0 Family=198 L2CacheSize=11776 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=2400 Name=13th Gen Intel(R) Core(TM) i7-13700H ProcessorType=3 Revision= Versions of relevant libraries: [pip3] mypy==1.7.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] pytorch-lightning==1.9.5 [pip3] torch==2.4.0+cu121 [pip3] torch-fidelity==0.3.0 [pip3] torchaudio==2.4.0 [pip3] torcheval==0.0.7 [pip3] torchmetrics==1.4.0.dev0 [pip3] torchvision==0.16.0+cu121 [conda] numpy 1.26.4 pypi_0 pypi [conda] pytorch-lightning 1.9.5 pypi_0 pypi [conda] torch 2.4.0+cu121 pypi_0 pypi [conda] torch-fidelity 0.3.0 pypi_0 pypi [conda] torchaudio 2.4.0 pypi_0 pypi [conda] torcheval 0.0.7 pypi_0 pypi [conda] torchmetrics 1.4.0.dev0 pypi_0 pypi [conda] torchvision 0.16.0+cu121 pypi_0 pypi cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
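The 32767/32768 boundary noted above sits exactly at the signed 16-bit limit, which suggests (as a hypothesis, not a confirmed root cause) that some count or index is being truncated to int16 in the Windows build. A torch-free sketch of the boundary and of the expected reference result:

```python
import random

# The reported breaking point coincides with the signed 16-bit boundary.
INT16_MAX = 2**15 - 1
assert INT16_MAX == 32767      # last size at which unique() reportedly behaves
assert INT16_MAX + 1 == 32768  # first size at which it reportedly misbehaves

# Torch-independent reference for the expected result: the number of
# distinct values drawn from range(0, 4) is 4 regardless of input length
# (with overwhelming probability for inputs this large).
vals = [random.randrange(0, 4) for _ in range(100000)]
print(len(set(vals)))  # 4
```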
needs reproduction,module: windows,triaged,module: correctness (silent)
low
Critical
2,503,043,392
bitcoin
intermittent issue in wallet_keypool.py: assert_equal(nodes[0].getwalletinfo()["unlocked_until"], 0) AssertionError: not(1725366101 == 0)
https://cirrus-ci.com/task/5492198311985152?logs=ci#L4100

```
node0 2024-09-03T12:21:40.411672Z [httpworker.0] [src/rpc/server.cpp:586] [RPCRunLater] [rpc] queue run of timer lockwallet(default_wallet) in 1 seconds (using HTTP)
node0 2024-09-03T12:21:40.416379Z [http] [src/httpserver.cpp:304] [http_request_cb] [http] Received a POST request for / from 127.0.0.1:34622
node0 2024-09-03T12:21:40.416429Z [httpworker.1] [src/rpc/request.cpp:232] [parse] [rpc] ThreadRPCServer method=keypoolrefill user=__cookie__
node0 2024-09-03T12:21:40.416556Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.418471Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.418535Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.418604Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.420653Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.420717Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.420785Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.421157Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.421550Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.423313Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.423355Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.423454Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.434980Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.435100Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.435243Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.444594Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.444698Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.444820Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.445269Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.445438Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.447113Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.447158Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.447238Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.449158Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.449218Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:40.449282Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: BEGIN TRANSACTION
node0 2024-09-03T12:21:40.451075Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: INSERT or REPLACE into main values(?, ?)
node0 2024-09-03T12:21:40.451142Z [httpworker.1] [src/wallet/sqlite.cpp:55] [TraceSqlCallback] [walletdb:trace] [/ci_container_base/ci/scratch/test_runner/test_runner_₿_🏃_20240903_120557/wallet_keypool_157/node0/regtest/default_wallet/wallet.dat] SQLite Statement: COMMIT TRANSACTION
node0 2024-09-03T12:21:41.785180Z [http] [src/httpserver.cpp:304] [http_request_cb] [http] Received a POST request for / from 127.0.0.1:34622
node0 2024-09-03T12:21:41.785321Z [httpworker.3] [src/rpc/request.cpp:232] [parse] [rpc] ThreadRPCServer method=getwalletinfo user=__cookie__
test 2024-09-03T12:21:41.791000Z TestFramework (ERROR): Assertion failed
Traceback (most recent call last):
  File "/ci_container_base/test/functional/test_framework/test_framework.py", line 132, in main
    self.run_test()
  File "/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/test/functional/wallet_keypool.py", line 141, in run_test
    assert_equal(nodes[0].getwalletinfo()["unlocked_until"], 0)
  File "/ci_container_base/test/functional/test_framework/util.py", line 75, in assert_equal
    raise AssertionError("not(%s)" % " == ".join(str(arg) for arg in (thing1, thing2) + args))
AssertionError: not(1725366101 == 0)
```
CI failed
low
Critical
2,503,048,824
go
proposal: container/set: new package to provide a generic set type
This proposal is entirely based on the initial proposal and following discussion at https://github.com/golang/go/discussions/47331 and https://go.dev/blog/range-functions. There was a proposal to reopen this discussion 2 weeks ago (https://github.com/golang/go/issues/69033) but from what I understand proposals must include what's actually proposed. I apologize if this is not the proper way to do things. ### Proposal details ```go // Package set defines a Set type that holds a set of elements. package set // A Set is a set of elements of some comparable type. // The zero value of a Set is an empty set ready to use. type Set[Elem comparable] struct { // contains filtered or unexported fields } // Add adds elements to a set. func (s *Set[Elem]) Add(v ...Elem) // Remove removes elements from a set. // Elements that are not present are ignored. func (s *Set[Elem]) Remove(v ...Elem) // Contains reports whether v is in the set. func (s *Set[Elem]) Contains(v Elem) bool // Len returns the number of elements in s. func (s *Set[Elem]) Len() int // All is an iterator over the elements of s. func (s *Set[Elem]) All() iter.Seq[Elem] // Union constructs a new set containing the union of s1 and s2. func Union[Elem comparable](s1, s2 Set[Elem]) Set[Elem] // Intersection constructs a new set containing the intersection of s1 and s2. func Intersection[Elem comparable](s1, s2 Set[Elem]) Set[Elem] // Difference constructs a new set containing the elements of s1 that // are not present in s2. func Difference[Elem comparable](s1, s2 Set[Elem]) Set[Elem] ``` This is a partial copy of the API proposed at https://github.com/golang/go/discussions/47331, with the doc comment modified following https://github.com/golang/go/discussions/47331#discussioncomment-1168081, and a new `All` function that comes from https://go.dev/blog/range-functions.
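The proposed API's semantics can be illustrated outside Go. The following Python sketch mirrors the `Set` type and the three package-level constructors (all names map onto the proposal's `Add`/`Remove`/`Contains`/`Len`/`All`/`Union`/`Intersection`/`Difference`; the actual Go version would be generic over a comparable `Elem`):

```python
class Set:
    """Python sketch of the proposed container/set API."""

    def __init__(self):
        self._items = {}        # dict used as an insertion-ordered set

    def add(self, *vs):         # Add(v ...Elem)
        for v in vs:
            self._items[v] = True

    def remove(self, *vs):      # Remove(v ...Elem); absent elements ignored
        for v in vs:
            self._items.pop(v, None)

    def contains(self, v):      # Contains(v Elem) bool
        return v in self._items

    def __len__(self):          # Len() int
        return len(self._items)

    def all(self):              # All() iter.Seq[Elem]
        yield from self._items


def union(s1, s2):              # Union(s1, s2 Set[Elem]) Set[Elem]
    out = Set()
    out.add(*s1.all())
    out.add(*s2.all())
    return out


def intersection(s1, s2):       # Intersection(s1, s2 Set[Elem]) Set[Elem]
    out = Set()
    out.add(*(v for v in s1.all() if s2.contains(v)))
    return out


def difference(s1, s2):         # Difference(s1, s2 Set[Elem]) Set[Elem]
    out = Set()
    out.add(*(v for v in s1.all() if not s2.contains(v)))
    return out


s1, s2 = Set(), Set()
s1.add(1, 2, 3)
s2.add(2, 3, 4)
assert sorted(union(s1, s2).all()) == [1, 2, 3, 4]
assert sorted(intersection(s1, s2).all()) == [2, 3]
assert sorted(difference(s1, s2).all()) == [1]
```

As in the proposal, all three constructors return a new set and leave their arguments untouched, and `remove` silently ignores elements that are not present.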
Proposal
medium
Critical
2,503,059,577
terminal
sp; mf ... leaves cursor visible in the inactive pane
When I, for example, `wt sp cmd.exe; mf 1`, the cursor remains visible in the inactive pane. This is unlike moving focus in other situations; normally the cursor is hidden in an inactive pane.
Issue-Bug,Area-TerminalControl,Product-Terminal,Priority-3
low
Major