| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,633,949,077 | PowerToys | [File Preview] Allow for more fine-grained control of which file extensions to preview | ### Description of the new feature / enhancement
Allow for an editable list of file extensions that should be previewed.
### Scenario when this would be used?
I really don't want to wait for the loading screen when previewing simple .txt files; there, the native Windows preview is totally fine for me.
For more complex files like .json, with formatting applied, I'm willing to wait.
Having both (preview .json but not .txt) is currently not possible.
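For illustration, the requested setting could be modeled as a per-extension map consulted before invoking the enhanced previewer (a minimal sketch; the names and schema here are hypothetical, not PowerToys' actual settings format):

```python
import os

# Hypothetical per-extension toggle: True = use the enhanced preview pane,
# False = fall back to the native Windows preview.
preview_extensions = {".json": True, ".txt": False}

def should_preview(path, default=True):
    """Return True when the enhanced previewer should handle this file."""
    ext = os.path.splitext(path)[1].lower()
    return preview_extensions.get(ext, default)
```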
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,633,983,464 | langchain | Given example fails | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import JsonOutputParser
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
# Step 1: Load your DataFrame
df = pd.read_csv("https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv")
# Step 2: Initialize the LLM
model = ChatOpenAI(temperature=0)
# Step 3: Define the JsonOutputParser
parser = JsonOutputParser()
# Step 4: Create a Pandas DataFrame Agent
agent = create_pandas_dataframe_agent(model, df, verbose=True)
# Step 5: Chain the components
chain = agent | parser
# Step 6: Invoke the chain with a query
result = chain.invoke({"query": "How many people survived?"})
# Output the result
print(result)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Thought: To find out how many people survived, I need to sum the 'Survived' column in the dataframe.
Action: python_repl_ast
Action Input: df['Survived'].sum()
342
I now know the final answer
Final Answer: 342 people survived.
> Finished chain.
Traceback (most recent call last):
  File "/Users/joseph/Projects/chat_bot/scratch.py", line 29, in <module>
    result = chain.invoke("How many people survived?")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
    input = context.run(step.invoke, input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py", line 202, in invoke
    return self._call_with_config(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1927, in _call_with_config
    context.run(
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 396, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py", line 203, in <lambda>
    lambda inner_input: self.parse_result([Generation(text=inner_input)]),
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/langchain_core/load/serializable.py", line 125, in __init__
    super().__init__(*args, **kwargs)
  File "/Users/joseph/Projects/chat_bot/.venv/lib/python3.12/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for Generation
text
  Input should be a valid string [type=string_type, input_value={'input': 'How many peopl... '342 people survived.'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.9/v/string_type
```
### Description
I was getting this failure in my code, went to the site, and chatted with the input `how can I chain an Agent with a JsonOutputParser`
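The error is consistent with `AgentExecutor` returning a dict while `JsonOutputParser` wraps its input in `Generation(text=...)`, which requires a plain string. A dependency-free sketch of the mismatch (the dict shape is taken from the `input_value=...` in the error above; extracting `"output"` first is one possible workaround, not a confirmed fix):

```python
# AgentExecutor.invoke returns a dict like this (shape taken from the error message):
agent_result = {"input": "How many people survived?", "output": "342 people survived."}

# Feeding the whole dict to a parser that expects a string triggers the
# pydantic "Input should be a valid string" error above. Pulling the string
# out first avoids the type mismatch:
text_for_parser = agent_result["output"]
```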
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
> Python Version: 3.12.7 (main, Oct 8 2024, 17:22:48) [Clang 16.0.0 (clang-1600.0.26.3)]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.3.4
> langsmith: 0.1.139
> langchain_experimental: 0.3.2
> langchain_huggingface: 0.1.2
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.5
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.2
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.53.0
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.2.1
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.20.1
> transformers: 4.46.1
> typing-extensions: 4.12.2
| Ɑ: core | low | Critical |
2,634,004,973 | pytorch | Add metal-flash-attention for MPS backend | ### 🚀 The feature, motivation and pitch
We should add flash attention support for the MPS backend. @srush mentioned to me that https://github.com/philipturner/metal-flash-attention/ exists which we could try to incorporate into the SDPA as a git submodule. MIT license.
### Alternatives
Wait for Apple to release their own implementation?
### Additional context
_No response_
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,enhancement,module: mps | low | Minor |
2,634,006,560 | rust | Some `-Ctarget-feature`s must be restrained on RISCV | refs:
- https://github.com/rust-lang/rust/issues/114544
- https://github.com/rust-lang/rust/pull/116485 which was massively scaled back due to this open question
- https://github.com/rust-lang/rust/issues/116344
- https://github.com/rust-lang/rust/pull/129884
- https://github.com/rust-lang/rust/issues/131799
- https://github.com/rust-lang/rust/pull/131807
in discussion of the target modifiers RFC[^1], Ralf asks this question:
> That's actually fine on most targets, e.g. on x86 if you set the soft-float target feature then you can enable x87 and SSE and whatnot and keep using the soft-float ABI. Only aarch64 https://github.com/llvm/llvm-project/issues/110632 (from the targets I checked so far).
>
> The interesting question is what happens on RISC-V when I disable the FP instructions on a target that usually uses the hardfloat ABI. Sadly, what LLVM usually does in that case is silently fall back to the softfloat ABI. But I haven't checked for RISC-V.
cc @beetrees
[^1]: https://github.com/rust-lang/rfcs/pull/3716 | T-compiler,C-bug,O-riscv,A-floating-point,A-ABI,E-needs-investigation | medium | Major |
2,634,041,171 | terminal | [High Contrast] "Failed to reload settings" warning isn't legible | 
| Help Wanted,Issue-Bug,Area-Accessibility,Product-Terminal,Priority-3 | low | Critical |
2,634,139,769 | vscode | Allow settings with object widgets to configure labels for `item`/`value` | For object settings such as `files.associations`, it would nice if settings could customize the labels for `item`/`value` used in the settings ui

This may help users understand how to configure these settings.
Relatedly, it may also be nice to have hover tooltips on the `item`/`value` headings | feature-request,settings-editor | low | Minor |
2,634,150,976 | pytorch | Better mergebot messages when reverting a PR | ### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/pull/139399#issuecomment-2455961719 - just a message appearing out of nowhere, giving no explanation of why anyone would want to revert the PR... (I suspect the reason is that mergebot incorrectly believes this PR and https://github.com/pytorch/pytorch/pull/139358 are dependent, but they are not)
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,634,152,406 | vscode | Validate tool input | We should validate the input/parameters of LanguageModelTool calls. We could extract part of the json language-service into vscode to do this. | feature-request,api | low | Major |
2,634,160,388 | pytorch | Unexpected type cast from DTensor to Tensor when applying SequenceParallel on RMSNorm. | ### 🐛 Describe the bug
I am using SequenceParallel for my RMSNorm implementation in LLaMA torchtune. During the forward pass, layer 0 propagates successfully, but layer 1 encounters a weird bug: if I print out the type, shape, and placements of `output` and `self.scale`, they show the desired configs (a [b, s, d] tensor and a [d] tensor, both of type DTensor, one sharded along the sequence dimension (Shard(1)) and one replicated). But during the multiplication, the stack trace suggests that the output tensor was cast to a plain Tensor somewhere during execution.
My reference RMSNorm implementation is here:
```python
import torch
from torch import nn


class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        """
        Initialize the RMSNorm normalization layer.

        Args:
            dim (int): The dimension of the input tensor.
            eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.

        Attributes:
            eps (float): A small value added to the denominator for numerical stability.
            scale (nn.Parameter): Learnable scaling parameter.
        """
        super().__init__()
        self.eps = eps
        self.scale = nn.Parameter(torch.ones(dim))

    def _norm(self, x):
        """
        Apply the RMSNorm normalization to the input tensor.

        Args:
            x (torch.Tensor): The input tensor.

        Returns:
            torch.Tensor: The normalized tensor.
        """
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

    def forward(self, x):
        """
        Forward pass through the RMSNorm layer.

        Args:
            x (torch.Tensor): The input tensor.

        Returns:
            torch.Tensor: The output tensor after applying RMSNorm.
        """
        output = self._norm(x.float()).type_as(x)
        print(f"x_norm local shape is {output.to_local().shape}, placement is {output.placements}, type is {type(output)}")
        print(f"scale locals shape is {self.scale.to_local().shape}, placement is {self.scale.placements}, type is {type(self.scale)}, scale dimension is {self.scale.numel()}")
        return output * self.scale
```
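As a sanity check on the formula in `_norm` above (`x * rsqrt(mean(x^2) + eps)`), the same normalization can be expressed without torch (a minimal sketch for verification only; it does not touch the DTensor machinery):

```python
import math

def rms_norm(xs, eps=1e-6):
    # Same formula as _norm above: divide by the root mean square (plus eps).
    mean_sq = sum(v * v for v in xs) / len(xs)
    return [v / math.sqrt(mean_sq + eps) for v in xs]

out = rms_norm([1.0, 2.0, 2.0])  # mean of squares is 3.0, so each value is divided by ~sqrt(3)
```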
Running the forward pass prints the following output:
> x_norm local shape is torch.Size([2, 140, 4096]), placement is (Shard(dim=1),), type is <class 'torch.distributed.tensor.DTensor'>
> scale locals shape is torch.Size([4096]), placement is (Replicate(),), type is <class 'torch.distributed.tensor.DTensor'>, scale dimension is 4096
However, the following stack trace (the call to `_try_replicate_spec_for_scalar_tensor`) suggests that a DTensor became a plain Tensor somewhere in the middle, since this function is only invoked if the argument passes a type test (`isinstance(arg, torch.Tensor)`). I printed out the arg and it is a tensor of shape [2, 140, 4096], suggesting that the output DTensor was cast to a Tensor somewhere in the middle.
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/ubuntu/torchtune/recipes/lora_finetune_tp_distributed.py", line 1085, in <module>
[rank0]: sys.exit(recipe_main())
[rank0]: File "/home/ubuntu/torchtune/torchtune/config/_parse.py", line 50, in wrapper
[rank0]: sys.exit(recipe_main(conf))
[rank0]: File "/home/ubuntu/torchtune/recipes/lora_finetune_tp_distributed.py", line 1079, in recipe_main
[rank0]: recipe.train()
[rank0]: File "/home/ubuntu/torchtune/recipes/lora_finetune_tp_distributed.py", line 956, in train
[rank0]: logits = self._model(tokens_repeated, mask=mask_repeated, input_pos=input_pos_repeated)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/ubuntu/torchtune/torchtune/modules/transformer.py", line 289, in forward
[rank0]: h = layer(h, freqs_cis = self.freqs_cis, mask=mask, input_pos=input_pos)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py", line 170, in forward
[rank0]: return self.checkpoint_fn( # type: ignore[misc]
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 496, in checkpoint
[rank0]: ret = function(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
[rank0]: return inner()
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1790, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/home/ubuntu/torchtune/torchtune/modules/transformer.py", line 76, in forward
[rank0]: norm_x = self.sa_norm(x)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
[rank0]: return inner()
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1790, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/home/ubuntu/torchtune/torchtune/modules/rms_norm.py", line 99, in forward
[rank0]: return output * self.scale
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 340, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 215, in dispatch
[rank0]: local_results = op_call(*local_tensor_args, **op_info.local_kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
[rank0]: return self._op(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 340, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 166, in dispatch
[rank0]: op_info = self.unwrap_to_op_info(op_call, args, kwargs)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 372, in unwrap_to_op_info
[rank0]: self._try_replicate_spec_for_scalar_tensor(op_call, arg, mesh)
[rank0]: File "/home/ubuntu/anaconda3/envs/torchtune/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 471, in _try_replicate_spec_for_scalar_tensor
[rank0]: raise RuntimeError(
[rank0]: RuntimeError: aten.mul.Tensor: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!
```
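The mixed-type rejection at the bottom of the trace can be illustrated with a torch-free sketch (the class and function names here are invented for illustration; this is not the actual dispatcher code):

```python
class FakeDTensor:
    """Stand-in for torch.distributed.tensor.DTensor in this sketch."""
    def __init__(self, data):
        self.data = data

def dispatch_mul(a, b):
    # Mirror the behavior described above: an elementwise op mixing a
    # DTensor-like wrapper with a plain value is rejected outright.
    if isinstance(a, FakeDTensor) != isinstance(b, FakeDTensor):
        raise RuntimeError("got mixed torch.Tensor and DTensor, need to convert "
                           "all torch.Tensor to DTensor before calling distributed operators!")
    return FakeDTensor([x * y for x, y in zip(a.data, b.data)])

product = dispatch_mul(FakeDTensor([1.0, 2.0]), FakeDTensor([3.0, 4.0]))
```

So if `output` had silently been unwrapped to a plain tensor before `output * self.scale`, exactly this error would fire, which matches the report.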
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 5999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] torchao==0.1
[pip3] torchaudio==2.5.1+cu121
[pip3] torchtune==0.0.0
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torchao 0.1 pypi_0 pypi
[conda] torchaudio 2.5.1+cu121 pypi_0 pypi
[conda] torchtune 0.0.0 pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,634,166,972 | tensorflow | WSL instruction outdated | Page URL: https://www.tensorflow.org/install/pip#windows-wsl2
As of Ubuntu 24.04, using a virtual environment is required (or at least recommended) for installing Python packages. As a result, the following command no longer works in WSL:
```
python3 -m pip install tensorflow[and-cuda]
```
Below is a summary of the commands needed to set up a virtual environment and install TensorFlow with the CUDA libraries.
```
sudo apt install python3-venv
python3 -m venv tf
source ~/tf/bin/activate
pip install tensorflow[and-cuda]
``` | stat:awaiting tensorflower,wsl2 | low | Minor |
2,634,168,383 | rust | rustdoc search: trait methods should not show the same as inherent methods | did a search for "unchecked" and found this confusing snippet:

this is the `SliceIndex` trait method.
additionally, perhaps implementations of trait methods should be deprioritized during name-based search? | T-rustdoc,C-discussion,A-rustdoc-search | low | Minor |
2,634,182,326 | ui | [bug]: Text size on mobile accordion | ### Describe the bug
The size of the text shrinks when you open an accordion on mobile. It seems to happen when you open one accordion and then open the very next one after it.
### Affected component/components
Accordion
### How to reproduce
1. Open an accordion on mobile
2. Open the very next accordion after it and the text will be smaller
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Android, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,634,193,328 | vscode | Implicit context widget isn't deduplicated against other attachments | 1. Have a file open in the editor and chat view open
2. From the editor context menu, the context picker, or DnD, attach the active editor to chat
3. :bug: the implicit context widget is still there, I would expect it to go away in favor of the persistent attachment and then come back the next time I open a file that isn't already attached | bug,panel-chat | low | Critical |
2,634,193,871 | vscode | Consider changing command name `Start in Editor with Current line` | I'm aligning the terminal context menu items/command in https://github.com/microsoft/vscode-copilot/issues/10149 and after that we have:

Should `Start in Editor with Current line` become `Editor Inline Chat with Current Line`? | under-discussion,inline-chat | low | Minor |
2,634,212,580 | langchain | create_retrieval_chain verbose not working | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import create_retrieval_chain
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain, verbose=True)
retrieval_chain.invoke({"input": question})
```
No verbose output is produced.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`qa_chain = load_qa_chain(chat, verbose=True)` accepts a verbose flag, but
`retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)` does not accept a `verbose` argument.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.3.5
> langsmith: 0.1.137
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,634,220,144 | node | More benchmarks for the `node:test` module. | The `benchmark/test_runner` folder currently contains benchmarks for `it` and `describe` functions. I suggest we expand these benchmarks to cover additional test runner features, including mocks, coverage, and various test modes.
Here are the functions that (IMO) should be benchmarked:
## Basic Testing
These tests should run with a custom reporter without any special logic to make the tests as accurate as possible.
- [ ] `test`
- [ ] Create a `test`
- [ ] Create a `test` when it's not running due to `only`
- [ ] Add a subtest
- [ ] Skip a test
- [ ] Via `skip: true`
- [ ] Via `t.skip()`
- [ ] Via `t.skip(...)`
- [ ] TODO tests
- [ ] Via `todo: true`
- [ ] Via `t.todo()`
- [ ] Via `t.todo(...)`
## Hooks
- [ ] `beforeEach`
- [ ] `afterEach`
- [ ] `before`
- [ ] `after`
## Reporters (#55757)
- [x] `dot`
- [x] `junit`
- [x] `spec`
- [x] `tap`
- [x] `lcov`
## Mocking
- [x] `mock.fn` (#55771)
- [ ] `mock.timers` for each API, and each sub-function
- [ ] `mock.module`
## Snapshots
- [ ] `snapshot.setDefaultSnapshotSerializers(serializers)`
- [ ] `snapshot.setResolveSnapshotPath(fn)`
- [ ] `t.assert.snapshot`
## `Coverage`
Use `--expose-internals` to exclusively test the coverage part
- [ ] Basic
- [ ] Excluding files
- [ ] Including files
- [ ] With source maps | benchmark,test_runner | low | Major |
2,634,220,206 | vscode | Webview Panel with `enableFindWidget` does not work after Moving Editor Group into New Window | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes - only extension is the one being developed
Version: 1.95.1 (Universal)
Commit: 65edc4939843c90c34d61f4ce11704f09d3e5cb6
Date: 2024-10-31T05:14:54.222Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
and
Version: 1.96.0-insider (Universal)
Commit: 231d37338a58ff22c223a7ed7d4c1e7142c513d2
Date: 2024-11-04T14:19:41.196Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
Steps to Reproduce:
1. Clone the Microsoft VS Code extensions sample - [webview-sample](https://github.com/microsoft/vscode-extension-samples/tree/main/webview-sample)
2. Replace `src/extension.ts` with:
```ts
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(
        vscode.commands.registerCommand('catCoding.start', () => {
            WebviewIssuePanel.createOrShow(context.extensionUri);
        })
    );
}

class WebviewIssuePanel {
    public static currentPanel: WebviewIssuePanel | undefined;

    public static readonly viewType = 'webviewIssue';

    private readonly _panel: vscode.WebviewPanel;
    private readonly _extensionUri: vscode.Uri;
    private _disposables: vscode.Disposable[] = [];

    public static createOrShow(extensionUri: vscode.Uri) {
        const column = vscode.window.activeTextEditor
            ? vscode.window.activeTextEditor.viewColumn
            : undefined;

        if (WebviewIssuePanel.currentPanel) {
            WebviewIssuePanel.currentPanel._panel.reveal(column);
            return;
        }

        const panel = vscode.window.createWebviewPanel(
            WebviewIssuePanel.viewType,
            'Webview Issue',
            column || vscode.ViewColumn.One,
            {
                enableScripts: true,
                enableFindWidget: true,
                localResourceRoots: [vscode.Uri.joinPath(extensionUri, 'media')],
            }
        );

        WebviewIssuePanel.currentPanel = new WebviewIssuePanel(panel, extensionUri);
    }

    public static revive(panel: vscode.WebviewPanel, extensionUri: vscode.Uri) {
        WebviewIssuePanel.currentPanel = new WebviewIssuePanel(panel, extensionUri);
    }

    private constructor(panel: vscode.WebviewPanel, extensionUri: vscode.Uri) {
        this._panel = panel;
        this._extensionUri = extensionUri;
        this._panel.title = 'Webview Issue';
        this._panel.webview.html = this._getHtmlForWebview(this._panel.webview);
        this._panel.onDidDispose(() => this.dispose(), null, this._disposables);
    }

    public dispose() {
        WebviewIssuePanel.currentPanel = undefined;
        this._panel.dispose();
        while (this._disposables.length) {
            const x = this._disposables.pop();
            if (x) {
                x.dispose();
            }
        }
    }

    private _getHtmlForWebview(webview: vscode.Webview) {
        // Use a nonce to only allow specific scripts to be run
        const nonce = getNonce();

        return `<!DOCTYPE html>
            <html lang="en">
            <head>
                <meta charset="UTF-8">
                <!--
                    Use a content security policy to only allow loading images from https or from our extension directory,
                    and only allow scripts that have a specific nonce.
                -->
                <meta http-equiv="Content-Security-Policy" content="default-src 'none'; style-src ${webview.cspSource}; img-src ${webview.cspSource} https:; script-src 'nonce-${nonce}';">
                <meta name="viewport" content="width=device-width, initial-scale=1.0">
                <title>Webview Issue</title>
            </head>
            <body>
                Text to search for
            </body>
            </html>`;
    }
}

function getNonce() {
    let text = '';
    const possible = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
    for (let i = 0; i < 32; i++) {
        text += possible.charAt(Math.floor(Math.random() * possible.length));
    }
    return text;
}
```
4. Start the sample and run `Cat Coding: Start cat coding session`
5. Use Ctrl-F to search for `text`, you will see it correctly highlighted:

6. Run `View: Move Editor Group into New Window` command to open the webview in a new window
7. Use Ctrl-F to search for `text`, you will see it does not highlight:

| bug,help wanted,webview | low | Critical |
2,634,232,448 | ui | [bug]: Sidebar toggle not working properly on mobile view. | ### Describe the bug
On my mobile device, and in the browser's inspect-element mobile view, the sidebar toggle does not work at all for me.
### Affected component/components
Sidebar.tsx
### How to reproduce

### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Chrome, Ubuntu.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,634,234,432 | pytorch | `softmax` fails to commute with `permute` / `transpose` | ### 🐛 Describe the bug
It should be the case that `sometensor.softmax(dim=d)` is the same as `sometensor.permute(...).softmax(dim=0).permute(...)` if the permutations are inverses. This is not the case, at least for float32:
```python
import torch
val = torch.tensor([[7.76308536529541015625, 7.17303562164306640625, 5.13778305053710937500]], dtype=torch.float32, device='cpu')
val1 = val.softmax(dim=1, dtype=torch.float32).permute(1, 0)
val2 = val.permute(1, 0).softmax(dim=0, dtype=torch.float32)
print(val1)
print(val2)
print((val1 - val2).abs().max())
```
```
tensor([[0.6147],
[0.3407],
[0.0445]])
tensor([[0.6147],
[0.3407],
[0.0445]])
tensor(2.9802e-08)
```
Note that, e.g., scipy does not have this issue:
```python
import numpy as np
from scipy.special import softmax
val = np.array([[7.76308536529541015625, 7.17303562164306640625, 5.13778305053710937500]], dtype=np.float32)
val -= val.max()
val1s = softmax(val, axis=1).T
val2s = softmax(val.T, axis=0)
print(val1s)
print(val2s)
print(np.max(np.abs(val1s - val2s)))
print()
print(val1 - val1s)
print(val2 - val2s)
```
gives
```
[[0.6147349 ]
[0.34074736]
[0.04451779]]
[[0.6147349 ]
[0.34074736]
[0.04451779]]
0.0
tensor([[0.0000e+00],
[2.9802e-08],
[0.0000e+00]])
tensor([[0.],
[0.],
[0.]])
```
Discrepancy originally encountered by @ronakrm (I have not looked for what floating point values give different results under transpose)
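For context (not a judgement on whether PyTorch should guarantee order-independence here): the nonzero difference is consistent with ordinary floating-point behavior. Addition is not associative, so a reduction that visits the same elements in a different memory order (contiguous vs. strided dim) can round differently by about one ulp. A minimal, PyTorch-free illustration:

```python
# Floating-point addition is not associative: summing the same
# numbers in a different order can produce a different rounding.
left = (0.1 + 0.2) + 0.3   # left-to-right
right = 0.1 + (0.2 + 0.3)  # right-to-left
print(left == right)       # False
print(abs(left - right))   # about one float64 ulp near 0.6
```

Notably, the observed `2.9802e-08` is one float32 ulp near 0.34, which matches a single rounding difference in the reduction order.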
### Versions
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.9 (main, Jul 30 2024, 20:31:18) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] eintorch==0.2.5
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.1
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.1
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.19.1
[conda] numpy 2.0.0 py310h52bbd9b_0 conda-forge
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,634,237,736 | PowerToys | Add PowerToys shortcuts to "Shortcut Guide" | ### Description of the new feature / enhancement
It would be helpful to have a quick way to see all active PowerToys shortcuts within the "Shortcut Guide," rather than manually opening the tool and searching for each one.
Having these shortcuts readily available in the guide, possibly in a different color to distinguish them from standard Windows shortcuts, would improve accessibility and usability.
### Scenario when this would be used?
When working on tasks that require multiple PowerToys tools, like resizing images, managing windows, or using FancyZones for layout management, users might need to quickly reference which shortcuts are available without disrupting their workflow. Instead of opening PowerToys settings, the "Shortcut Guide" could provide an at-a-glance view of relevant PowerToys shortcuts, enabling smoother multitasking and reducing interruptions.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,634,243,266 | react | [Compiler Bug]: Does not check for ref access in setState initial value function | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAMygOzgFwJYSYAEAKgmDgBQCURwAOsUXIRUTgIYBGASgmkQC8RKGAR80NRkRFiAypxwJKNIQD523CQDo4sGAkw5qAbkYBfEOaA
### Repro steps
The playground compiles, but it should fail (Rules of React)
### How often does this bug happen?
Every time
### What version of React are you using?
latest playground
### What version of React Compiler are you using?
latest playground | Type: Bug,Component: Optimizing Compiler | medium | Critical |
2,634,252,450 | puppeteer | [Bug]: No location or stack trace for Firefox console messages | ### Minimal, reproducible example
```TypeScript
const puppeteer = require("puppeteer");
const http = require("http");

let server = http.createServer((req, res) => res.end());
server.listen(8080, async () => {
    await test("chrome");
    await test("firefox");
    process.exit(0);
});

async function test(engine) {
    let browser = await puppeteer.launch({
        browser: engine
    });
    let page = await browser.newPage();
    page.on("console", log => {
        console.log("Location:", log.location());
        console.log("Stack trace:", log.stackTrace());
    });
    await page.goto("http://localhost:8080", {
        waitUntil: "networkidle0"
    });
    await page.evaluate(() => {
        console.log("This is a log");
    });
}
```
### Background
When running on Firefox, `ConsoleMessage`s have no `location()` or `stackTrace()` information.
### Expectation
Expected output:
```
Testing chrome
Location: {
url: 'pptr:evaluate;test%20(%2Fworkspaces%2Ftest%2Fbug.js%3A25%3A13)',
lineNumber: 1,
columnNumber: 10
}
Stack trace: [
{
url: 'pptr:evaluate;test%20(%2Fworkspaces%2Ftest%2Fbug.js%3A25%3A13)',
lineNumber: 1,
columnNumber: 10
}
]
Testing firefox
Location: {
url: 'pptr:evaluate;test%20(%2Fworkspaces%2Ftest%2Fbug.js%3A25%3A13)',
lineNumber: 1,
columnNumber: 10
}
Stack trace: [
{
url: 'pptr:evaluate;test%20(%2Fworkspaces%2Ftest%2Fbug.js%3A25%3A13)',
lineNumber: 1,
columnNumber: 10
}
]
```
### Reality
Actual output:
```
Testing chrome
Location: {
url: 'pptr:evaluate;test%20(%2Fworkspaces%2Fholoprint%2Fbug.js%3A25%3A13)',
lineNumber: 1,
columnNumber: 10
}
Stack trace: [
{
url: 'pptr:evaluate;test%20(%2Fworkspaces%2Fholoprint%2Fbug.js%3A25%3A13)',
lineNumber: 1,
columnNumber: 10
}
]
Testing firefox
Location: {}
Stack trace: []
```
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
23.7.0
### Node version
20.17.0
### Package manager
npm
### Package manager version
10.8.2
### Operating system
Linux | bug,confirmed,bidi,P3,firefox | low | Critical |
2,634,263,841 | yt-dlp | Localizing log messages | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Implement localization of command line log messages for non-English speakers
Japanese, Chinese, etc.
(I used Google Translate to write)
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement,triage | low | Critical |
2,634,272,074 | PowerToys | Find My Mouse | ### Microsoft PowerToys version
0.80.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
Find My Mouse currently only takes movement speed into account; I hope it can also be triggered when shaking the mouse in place.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,634,278,881 | langchain | Validation Error (on arguments) from Pydantic for invoking tool_call | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_aws import ChatBedrockConverse
from typing import List
from langchain_core.tools import tool
import os
import pprint
from dotenv import load_dotenv
load_dotenv()
@tool
def multiply(number_list: List[float]) -> float:
    """
    Multiply all numbers in number_list
    Args:
        number_list: List of numbers to multiply
    """
    value = 1.0
    for n in number_list:
        value *= n
    return value
MODEL = 'meta.llama3-1-405b-instruct-v1:0'
aws_access_key = os.environ['AWS_ACCESS_KEY']
aws_secret_key = os.environ['AWS_SECRET_KEY']
aws_region = os.environ['AWS_REGION']
llm1 = ChatBedrockConverse(model=MODEL, temperature=0.0, top_p=0.9, max_tokens=None, region_name=aws_region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key)
llm1 = llm1.bind_tools([multiply])
tool_list = {'multiply': multiply}
prompt = "What is the product of 2.9 and 3.55"
messages = [HumanMessage(prompt)]
response = llm1.invoke(messages)
messages.append(response)
for tool_call in response.tool_calls:
    selected_tool = tool_list[tool_call["name"].lower()]
    tool_message = selected_tool.invoke(tool_call)
    messages.append(tool_message)
pprint.pprint(messages, indent=2)
```
I store the AWS credentials in an env file.
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/xyz/Documents/test_project/test.py", line 40, in <module>
tool_message = selected_tool.invoke(tool_call)
File "/Users/xyz/Documents/test_project/venv/lib/python3.10/site-packages/langchain_core/tools/base.py", line 484, in invoke
return self.run(tool_input, **kwargs)
File "/Users/xyz/Documents/test_project/venv/lib/python3.10/site-packages/langchain_core/tools/base.py", line 689, in run
raise error_to_raise
File "/Users/xyz/Documents/test_project/venv/lib/python3.10/site-packages/langchain_core/tools/base.py", line 651, in run
tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input)
File "/Users/xyz/Documents/test_project/venv/lib/python3.10/site-packages/langchain_core/tools/base.py", line 574, in _to_args_and_kwargs
tool_input = self._parse_input(tool_input)
File "/Users/xyz/Documents/test_project/venv/lib/python3.10/site-packages/langchain_core/tools/base.py", line 515, in _parse_input
result = input_args.model_validate(tool_input)
File "/Users/xyz/Documents/test_project/venv/lib/python3.10/site-packages/pydantic/main.py", line 596, in model_validate
return cls.__pydantic_validator__.validate_python(
pydantic_core._pydantic_core.ValidationError: 1 validation error for multiply
number_list
**Input should be a valid list [type=list_type, input_value='[2.9, 3.55]', input_type=str]**
For further information visit https://errors.pydantic.dev/2.9/v/list_type
### Description
I am using a Llama 3.1 405B with tool call for a multiplication problem. Although the LLM respond with a tool_call in the message, the invocation fails due to an issue with Pydantic validation.
This appears to be because the list is represented as a string in the argument to the function.
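A hypothetical workaround (this helper is not part of LangChain's API) is to JSON-decode string-valued arguments before handing the tool call to `invoke`:

```python
import json

def coerce_string_args(tool_call: dict) -> dict:
    """Decode JSON-encoded argument values, e.g. '[2.9, 3.55]' -> [2.9, 3.55]."""
    fixed = dict(tool_call)
    args = {}
    for key, value in tool_call.get("args", {}).items():
        if isinstance(value, str):
            try:
                value = json.loads(value)
            except (ValueError, TypeError):
                pass  # leave non-JSON strings (e.g. plain text) untouched
        args[key] = value
    fixed["args"] = args
    return fixed

call = {"name": "multiply", "args": {"number_list": "[2.9, 3.55]"}}
print(coerce_string_args(call)["args"]["number_list"])
```

With this, `selected_tool.invoke(coerce_string_args(tool_call))` would pass Pydantic validation, at the cost of silently converting any argument the model happens to emit as valid JSON.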
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Feb 14 2024, 23:56:25) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langsmith: 0.1.138
> langchain_aws: 0.2.4
> langchain_groq: 0.2.0
> langchain_openai: 0.2.5
> langchain_text_splitters: 0.3.1
> langgraph: 0.2.44
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> boto3: 1.35.52
> groq: 0.11.0
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.2
> langgraph-sdk: 0.1.35
> numpy: 1.26.4
> openai: 1.53.0
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,634,299,543 | vscode | Search Only in Open Editors does not work if file name has square bracket | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.94.2
- OS Version: Windows 11
Steps to Reproduce:
1. Name a file with closed or open square brackets []
2. Open in editor and search only in open editors

| bug,search | low | Critical |
2,634,311,452 | pytorch | Adding backend-specific autograd kernel for aten builtin operators breaks `torch.compile` | ### 🐛 Describe the bug
In the following code we add a backend-specific autograd kernel (e.g. AutogradCUDA) for the aten builtin operator "tanh". Originally, it does not have a kernel for "AutogradCUDA", only a "CUDA" kernel; the derivative of tanh is defined in `derivatives.yaml`. We subclass `torch.autograd.Function` and register our implementation with the "AutogradCUDA" key to make sure that our self-defined backward is used simply by calling `torch.tanh`.
```python
import torch
def tanh_forward(x):
    x.data_ptr()  # simulate a custom kernel that typically accesses its inputs' data pointer
    return 2.0 * torch.sigmoid(2.0 * x) - 1.0

def tanh_backward(y, dy):
    y.data_ptr()  # simulate a custom kernel that typically accesses its inputs' data pointer
    return dy * (1.0 - y ** 2)

class Tanh(torch.autograd.Function):
    @staticmethod
    def forward(ctx, A):
        out = tanh_forward(A)
        ctx.save_for_backward(out)
        return out

    @staticmethod
    def backward(ctx, out_grad):
        (out,) = ctx.saved_tensors
        in_grad = tanh_backward(out, out_grad)
        return in_grad

def tanh(A):
    return Tanh.apply(A)
aten_lib = torch.library.Library("aten", "IMPL")
aten_lib.impl("tanh", tanh, "AutogradCUDA")
```
Then if we try using it with `torch.compile`, we get an error.
```python
def f(x):
    return torch.tanh(x)
x = torch.randn(10, device="cuda")
F = torch.compile(f)
out = F(x)
print(out)
```
The main issue is that during dynamo tracing with FakeTensors (which should be fake tensors backed by meta tensors, so that only the Meta kernels are called), the self-defined `tanh` above is invoked with fake tensors reporting a cuda device as inputs.
`FakeTensor(..., device='cuda:0', size=(10,))`
The error message is
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method tanh of type object at 0x7f067d585a40>(*(FakeTensor(..., device='cuda:0', size=(10,)),), **{}):
Cannot access data pointer of Tensor (e.g. FakeTensor, FunctionalTensor). If you're using torch.compile/export/fx, it is likely that we are erroneously tracing into a custom kernel. To fix this, please wrap the custom kernel into an opaque custom op. Please see the following for details: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
```
Since we are calling `.data_ptr()` on a fake tensor.
When we register our self-defined tanh with only the "CUDA" key (replacing the last line with `aten_lib.impl("tanh", tanh, "CUDA")`), the above code runs successfully. Adding a breakpoint in tanh's Meta kernel (`tanh` in `torch/_refs/__init__.py`), we can see that the input argument is a fake tensor on the meta device.
`FakeTensor(..., device='meta', size=(10,))`
This is the main reason why the kernel for `AutogradCUDA` is called in the failing case (since the fake tensor is on a CUDA device instead of the Meta device).
Is there something wrong with the way in which FakeTensors work with aten library, or more specifically, backend-specific autograd kernels? Thank you.
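To make the described behavior concrete, here is a toy model of the dispatch decision (an illustration only, not PyTorch's actual dispatcher): a backend-specific autograd key is selected from the tensor's reported device, so a fake tensor that reports `cuda:0` routes to the `AutogradCUDA` override even though tracing expects only Meta kernels to run.

```python
# Toy dispatch sketch (assumed behavior, not PyTorch internals).
def dispatch(op_table, device, needs_autograd):
    autograd_key = "Autograd" + device.upper()
    if needs_autograd and autograd_key in op_table:
        return op_table[autograd_key]      # backend-specific autograd kernel
    return op_table.get(device, op_table["meta"])

table = {
    "AutogradCUDA": "custom tanh (calls data_ptr -> fails on FakeTensor)",
    "cuda": "native CUDA tanh",
    "meta": "meta tanh (shape-only, FakeTensor-safe)",
}
# A fake tensor that reports device cuda:0 hits the custom kernel...
print(dispatch(table, "cuda", True))
# ...while one on the meta device takes the FakeTensor-safe path.
print(dispatch(table, "meta", True))
```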
### Versions
```text
Collecting environment information...
PyTorch version: 2.5.0a0+git32f585d
Is debug build: True
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-97-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 545.23.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.9 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 50 MiB (40 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] optree==0.10.0
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0a0+git32f585d
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,634,321,598 | pytorch | Less GPU memory used on CentOS than on Ubuntu | ### 🐛 Describe the bug
Versions:
python 3.10
torch 2.1.0
cuda 12.1
Set a breakpoint at the line `x = x.detach()`:
```python
import torch
import torchvision

class BoQ_DinoV2(torch.nn.Module):
    AVAILABLE_MODELS = [
        'dinov2_vits14',
        'dinov2_vitb14',
        'dinov2_vitl14',
        'dinov2_vitg14'
    ]

    def __init__(
            self,
            backbone_name="dinov2_vitb14",
            num_unfrozen_blocks=2,
    ):
        super().__init__()
        self.backbone_name = backbone_name
        self.num_unfrozen_blocks = num_unfrozen_blocks
        # make sure the backbone_name is in the available models
        if self.backbone_name not in self.AVAILABLE_MODELS:
            raise ValueError(f"Backbone {self.backbone_name} is not recognized!"
                             f"Supported backbones are: {self.AVAILABLE_MODELS}")
        self.dino = torch.hub.load('facebookresearch/dinov2', self.backbone_name)
        # self.dino = torch.hub.load(repo_or_dir="/home/ly/hub/dinov2",
        #                            model=backbone_name, trust_repo=True, source='local')
        # freeze the patch embedding and positional encoding
        self.dino.patch_embed.requires_grad_(False)
        self.dino.pos_embed.requires_grad_(False)
        # freeze the first blocks, keep only the last num_unfrozen_blocks trainable
        for i in range(len(self.dino.blocks) - self.num_unfrozen_blocks):
            self.dino.blocks[i].requires_grad_(False)
        self.out_channels = self.dino.embed_dim

    def forward(self, x):
        B, _, H, W = x.shape
        # No need to compute gradients for frozen layers
        with torch.no_grad():
            x = self.dino.prepare_tokens_with_masks(x)
            for blk in self.dino.blocks[: -self.num_unfrozen_blocks]:
                x = blk(x)
            print(x.shape)
        x = x.detach()
        # Last blocks are trained
        with torch.cuda.amp.autocast():
            for blk in self.dino.blocks[-self.num_unfrozen_blocks:]:
                x = blk(x)
        x = x[:, 1:]  # remove the [CLS] token
        # reshape the output tensor to B, C, H, W
        _, _, C = x.shape  # we know C == self.dino.embed_dim, but still...
        x = x.permute(0, 2, 1).contiguous().view(B, C, H // 14, W // 14)
        return x

if __name__ == '__main__':
    input = torch.randn(40, 3, 308, 308).cuda()
    model = BoQ_DinoV2().cuda()
    output = model(input)
    print(output.shape)
```


### Versions
Collecting environment information...
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 36
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
L1d cache: 3.4 MiB (72 instances)
L1i cache: 2.3 MiB (72 instances)
L2 cache: 90 MiB (72 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-35
NUMA node1 CPU(s): 36-71
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.1.0
[pip3] pytorch-metric-learning==2.3.0
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchmetrics==1.5.1
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] Could not collect
# -----------------------------------------------------another machine -----------------------------------------------------
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.2.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.17
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.102.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
Stepping:            7
CPU MHz:             999.963
CPU max MHz:         4000.0000
CPU min MHz:         1000.0000
BogoMIPS:            4400.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            36608K
NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba rsb_ctxsw ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.1.0
[pip3] pytorch-metric-learning==2.3.0
[pip3] torch==2.1.0+cu121
[pip3] torchaudio==2.1.0+cu121
[pip3] torchmetrics==1.1.2
[pip3] torchvision==0.16.0+cu121
[pip3] triton==2.1.0
[conda] Could not collect | needs reproduction,module: memory usage,triaged | low | Critical |
2,634,323,297 | material-ui | [Autocomplete] ListboxProps external ref warning in vs code | ### Steps to reproduce
Link to live example: https://codesandbox.io/p/sandbox/listboxprops-ref-bn67hk?file=%2Fsrc%2FAutocomplete.tsx
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
Unable to build project due to typescript issue/warning
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Windows 11 10.0.22631
Binaries:
Node: 19.1.0 - C:\Program Files\nodejs\node.EXE
npm: 8.19.3 - C:\Program Files\nodejs\npm.CMD
pnpm: Not Found
Browsers:
Chrome: Not Found
Edge: Chromium (129.0.2792.89)
npmPackages:
@emotion/react: 11.10.5 => 11.10.5
@emotion/styled: 11.10.5 => 11.10.5
@mui/base: 5.0.0-alpha.103
@mui/core-downloads-tracker: 5.15.20
@mui/icons-material: ^5.11.9 => 5.15.20
@mui/lab: 5.0.0-alpha.105 => 5.0.0-alpha.105
@mui/material: 5.10.11 => 5.10.11
@mui/private-theming: 5.15.20
@mui/styled-engine: 5.15.14
@mui/system: 5.15.20
@mui/types: 7.2.14
@mui/utils: 5.15.20
@mui/x-data-grid: 5.17.9 => 5.17.9
@types/react: 18.0.24 => 18.0.24
react: 18.2.0 => 18.2.0
react-dom: 18.2.0 => 18.2.0
typescript: 4.8.4 => 4.8.4
```
</details>
**Search keywords**: [Autocomplete] ListboxProps external ref warning in vs code | typescript,component: autocomplete,v5.x,v6.x | low | Minor |
2,634,355,750 | vscode | Settings item loses focus unexpectedly |
Type: <b>Bug</b>
1. Open settings panel
2. Change some items, like Editor: Font Size ... and wait several seconds
3. The focus will be moved to the next settings item
...
When the focus moves unexpectedly, it is very easy to change the wrong setting by mistake. While changing some font settings, this bug made editing them very difficult and error-prone.
VS Code version: Code 1.95.1 (65edc4939843c90c34d61f4ce11704f09d3e5cb6, 2024-10-31T05:14:54.222Z)
OS version: Windows_NT x64 10.0.22621
Modes:
Remote OS version: Linux x64 5.4.241-1-tlinux4-0017.7
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i9-12900K (24 x 3187)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.68GB (37.69GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: 9.135.147.116|
|OS|Linux x64 5.4.241-1-tlinux4-0017.7|
|CPUs|Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz (32 x 0)|
|Memory (System)|62.54GB (51.25GB free)|
|VM|0%|
</details><details><summary>Extensions (34)</summary>
Extension|Author (truncated)|Version
---|---|---
Bookmarks|ale|13.5.0
continue|Con|0.9.225
rust-syntax|dus|0.6.1
font-switcher|eva|4.1.0
vscode-drawio|hed|1.6.6
remote-containers|ms-|0.388.0
remote-ssh|ms-|0.115.0
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
remote-explorer|ms-|0.4.3
ayu|tea|1.0.5
vim|vsc|1.28.1
rust-bundle|1Yi|1.0.0
terminal-to-here|cae|0.0.2
format-json|Cle|1.0.3
rust-syntax|dus|0.6.1
go-interface-annotations|gal|0.1.0
go|gol|0.42.1
todo-tree|Gru|0.0.226
vscode-edit-csv|jan|0.10.0
protobuf|kan|1.1.6
gitless|maa|11.7.2
rainbow-csv|mec|3.12.0
git-graph|mhu|1.30.0
debugpy|ms-|2024.12.0
python|ms-|2024.18.0
vscode-pylance|ms-|2024.11.1
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.10
cpptools-extension-pack|ms-|1.3.0
rust-analyzer|rus|0.3.2172
cody-ai|sou|1.40.2
cmake|twx|0.0.17
(3 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | bug,settings-editor,confirmation-pending | low | Critical |
2,634,374,985 | tensorflow | Create a trainable tensorflow or LiteRT (with signatures) graph from a frozen tensorflow model | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf 2.17
### Custom code
No
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I am working with a frozen TensorFlow model (saved as a .pb file) and am exploring ways to make it trainable again, both in its original TensorFlow format and eventually as a LiteRT model with [signatures to train](https://ai.google.dev/edge/litert/models/ondevice_training).
The goal is to restore training capabilities (such as fine-tuning or continued training) from a model that has already been frozen. I would appreciate guidance on how to approach this scenario.
### Key Points:
#### Frozen TensorFlow Model:
I have a TensorFlow model that was previously trained and saved in its frozen state (as a .pb file). This model no longer contains trainable variables, as they have been converted into constants. However, I would like to continue training this model on new data or fine-tune it for a different task.
**Question 1**: What is the recommended approach to restore a frozen TensorFlow model to a state where it can be trained or fine-tuned again? Specifically, how do I extract the original model architecture and variables from the frozen graph to make it trainable once more?
**Question 2**: If the frozen model has layers that are not needed for retraining, how can I unfreeze and selectively fine-tune only some parts of the model while keeping others frozen?
#### LiteRT Model:
I am also exploring the possibility of taking a LiteRT model that was previously trained & converted to LiteRT, and then restoring its training capability.
**Question 3**: Is it possible to convert a TensorFlow Lite model back into TensorFlow (with the appropriate weights) and add appropriate training signatures and then convert it back to a trainable LiteRT model? If so, what are the best practices, and any potential challenges involved in such a conversion?
**Question 4**: If not, and I want to fine-tune a model that has been converted to LiteRT, would it be better to work with the original TensorFlow model and re-convert it to LiteRT after training, rather than trying to restore training from the LiteRT version?
#### Expected Outcome:
Any guidance on the best approach to make a frozen model trainable again (whether in TensorFlow or TensorFlow Lite) would be greatly appreciated. Specifically, I'm seeking clarity on:
- How to restore training capability from a frozen .pb file in TensorFlow.
- How to manage layer freezing or unfreezing during fine-tuning.
- Whether a TensorFlow Lite model can be retrained, or if it's more efficient to work with the TensorFlow version instead.
### Standalone code to reproduce the issue
```shell
_
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:support,comp:lite,2.17 | low | Critical |
2,634,377,031 | langchain | AzureCosmosDB vector store filter not working? | ### Example Code
```
vectorstore = AzureCosmosDBVectorSearch.from_connection_string(
connection_string=self.uri,
namespace=self.namespace,
embedding=self.embedding_model,
index_name = self.index_name,
embedding_key=self.embedding_key
)
retriever = vectorstore.as_retriever(
search_kwargs={
"k": 10,
"filter": {"username": "james"}
}
)
```
### Description
As per the documentation, I added the filter to the search_kwargs argument; however, the filter is not working and all of the documents are returned.
| โฑญ: vector store | low | Minor |
2,634,391,229 | PowerToys | New+ Better Support for MS Office Template Files (i.e., dotx, dotm, xltx, xltm, potx, potm) | ### Description of the new feature / enhancement
New+ will support using MS Office template files (i.e., dotx, dotm, xltx, xltm, potx, potm) to create MS Office files (i.e., docx, docm, xlsx, xlsm, pptx, pptm) in a selected location.
As it currently stands, if the New+ templates folder includes MS Office template files such as Example.dotx, then it will create an exact copy of Example.dotx in the selected location. When the user double clicks on the new file, it opens another file that must be saved as a regular MS Office file (i.e., Example.docx) for editing.
It would be great if New+ would take Example.dotx and create Example.docx in the selected location. There should be an option to allow users to convert macro-enabled templates (i.e., dotm, xltm, potm) into either a regular MS Office file (i.e., docx, xlsx, pptx) or a macro-enabled MS Office file (i.e., docm, xlsm, pptm).
### Scenario when this would be used?
This is useful when users want the New+ templates folder to include MS Office templates. For example, I have a couple dozen MS Office templates that I use with Word. I would like to set the New+ templates folder to the folder where the MS Office templates are stored and then use New+ to create regular MS Office files from the templates.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,634,428,339 | godot | [Godot 4.4 dev3] Fallback to OpenGL 3 not working on iPad 6th gen | ### Tested versions
- Reproducible in 4.4 dev3
### System information
Ipad 6th gen - iPad OS 17.5.1
### Issue description
As the title says: I tried deploying and opening an app on an iPad 6th gen (iPadOS 17.5.1), but it doesn't work. Manually setting the renderer from `forward_plus` or `mobile` to `gl_compatibility` does work, but that makes the fallback option completely pointless.
Console output on Xcode:
```
Godot Engine v4.4.dev3.official.f4af8201b - https://godotengine.org
Metal 3.1 - Forward Mobile - Using Device #0: Apple - Apple A10 GPU (Apple3)
MTLValidateFeatureSupport, line 6575: error 'Texture Cube Array is only supported on MTLGPUFamilyApple4 and later.'
MTLValidateFeatureSupport:6575: failed assertion 'Texture Cube Array is only supported on MTLGPUFamilyApple4 and later.'
```
### Steps to reproduce
Export iOS project and open it.
### Minimal reproduction project (MRP)
Can be tested with an empty project, as the fallback option in the project settings is enabled by default, so no MRP needed. | bug,platform:ios,topic:rendering | low | Critical |
2,634,437,329 | rust | Inefficient implementation of `PartialEq` for nested (fieldless) enums | I'm not sure if this is the right place for this (might be LLVM to blame), just a bit of inefficient code that I noticed.
https://godbolt.org/z/K57orYj5h
The only difference between the two functions `eq` and `matches` is the use of `==` and `matches!` for the enum comparison. The generated code for `eq` includes two calls to `PartialEq` for `Outer`, whereas the code for `matches` has a much simpler (inline) comparison.
Also, the generated code for `PartialEq` seems very inefficient, given the enum is just a two-byte value that can be directly compared.
If you tinker with the enum definitions it's not hard to cause `eq` to optimise exactly like `matches`.
Example copied here
```rust
#[derive(PartialEq)]
pub enum InnerInner {
One,
Two,
}
#[derive(PartialEq)]
pub enum Inner {
One,
Two,
Const(InnerInner),
}
#[derive(PartialEq)]
pub enum Outer {
One(Inner),
Two,
Three,
Four,
Five(Inner),
Six(Inner),
Seven(Inner),
Eight(Inner),
}
#[no_mangle]
pub fn eq(t: Outer, b: &mut bool) {
let token_type = if t == Outer::One(Inner::One) {
Outer::One(Inner::One)
} else {
Outer::Two
};
*b = token_type == Outer::Two;
}
#[no_mangle]
pub fn matches(t: Outer, b: &mut bool) {
let token_type = if matches!(t, Outer::One(Inner::One)) {
Outer::One(Inner::One)
} else {
Outer::Two
};
*b = matches!(token_type, Outer::Two);
}
``` | A-LLVM,I-slow,A-codegen,T-compiler,I-heavy,C-optimization,A-enum | low | Major |
2,634,474,578 | ollama | Tencent-Hunyuan-Large-MoE-389B-A52B | # https://huggingface.co/tencent/Tencent-Hunyuan-Large
### Model Introduction
With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 52 billion active parameters.
By open-sourcing the Hunyuan-Large model and revealing related technical details, we hope to inspire more researchers with innovative ideas and collectively advance the progress and application of AI technology. We welcome you to join our open-source community to explore and optimize future AI models together!
### Introduction to Model Technical Advantages
#### Model
- **High-Quality Synthetic Data**: By enhancing training with synthetic data, Hunyuan-Large can learn richer representations, handle long-context inputs, and generalize better to unseen data.
- **KV Cache Compression**: Utilizes Grouped Query Attention (GQA) and Cross-Layer Attention (CLA) strategies to significantly reduce memory usage and computational overhead of KV caches, improving inference throughput.
- **Expert-Specific Learning Rate Scaling**: Sets different learning rates for different experts to ensure each sub-model effectively learns from the data and contributes to overall performance.
- **Long-Context Processing Capability**: The pre-trained model supports text sequences up to 256K, and the Instruct model supports up to 128K, significantly enhancing the ability to handle long-context tasks.
- **Extensive Benchmarking**: Conducts extensive experiments across various languages and tasks to validate the practical effectiveness and safety of Hunyuan-Large.
## Benchmark Evaluation
**Hunyuan-Large pre-trained model** achieves the best overall performance compared to both Dense and MoE based
competitors having similar activated parameter sizes. For aggregated benchmarks such as MMLU, MMLU-Pro, and CMMLU,
Hunyuan-Large consistently achieves the best performance, confirming its comprehensive abilities on aggregated tasks.
Hunyuan-Large also shows superior performance in commonsense understanding and reasoning, and classical NLP tasks
such as QA and reading comprehension tasks (e.g., CommonsenseQA, PIQA and TriviaQA).
For the mathematics capability, Hunyuan-Large outperforms all baselines in math datasets of GSM8K and MATH,
and also gains the best results on CMATH in Chinese. We also observe that Hunyuan-Large achieves the overall
best performance in all Chinese tasks (e.g., CMMLU, C-Eval).
| Model | LLama3.1-405B | LLama3.1-70B | Mixtral-8x22B | DeepSeek-V2 | Hunyuan-Large |
|------------------|---------------|--------------|---------------|-------------|---------------|
| MMLU | 85.2 | 79.3 | 77.8 | 78.5 | **88.4** |
| MMLU-Pro | **61.6** | 53.8 | 49.5 | - | 60.2 |
| BBH | 85.9 | 81.6 | 78.9 | 78.9 | **86.3** |
| HellaSwag | - | - | **88.7** | 87.8 | 86.8 |
| CommonsenseQA | 85.8 | 84.1 | 82.4 | - | **92.9** |
| WinoGrande | 86.7 | 85.3 | 85.0 | 84.9 | **88.7** |
| PIQA | - | - | 83.6 | 83.7 | **88.3** |
| NaturalQuestions | - | - | 39.6 | 38.7 | **52.8** |
| DROP | 84.8 | 79.6 | 80.4 | 80.1 | **88.9** |
| ARC-C | **96.1** | 92.9 | 91.2 | 92.4 | 95.0 |
| TriviaQA | - | - | 82.1 | 79.9 | **89.2** |
| CMMLU | - | - | 60.0 | 84.0 | **90.2** |
| C-Eval | - | - | 59.6 | 81.7 | **91.9** |
| C3 | - | - | 71.4 | 77.4 | **82.3** |
| GSM8K | 89.0 | 83.7 | 83.7 | 79.2 | **92.8** |
| MATH | 53.8 | 41.4 | 42.5 | 43.6 | **69.8** |
| CMATH | - | - | 72.3 | 78.7 | **91.3** |
| HumanEval | 61.0 | 58.5 | 53.1 | 48.8 | **71.4** |
| MBPP | **73.4** | 68.6 | 64.2 | 66.6 | 72.6 |
**Hunyuan-Large-Instruct** achieves consistent improvements on most types of tasks compared to LLMs having similar
activated parameters, indicating the effectiveness of our post-training. Delving into the model performance
in different categories of benchmarks, we find that our instruct model achieves the best performance on MMLU and MATH dataset.
Notably, on the MMLU dataset, our model demonstrates a significant improvement, outperforming the LLama3.1-405B model by 2.6%.
This enhancement is not just marginal but indicative of the Hunyuan-Large-Instruct's superior understanding and reasoning
capabilities across a wide array of language understanding tasks. The model's prowess is further underscored in its performance
on the MATH dataset, where it surpasses the LLama3.1-405B by a notable margin of 3.6%.
Remarkably, this leap in accuracy is achieved with only 52 billion activated parameters, underscoring the efficiency of our model.
| Model | LLama3.1 405B Inst. | LLama3.1 70B Inst. | Mixtral 8x22B Inst. | DeepSeekV2.5 Chat | Hunyuan-Large Inst. |
|----------------------|---------------------|--------------------|---------------------|-------------------|---------------------|
| MMLU | 87.3 | 83.6 | 77.8 | 80.4 | **89.9** |
| CMMLU | - | - | 61.0 | - | **90.4** |
| C-Eval | - | - | 60.0 | - | **88.6** |
| BBH | - | - | 78.4 | 84.3 | **89.5** |
| HellaSwag | - | - | 86.0 | **90.3** | 88.5 |
| ARC-C | **96.9** | 94.8 | 90.0 | - | 94.6 |
| GPQA_diamond | **51.1** | 46.7 | - | - | 42.4 |
| MATH | 73.8 | 68.0 | 49.8 | 74.7 | **77.4** |
| HumanEval | 89.0 | 80.5 | 75.0 | 89.0 | **90.0** |
| AlignBench | 6.0 | 5.9 | 6.2 | 8.0 | **8.3** |
| MT-Bench | 9.1 | 8.8 | 8.1 | 9.0 | **9.4** |
| IFEval strict-prompt | **86.0** | 83.6 | 71.2 | - | 85.0 |
| Arena-Hard | 69.3 | 55.7 | - | 76.2 | **81.8** |
| AlpacaEval-2.0 | 39.3 | 34.3 | 30.9 | 50.5 | **51.8** |
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{sun2024hunyuanlargeopensourcemoemodel,
title={Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent},
author={Xingwu Sun and Yanfeng Chen and Yiqing Huang and Ruobing Xie and Jiaqi Zhu and Kai Zhang and Shuaipeng Li and Zhen Yang and Jonny Han and Xiaobo Shu and Jiahao Bu and Zhongzhi Chen and Xuemeng Huang and Fengzong Lian and Saiyong Yang and Jianfeng Yan and Yuyuan Zeng and Xiaoqin Ren and Chao Yu and Lulu Wu and Yue Mao and Tao Yang and Suncong Zheng and Kan Wu and Dian Jiao and Jinbao Xue and Xipeng Zhang and Decheng Wu and Kai Liu and Dengpeng Wu and Guanghui Xu and Shaohua Chen and Shuang Chen and Xiao Feng and Yigeng Hong and Junqiang Zheng and Chengcheng Xu and Zongwei Li and Xiong Kuang and Jianglu Hu and Yiqi Chen and Yuchi Deng and Guiyang Li and Ao Liu and Chenchen Zhang and Shihui Hu and Zilong Zhao and Zifan Wu and Yao Ding and Weichao Wang and Han Liu and Roberts Wang and Hao Fei and Peijie She and Ze Zhao and Xun Cao and Hai Wang and Fusheng Xiang and Mengyuan Huang and Zhiyuan Xiong and Bin Hu and Xuebin Hou and Lei Jiang and Jiajia Wu and Yaping Deng and Yi Shen and Qian Wang and Weijie Liu and Jie Liu and Meng Chen and Liang Dong and Weiwen Jia and Hu Chen and Feifei Liu and Rui Yuan and Huilin Xu and Zhenxiang Yan and Tengfei Cao and Zhichao Hu and Xinhua Feng and Dong Du and Tinghao She and Yangyu Tao and Feng Zhang and Jianchen Zhu and Chengzhong Xu and Xirui Li and Chong Zha and Wen Ouyang and Yinben Xia and Xiang Li and Zekun He and Rongpeng Chen and Jiawei Song and Ruibin Chen and Fan Jiang and Chongqing Zhao and Bo Wang and Hao Gong and Rong Gan and Winston Hu and Zhanhui Kang and Yong Yang and Yuhong Liu and Di Wang and Jie Jiang},
year={2024},
eprint={2411.02265},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.02265},
}
```
| model request | low | Major |
2,634,579,889 | pytorch | DTensor silent failure of index slice | ### ๐ Describe the bug
In the four variants of `nn.init.zeros_` at the bottom, the expected behavior is to see some zeros in the printed matrix. However, when slicing a partial matrix with `[:2]`, none of the weights are zero. The same behavior was observed for `nn.init.constant_`.
Run command: `torchrun --standalone --nnodes=1 --nproc-per-node=1 dtensor_1nodes.py`
```
import torch
from torch.distributed._tensor import DTensor, Shard, Replicate, distribute_tensor, distribute_module, init_device_mesh
import os
import torch.distributed as dist
import torch.nn as nn
world_size = int(os.getenv("WORLD_SIZE", None))
local_rank = int(os.getenv("LOCAL_RANK", None))
global_rank = int(os.getenv("RANK", None))
print(f"world_size: {world_size}, local_rank: {local_rank}, global_rank: {global_rank}")
dist.init_process_group(
backend="cuda:nccl",
init_method=None,
world_size=world_size,
rank=global_rank,
device_id=torch.device(f"cuda:{local_rank}"),
)
torch.cuda.set_device(local_rank)
device_mesh = init_device_mesh("cuda", (1,))
local_tensor = torch.randn((8, 8), requires_grad=True)
rowwise_placement = [Shard(0)]
rowwise_tensor = DTensor.from_local(local_tensor, device_mesh, rowwise_placement)
# nn.init.zeros_(rowwise_tensor[0]) # NotImplementedError: Operator aten.select.int does not have a sharding strategy registered.
nn.init.zeros_(rowwise_tensor[:2]) # silent error: no zeros produced
# nn.init.zeros_(rowwise_tensor) # works
# nn.init.zeros_(rowwise_tensor[:8]) # works
print(rowwise_tensor)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | oncall: distributed,module: dtensor | low | Critical |
2,634,580,690 | transformers | Saving logs to file | ### Feature request
Add a `FileHandler` to the root logger.
### Motivation
Transformers only uses a StreamHandler for logging, which makes debug analysis across different experiments hard. Many other frameworks support logging both to the terminal and to a file.
For example: fairseq: https://github.com/facebookresearch/fairseq/blob/ecbf110e1eb43861214b05fa001eff584954f65a/fairseq_cli/train.py#L63-L65
detectron2:
https://github.com/facebookresearch/detectron2/blob/8d85329aed8506ea3672e3e208971345973ea761/detectron2/utils/logger.py#L108
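A minimal sketch of what this could look like, using only the standard library (the logger name, file name, and format string here are illustrative choices, not a proposed transformers API):

```python
import logging

# Attach a FileHandler under the "transformers" logger namespace so that
# messages emitted there are also persisted to disk. The file name
# "train.log" is an arbitrary choice for this sketch.
logger = logging.getLogger("transformers")
logger.setLevel(logging.INFO)

file_handler = logging.FileHandler("train.log")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s | %(levelname)s | %(name)s | %(message)s")
)
logger.addHandler(file_handler)

logger.info("this line goes to both the terminal and train.log")
```

Transformers could expose this behind a helper (e.g. accepting a path), mirroring the fairseq/detectron2 pattern linked above.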
### Your contribution
I can propose a PR. | Feature request | low | Critical |
2,634,590,824 | ollama | Expose DRY and XTC parameters | Llama.cpp has recently added support for the [DRY](https://github.com/oobabooga/text-generation-webui/pull/5677) and [XTC](https://github.com/oobabooga/text-generation-webui/pull/6335) sampling algorithms (https://github.com/ggerganov/llama.cpp/pull/9702 and https://github.com/ggerganov/llama.cpp/pull/9742, respectively).
DRY and XTC are widely used for creative tasks, and are recommended in the model cards of many popular finetunes.
It would be nice to be able to use DRY and XTC parameters when making API calls to Ollama. All the backend machinery is already present in llama.cpp, so presumably this would require little more than forwarding those parameters from the request to the engine.
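As a sketch of what a client request might look like if these were exposed (purely hypothetical — the option names below mirror llama.cpp's DRY/XTC sampler parameters and are not an existing Ollama API):

```json
{
  "model": "some-creative-finetune",
  "prompt": "Once upon a time",
  "options": {
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "xtc_probability": 0.5,
    "xtc_threshold": 0.1
  }
}
```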
| feature request | low | Major |
2,634,597,767 | godot | FileDialog/EditorFileDialog filter does not allow pressing Enter to enter a folder once it has been autoselected | ### Tested versions
- Reproducible in: 4.4.dev b3bcb2dc1
### System information
Godot v4.4.dev (b3bcb2dc1) - Fedora Linux 41 (KDE Plasma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 565.57.01) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
When you use FileDialog or EditorFileDialog with the **Open File**, **Open Files** or **Save** mode, you cannot select a folder as your chosen file. However, you can still enter it by pressing the same shortcut as you would have to select the folder. This doesn't work when relying on automatic selection from the file filter:
Here, I press <kbd>Enter</kbd> after entering something in the filter that automatically selects the first visible item:
https://github.com/user-attachments/assets/74870af6-3357-4167-8e95-1fde36798c68
If I manually focus the item and press <kbd>Enter</kbd>, the folder will be entered, but here, nothing happens. In **Save** mode, the dialog closes instead (likely because it tries to save in the current folder, even though the user's intent would be to enter the folder instead).
Note that I tested EditorFileDialog with https://github.com/godotengine/godot/pull/98022.
### Steps to reproduce
- Open an EditorFileDialog, e.g. by changing the project icon in the Project Settings.
- Press <kbd>Ctrl + F</kbd> to bring up the file filter and enter a folder's name.
- Press <kbd>Enter</kbd>.
### Minimal reproduction project (MRP)
[test_pr_98022.zip](https://github.com/user-attachments/files/17628207/test_pr_98022.zip)
| bug,topic:editor,usability,topic:gui | low | Minor |
2,634,602,673 | deno | Node: HTTP request fails on `res.complete` | Version: Deno 2.0.2 _(Still failing at 2.1.7)_
# Description
In `npm:@adyen/api-library` they use Node's native `http` to send POST requests and it always fails in Deno with the following error:
```
An error occurred during route handling or page rendering.
141 | res.on("end", () => {
142 | if (!res.complete) {
> 143 | reject(new Error("The connection was terminated while the message was still being sent"));
| ^
144 | }
145 | if (res.statusCode && (res.statusCode < 200 || res.statusCode >= 300)) {
146 | try {
Error: The connection was terminated while the message was still being sent
at IncomingMessageForClient.<anonymous> (file:///Users/vicary/Library/Caches/deno/npm/registry.npmjs.org/@adyen/api-library/20.0.0/lib/src/httpClient/httpURLConnectionClient.js:143:32)
at IncomingMessageForClient.emit (ext:deno_node/_events.mjs:393:28)
at endReadableNT (ext:deno_node/_stream.mjs:3210:16)
at processTicksAndRejections (ext:deno_node/_next_tick.ts:36:15)
at runNextTicks (ext:deno_node/_next_tick.ts:75:3)
at eventLoopTick (ext:core/01_core.js:182:21)
```
Their [source code](https://github.com/Adyen/adyen-node-api-library/blob/fd0f0f45a0d2c24ba30458f3d49fc2ae058cd049/src/httpClient/httpURLConnectionClient.ts#L151) is checking `res.complete` the same way as [Deno recommended](https://docs.deno.com/api/node/http/~/IncomingMessage.prototype.complete).
Running the same script in Node.js works without problems; is this a Node compatibility issue?
## Workarounds
Wrapping the callback of `res.on("end", () => { ... })` in a `queueMicrotask()` gives enough time for the request to finish.
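A minimal sketch of that workaround (`handleEnd` is a hypothetical helper name, and `res` here is a stand-in object rather than a real `http.IncomingMessage`): deferring the `res.complete` check by one microtask gives Deno's stream machinery time to flag the message as complete before the check runs.

```javascript
// Sketch of the workaround described above. Deferring the check with
// queueMicrotask() lets Deno finish marking the stream complete before
// `res.complete` is inspected.
function handleEnd(res, resolve, reject) {
  queueMicrotask(() => {
    if (!res.complete) {
      reject(new Error("The connection was terminated while the message was still being sent"));
      return;
    }
    resolve(res);
  });
}

// Usage inside the request would look roughly like:
//   res.on("end", () => handleEnd(res, resolve, reject));
```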
Since we cannot override NPM modules using vendor, publishing a forked version may be required. | bug,node compat | low | Critical |
2,634,618,217 | vscode | Notebook Multicursor -- support for case sensitive and whole word toggles |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.1
- OS Version: Windows 11 Pro, Version 23H2
Steps to Reproduce:
Use Jupyter Extension Pack (All Microsoft)
1. Ctrl + F works fine with the case-sensitivity and word-boundary toggles.
2. Ctrl + D only applies case-insensitive, no-word-boundary matching.
3. Also, the pop-up in the top left that used to appear with Ctrl + D, where I could previously toggle this behavior, no longer shows up.

Find behavior works just fine.

Ctrl + D does not respect these toggles: it also selected ma*x* and 2*x*2. | feature-request,notebook-cell-editor | low | Critical |
2,634,649,590 | pytorch | [Distributed][collectiveCoalesced] Why does check_gpu_tensors_same_device only run in allreduce_coalesced? | ### ๐ Describe the bug

The function check_gpu_tensors_same_device is only called in allreduce_coalesced; it does not appear in either allgather_into_tensor_coalesced or reduce_scatter_tensor_coalesced. What is the reason?
Also, this function checks the scalar_type of each tensor in the tensor vector, as shown in the picture below:

I don't understand why the scalar_type of every tensor must be consistent in allreduce_coalesced, given that each allreduce is performed independently on each tensor in the tensor list.
### Versions
torch 2.3/2.4/2.5
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,634,680,485 | angular | Input setter coercion docs should mention input signals with value tranforms | ### Describe the problem that you experienced
The [Input setter coercion section of the Template type checking page](https://angular.dev/tools/cli/template-typecheck#input-setter-coercion) explains the typechecking problem of boolean attributes like `disabled`. However, the solutions it provides are outdated
- they first mention `ngAcceptInputType_` prefix
- then they mention that it is deprecated in favor of getter/setter pairs with correct types
However, they do not mention how to fix it with a [value transform on a signal input](https://angular.dev/guide/signals/inputs#value-transforms).
It would also be nice to mention the [booleanAttribute](https://angular.dev/api/core/booleanAttribute) helper somewhere. The Value transforms section basically repeats that helper without mentioning that you can just import it.
### Enter the URL of the topic with the problem
https://angular.dev/tools/cli/template-typecheck#input-setter-coercion
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
_No response_
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much as detail as you can
_No response_ | area: docs | low | Critical |
2,634,685,040 | vscode | TreeView: MaxCallStackError - Nesting | repo: https://github.com/RedCMD/TreeViewMaxCallStackError
<details><summary>expand siblings</summary>
```rust
ERR Maximum call stack size exceeded: RangeError: Maximum call stack size exceeded
at Roi.U (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:265:13925)
at Roi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:265:12904)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:263:45606
at Array.forEach (<anonymous>)
at sIi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:263:45593)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:7637
at D9.bufferEvents (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:30:3825)
at JAi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:7613)
at JAi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:72611)
at OY.value (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:3929)
```
</details>
<details><summary>collapse siblings</summary>
```rust
ERR Cannot read properties of undefined (reading 'dragStartDisposable'): TypeError: Cannot read properties of undefined (reading 'dragStartDisposable')
at Roi.U (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:265:13114)
at Roi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:265:12904)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:263:45606
at Array.forEach (<anonymous>)
at sIi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:263:45593)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:7637
at D9.bufferEvents (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:30:3825)
at JAi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:7613)
at JAi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:72611)
at OY.value (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:3929)
```
</details>
<details><summary>expand nesting</summary>
```rust
ERR Maximum call stack size exceeded: RangeError: Maximum call stack size exceeded
at Object.g [as map] (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:8:16252)
at RJi.M (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:37404)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:37427
at Object.g [as map] (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:8:16289)
at g.next (<anonymous>)
at Object.g [as map] (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:8:16276)
at g.next (<anonymous>)
at MAi.x (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:38775)
at MAi.x (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:38796)
at MAi.x (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:38796)
```
</details>
<details><summary>collapse nesting</summary>
```rust
ERR ListError [TreeView] Invalid index 130001: Error: ListError [TreeView] Invalid index 130001
at JAi.setFocus (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:9367)
at JAi.setFocus (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:72954)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:1473
at D9.bufferEvents (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:30:3825)
at Jb.setFocus (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:1339)
at KAi.v (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:71726)
at OY.value (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:25:6646)
at x.B (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:30:747)
at x.fire (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:30:965)
at OY.value (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:25:6291)
```
</details>
Also notice: after all nested nodes have rendered, scrolling down causes VS Code to freeze.

2nd repo: install JSON TextMate [extension](https://marketplace.visualstudio.com/items?itemName=RedCMD.tmlanguage-syntax-highlighter)
Create CPP file with text:
```cpp
typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<typename::std::add_pointer<int>::type>::type>::type>::type>::type>::type>::type>::type>::type ptr{nullptr};
```
right click => `Show TextMate Calling Stack`
wait for the file to be parsed
wait even longer for VSCode to attempt to render the tree (notice memory usage going up)
<details><summary>Max Call Stack error</summary>
```rust
ERR Maximum call stack size exceeded: RangeError: Maximum call stack size exceeded
at Roi.U (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:265:13925)
at Roi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:265:12904)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:263:45606
at Array.forEach (<anonymous>)
at sIi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:263:45593)
at http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:7637
at D9.bufferEvents (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:30:3825)
at JAi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:308:7613)
at JAi.splice (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:416:72611)
at OY.value (http://localhost:3000/static/build/out/vs/workbench/workbench.web.main.internal.js:417:3929)
```
</details>
- VS Code Version: 1.95.1 (latest [insiders](https://insiders.vscode.dev/))
- OS Version: Windows 11
related: https://github.com/microsoft/vscode/issues/230361
assign @alexr00 ?
| bug,tree-widget | low | Critical |
2,634,686,738 | kubernetes | kubelet causes pod to go unready when you stop kubelet, sleep 40s, then start kubelet | ### What happened?
systemctl stop kubelet first,
sleep 40
systemctl start kubelet
`kubectl get pod -w` will show the pod becoming unready, then becoming ready again immediately
### What did you expect to happen?
The pod status should not change.
### How can we reproduce it (as minimally and precisely as possible)?
systemctl stop kubelet first,
sleep 40
systemctl start kubelet
### Anything else we need to know?

### Kubernetes version
1.24.17
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/support,sig/node,needs-triage | low | Major |
2,634,743,928 | godot | Incorrect value loaded from .tres file for instances of custom classes. | ### Tested versions
v4.3.stable.arch_linux
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Fri, 01 Nov 2024 03:30:41 +0000 - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3090 (nvidia; 565.57.01) - AMD Ryzen 7 5800X 8-Core Processor (16 Threads)
### Issue description
I initially raised this issue here: https://github.com/derkork/godot-resource-groups/issues/13 as I thought it must have been something caused by the plugin when I failed to reproduce it directly using `ResourceLoader`. I since tried recreating the problem in a clean project using the plugin and failed to reproduce and upon further troubleshooting I believe again that it is an engine issue.
I won't include as much context here as some of it isn't relevant, but you can view the initial issue for more detailed context.
Essentially I have the following custom resource types, formatted as `Class -> ParentClass`
`Item -> Resource` This class is also fairly lightweight. It is used to wrap different data types representing things that are stored in an inventory. The two main types are `Spell` and `SpellMod`.
`SpellItem -> Item` This is an item with a `spell: PackedScene` property that holds the code / data relevant to actually creating an instance of a spell.
`SpellModItem -> Item` This is an item with a `spell_mod: SpellMod` property.
`SpellMod -> Resource` This is a pretty lightweight abstract class that describes some data with a `mod(spell: Spell)` method.
`FlightPathMod -> SpellMod` An example of a SpellMod subtype that actually has an implementation and some additional data. In this case it will change the flight path of a projectile (eg a fireball). All `FlightPathMod` instances will share the same script, and different mods are created by changing the other data stored in this resource.
I have two different FlightPathMod resources: `wave_flight_mod.tres` and `zig_zag_flight_mod.tres` with different data for calculating the new flight path, saved as external resources (using built-in resource types, also saved in their own `.tres` files and loaded through the editor when creating these resource files).
I then also have a SpellModItem resource for each of these mods that keeps a reference to them (set up through the inspector by loading into @export vars as above) as well as some item specific metadata.
The issue is that once I save these SpellModItem resources and reload the project, the editor seems to load them incorrectly. If I open `wave_flight_mod.tres` by double clicking it in the editor's File System window, it will display the data in the inspector, but with the `spell_mod` property incorrectly being the resource at `res://.../zig_zag_flight_mod.tres`. When I view `wave_flight_mod.tres` in a text editor, the path appears to be `res.../wave_flight_mod.tres` as expected. You can see in the screenshot below how the same file is being viewed in the inspector and vscode yet the values are different.

I can try resetting the values and saving again, but the problem reappears when I reload the editor. If I don't reload the project and attempt to run the game, the data looks correct in memory (viewed in the debugger) when first loaded from disk, but once the data is copied into another var the reference breaks. I am not sure if this is happening when the second mod gets loaded. Stepping through the debugger line-by-line makes it seem like that is not the case but they are loaded asynchronously so I suspect the tools are being a bit misleading here.
### Steps to reproduce
I outlined the steps I take to reproduce the issue within my project above, but I have not been able to successfully reproduce it in a fresh project, even after a few hours of trying. If anyone has suggestions on what might be causing issues I am happy to try updating my attempt at reproducing the issue to help create an example.
I understand this makes it difficult to action this bug. I am mostly hoping that someone can at least give me some advice on what to try to help discover the root cause.
### Minimal reproduction project (MRP)
See above. | bug,topic:core | low | Critical |
2,634,763,165 | ui | [feat]: Create new popup button component https://www.easyui.pro/component | ### Feature description
Kindly add this type of component:

https://www.easyui.pro/component
### Affected component/components
https://www.easyui.pro/component
### Additional Context
Example: https://www.easyui.pro/component
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,634,781,645 | kubernetes | Document the Job positive and negative criteria evaluation orders | ### What would you like to be added?
I would like to document the Job positive and negative criteria evaluation orders in the following 3 levels:
- [x] [JobController (KubeControllerManager)](https://github.com/kubernetes/kubernetes/blob/bc79d3ba87b8b3c4b7c68f26cdfcaa35654d96ac/pkg/controller/job/job_controller.go) Comments: https://github.com/kubernetes/kubernetes/pull/128569
- [ ] [Job API](https://github.com/kubernetes/kubernetes/blob/bc79d3ba87b8b3c4b7c68f26cdfcaa35654d96ac/pkg/apis/batch/types.go) Comments
- [ ] [Job API Documents](https://kubernetes.io/docs/concepts/workloads/controllers/job/)
/sig apps
### Why is this needed?
As reported in https://github.com/kubernetes/kubernetes/issues/117303 and discussed in https://github.com/kubernetes/kubernetes/pull/121863#issuecomment-2447872050, when the Job meets both the positive (`.spec.completions`, `.spec.successPolicy`, among others) and negative (`.spec.backoffLimit`, `.spec.activeDeadlineSeconds`, among others) criteria at the same time, the negative ones take precedence, and the resulting Job status looks surprising. However, the evaluation orders and status are Job specifications, not bugs.
It would be better to document the criteria evaluation steps to clarify this behavior. Additionally, users should rely on the `.status.completionTime` instead of `.status.succeeded` to verify whether the Job has succeeded, since `.status.succeeded` is not a terminal state; it is just an intermediate state for the JobController.
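To make that last recommendation concrete, here is a small client-side sketch in plain JavaScript. The helper name and status literals are illustrative; only the status field names follow the Job API:

```javascript
// `.status.succeeded` is only an intermediate count, while
// `.status.completionTime` is set once the Job is terminally successful.
function jobSucceeded(status) {
  // Deliberately NOT `status.succeeded >= completions`: that count can be
  // observed transiently even when a negative criterion later fails the Job.
  return Boolean(status && status.completionTime);
}

console.log(jobSucceeded({ succeeded: 3 }));  // false: still intermediate
console.log(jobSucceeded({
  succeeded: 3,
  completionTime: "2024-11-05T00:00:00Z",
}));                                          // true: terminal success
```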
| kind/documentation,kind/feature,sig/apps,priority/important-longterm,sig/docs,triage/accepted | low | Critical |
2,634,825,761 | deno | Skip localhost when evaluating HSTS upgrades | Not sure if applicable, but just in case. See https://github.com/whatwg/fetch/pull/1781 for the change to Fetch. | web | low | Minor |
2,634,832,037 | kubernetes | Failure cluster [44fa5cdb...]: TestFrontProxyConfig/WithoutUID: | ### Failure cluster [44fa5cdb3ee6c1ca81f2](https://go.k8s.io/triage#44fa5cdb3ee6c1ca81f2)
##### Error text:
```
Failed;Failed;
=== RUN TestFrontProxyConfig/WithoutUID
testserver.go:581: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
testserver.go:401: runtime-config=map[api/all:true]
testserver.go:402: Starting kube-apiserver on port 42541...
...
apiserver_test.go:350: expected UID: "24d7c521-dceb-47e4-aa29-ceabe7c6320a", got: ""
apiserver_test.go:353: expected name: "system:serviceaccount:integration-test-front-proxy-config:wardle-client-sa", got: "system:kube-aggregator"
apiserver_test.go:356: expected groups: [system:serviceaccounts system:serviceaccounts:integration-test-front-proxy-config system:authenticated], got: [system:authenticated]
apiserver_test.go:359: expected extra to be map[authentication.kubernetes.io/credential-id:[JTI=97c7bbc2-7a1b-4e58-9f10-ac3d780144ba]], but got map[]
apiserver_test.go:385: the request is in fact not being tested
```
#### Recent failures:
[10/28/2024, 12:52:08 AM ci-kubernetes-integration-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1850686612854280192)
[10/23/2024, 12:47:59 PM ci-kubernetes-integration-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1849039745846349824)
/kind failing-test
/kind flake
/sig api-machinery
| sig/api-machinery,kind/flake,kind/failing-test,triage/accepted | low | Critical |
2,634,833,164 | ant-design | TreeSelect ้จๅๅฑๆงๅๅ่ฝๆน่ฟ | ### What problem does this feature solve?
There is an incorrect TS type annotation for `treeTitleRender` in the TreeSelect props, as shown below:
<img width="1442" alt="image" src="https://github.com/user-attachments/assets/d38d6c9c-9fa5-46d7-9ca2-b6be33a9b3a9">
There is a question worth thinking about: in the Select component, the option type is modeled as label, value, ... In Tree and TreeSelect it should likewise be label, value, ... rather than title, value, ...
The name `treeTitleRender` is also unreasonable; shouldn't it be `treeNodeRender`?
The TreeSelect component is missing a very useful feature that the Select component has: `labelRender`. It would give users a chance, when rendering a selected value after a search, to handle the case where the initially loaded data does not contain the corresponding option/node, instead of just displaying the raw value when no match exists:
<img width="678" alt="image" src="https://github.com/user-attachments/assets/30aa9d03-b665-4804-acad-dbd533483f72">
### What does the proposed API look like?
Ideally:
Rename `treeTitleRender` to `treeNodeRender` and fix the corresponding TS type error.
Add `labelRender` to handle displaying something sensible when no matching option can be found in the existing data, and also to allow custom rendering of the selected item (if needed).
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Major |
2,634,856,482 | next.js | SearchParams Sync Issue During First Effect Execution with window.history.replaceState() | ### Link to the code that reproduces this issue
https://github.com/yongholeeme/reproduction-bug-history-api-in-effect-nextjs/blob/main/Bug.jsx
### To Reproduce
> I created and deployed reproduction: https://reproduction-bug-history-api-in-effect-nextjs.vercel.app
> 1. When query `foo` is over 5, call [`window.history.replaceState()`](https://nextjs.org/docs/app/building-your-application/routing/linking-and-navigating#using-the-native-history-api) in an effect
> 2. When `searchParams` is changed, call `console.log()` to watch `searchParams`
> 3. When the button is clicked, query `foo` is updated to a random number (0 to 9)
1. When you open https://reproduction-bug-history-api-in-effect-nextjs.vercel.app?foo=100, `foo`'s value should change to `bar` according to step 1 ("When query `foo` is over 5, call `window.history.replaceState()` in an effect")
2. The URL does change to `?foo=bar`, but `console.log()` is not executed in the useEffect that has `searchParams` as a dependency.
3. Even after calling `router.refresh()`, the `?foo=100` query comes back.
https://github.com/user-attachments/assets/593b9697-e790-4cea-a877-1e824671a6a2
### Current vs. Expected behavior
Wherever `window.history.replaceState()` is called, it should be synchronized with the Router.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:37:13 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T8112
Available memory (MB): 24576
Available CPU cores: 8
Binaries:
Node: 20.13.1
npm: 10.5.2
Yarn: N/A
pnpm: 9.1.1
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: N/A
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed)
### Additional context
_No response_ | bug,Navigation | low | Critical |
2,634,882,854 | ant-design | Dropdown ไธๆ่ๅ็ปไปถitemๆฐ้่ฟๅคๆถๆต่งๅจ็ชๅฃไผ้ฎๆกUI็ไธๅฐๅฎๆด่ๅ | ### Reproduction link
[](https://codesandbox.io/p/sandbox/rqg8nf)
### Steps to reproduce
Click the "More" button in the list to show the dropdown menu.
### What is expected?
The dropdown menu content should be displayed in full, without stretching the page.
### What is actually happening?
When the dropdown menu is displayed below the "More" button, it stretches the page and adds a scrollbar, leaving extra blank space at the bottom of the page.

ๅฝๆพ็คบๅจโๆดๅคโๆ้ฎไธ้ขๆถ๏ผไธๆ่ๅๆ้จๅ่ขซๆต่งๅจ็ชๅฃ้ฎๆกๆใ

| Environment | Info |
| --- | --- |
| antd | 5.21.6 |
| React | 18.3.1 |
| System | MacOS 10.15.7 |
| Browser | Chrome 128, Firefox 131 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Major |
2,634,885,622 | langchain | Bug in langchain_community/callbacks/tracers/wandb.py: 'NoneType' object has no attribute 'items' | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
python code
```python
import os
import wandb
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import OpenAI
from contextvars import ContextVar
from typing import (
Optional,
)
from langchain_core.tracers.context import register_configure_hook
from langchain_community.callbacks.tracers.wandb import WandbTracer
os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
os.environ["WANDB_PROJECT"] = 'set your wand project'
wandb_tracing_callback_var: ContextVar[Optional[WandbTracer]] = ContextVar(
"tracing_wandb_callback", default=None
)
register_configure_hook(
wandb_tracing_callback_var, True, WandbTracer, "LANGCHAIN_WANDB_TRACING"
)
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False
)
agent.run("What is 2 raised to .123243 power?")
wandb.finish()
```
### Error Message and Stack Trace (if applicable)
`wandb: ERROR WARNING: Failed to serialize model: 'NoneType' object has no attribute 'items'`
### Description
I'm trying to use the langchain wandb integration, and there is a bug in the module:
`langchain_community/callbacks/tracers/wandb.py` Line: 202 (version langchain-community@0.3.3)
```python
# def transform_run(run: Dict[str, Any]) -> Dict[str, Any]:
# ...
# transformed_dict = transform_serialized(run)
# serialized = transformed_dict.pop("serialized")
for k, v in serialized.items():
# transformed_dict[k] = v
# ...
```
where the object `serialized` is None!
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.3.3
> langsmith: 0.1.132
> langchain_google_vertexai: 2.0.7
> langchain_openai: 0.2.5
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> google-cloud-aiplatform: 1.70.0
> google-cloud-storage: 2.18.2
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.54.0
> orjson: 3.10.5
> packaging: 23.2
> pydantic: 2.7.4
> pydantic-settings: 2.5.2
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.30
> tenacity: 8.4.1
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,634,915,043 | vscode | Preview tab history navigation | Hi, is there a way to navigate the history stack of a preview tab? When using a preview tab, I'd often click through multiple files to gain context of some code flow, but I'm unsure which file I actually want to keep open. Eventually, I realize it's one of the files earlier in that history stack that I actually care about + want to keep open, but I can't quite remember what the file name/location was. It would be great then to have a way to go back through the preview history stack.
e.g.
Given files opened in a preview tab: a.js -> b.js -> c.js
I can go back twice in that tab somehow to land on a.js again. Likewise, I should be able to go forward as well.
Note:
Just trying to reopen the feature request that was closed due to not enough votes; see the related feature request: https://github.com/microsoft/vscode/issues/166194 | feature-request,workbench-history | low | Minor |
2,634,918,596 | vscode | Extensions editor - unexpected drop shadow | 
| polish,extensions-editor | low | Minor |
2,634,923,178 | deno | Deno issue with fflate workers | Version: Deno 2.0.4.
Hi!
Here is a copy of the bug report I filed on the fflate issue tracker (https://github.com/101arrowz/fflate/issues/226).
The maintainer thinks the error I am facing is bound to the Deno runtime.
Do you have any idea how to solve this?
P.S. Actually, in a recent test it worked with 151 KB but not with 1 MB. (This doesn't change the issue, since it is bound to the algorithm fflate uses to decide whether it should use a worker or not.)
---
Hi!
I am trying to use the async version of `zip` function, using `deno` runtime.
Unfortunately, my code returns an error when trying to upload a file larger than 150 KB (but works perfectly fine with smaller files).
Here are the test files:
[fakefile_150KB.txt](https://github.com/user-attachments/files/17611371/fakefile_150KB.txt)
[fakefile_151KB.txt](https://github.com/user-attachments/files/17611375/fakefile_151KB.txt)
Do you have any clue how to solve this issue?
Thanks.
**How to reproduce**
The code used:
```typescript
import * as fs from "node:fs";
import mime from "mime";
import * as fflate from "fflate";
const filePath = "./bench/fakefile_151KB.txt";
const file = new File([fs.readFileSync(filePath)], filePath, {
type: mime.getType(filePath) as string,
});
const fileBytes = await file.bytes();
fflate.zip({
[file.name]: fileBytes,
}, (err, _data) => {
if (err) console.error(err);
});
```
**The problem**
```console
error: Uncaught (in worker "[worker eval]") SyntaxError: Unexpected end of input
Warning Couldn't format source line: Column 149 is out of bounds (source may have changed at runtime)
at <anonymous> (data:text/javascript,;u8=Uint8Array;u16=U......u16 "map": index -> :1:149)
[Object: null prototype] {
message: "Uncaught SyntaxError: Unexpected end of input",
fileName: 'data:text/javascript,;u8=Uint8Array;u16=Uint16Array;i32=Int32Array;hMap=function (cd, mb, r) { var s = cd.length; // index var i = 0; // u16 "map": index -> #%20of%20codes%20with%20bit%20length%20=%20index%20%20%20%20var%20l%20=%20new%20u16(mb);%20%20%20%20//%20length%20of%20cd%20must%20be%20288%20(total%20#%20of%20codes)%20%20%20%20for%20(;%20i%20%3C%20s;%20++i)%20{%20%20%20%20%20%20%20%20if%20(cd[i])%20%20%20%20%20%20%20%20%20%20%20%20++l[cd[i]%20-%201];%20%20%20%20}%20%20%20%20//%20u16%20%22map%22:%20index%20-%3E%20minimum%20code%20for%20bit%20length%20=%20index%20%20%20%20var%20le%20=%20new%20u16(mb);%20%20%20%20for%20(i%20=%201;%20i%20%3C%20mb;%20++i)%20{%20%20%20%20%20%20%20%20le[i]%20=%20(le[i%20-%201]%20+%20l[i%20-%201])%20%3C%3C%201;%20%20%20%20}%20%20%20%20var%20co;%20%20%20%20if%20(r)%20{%20%20%20%20%20%20%20%20//%20u16%20%22map%22:%20index%20-%3E%20number%20of%20actual%20bits,%20symbol%20for%20code%20%20%20%20%20%20%20%20co%20=%20new%20u16(1%20%3C%3C%20mb);%20%20%20%20%20%20%20%20//%20bits%20to%20remove%20for%20reverser%20%20%20%20%20%20%20%20var%20rvb%20=%2015%20-%20mb;%20%20%20%20%20%20%20%20for%20(i%20=%200;%20i%20%3C%20s;%20++i)%20{%20%20%20%20%20%20%20%20%20%20%20%20//%20ignore%200%20lengths%20%20%20%20%20%20%20%20%20%20%20%20if%20(cd[i])%20{%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20//%20num%20encoding%20both%20symbol%20and%20bits%20read%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20var%20sv%20=%20(i%20%3C%3C%204)%20|%20cd[i];%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20//%20free%20bits%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20var%20r_1%20=%20mb%20-%20cd[i];%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20//%20start%20value%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20var%20v%20=%20le[cd[i]%20-%201]++%20%3C%3C%20r_1;%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20//%20m%20is%20end%20value%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20for%20(var%20m%20=%20v%20|%20((1%20%3C%3C%20r_1)%20-%201);%20v%20%3C=%20m;%20++v)%20{%20
[elided here: ~20 KB of URL-encoded, minified fflate source that the error message embeds verbatim]
... 17699 more characters,
lineNumber: 1,
columnNumber: 149
}
```
I am on Linux Pop!_OS, using deno 2.0.4 runtime.
fflate is in version 0.8.2.
| bug,node compat,node API,triage required ๐ | low | Critical |
2,634,939,210 | pytorch | cmake libtorch_jni.so, bug: 'fp16/psimd.h' file not found | ### ๐ Describe the bug
Hello, since the AAR package in the repository is not compatible with the current version of the NDK, I'm compiling libtorch_jni.so myself and encountered the following error. I've confirmed that there is no such header file under the fp16 subdirectory. What might be causing this, and how can it be resolved? Thank you.
```
bash ./scripts/build_pytorch_android.sh
```

```
pytorch/third_party/NNPACK/src/psimd/blas/shdotxf.c:5:10: fatal error: 'fp16/psimd.h' file not found
#include <fp16/psimd.h>
         ^~~~~~~~~~~~~~
```
I searched third_party/FP/include/fp16/* and did not find psimd.h. Thank you.
### Versions
python
cc @seemethere @malfet @osalpekar @atalman | needs reproduction,module: binaries,oncall: mobile | low | Critical |
2,634,952,955 | PowerToys | Keyboard Manager Shortcut Mapping works only once | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Try to use a new mapped shortcut twice
### โ๏ธ Expected Behavior
Shortcut should function every time
### โ Actual Behavior
[PowerToysReport_2024-11-05-10-40-27.zip](https://github.com/user-attachments/files/17630395/PowerToysReport_2024-11-05-10-40-27.zip)
The Keyboard Manager shortcut works only once.
When trying to use it a second time, it does not function anymore until PowerToys is killed and the app is restarted.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Minor |
2,634,987,357 | neovim | for quickfix, `vim.api.nvim_buf_attach()` `on_bytes` doesn't get called on `:caddexpr` | ### Problem
I'm trying to use a plugin to colorize ANSI escape sequences in the quickfix window. The plugin needs to get a callback when the quickfix window updates. However `vim.api.nvim_buf_attach()` `on_bytes` doesn't get called on `:caddexpr`.
(Plugin issue at <https://github.com/m00qek/baleia.nvim/issues/27>).
### Steps to reproduce
The `minimal.lua`:
```lua
vim.cmd('copen')
for _, buffer in ipairs(vim.api.nvim_list_bufs()) do
if vim.api.nvim_get_option_value('filetype', { buf = buffer }) == 'qf' then
vim.api.nvim_buf_attach(buffer, false, {
on_bytes = function()
print('on_bytes')
end,
})
end
end
```
- First run `/usr/bin/nvim --clean -u minimal.lua`.
- Inside *Neovim*, run `:caddexpr "foo"`.
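In case it is useful for triage, a possible workaround sketch (my own untested suggestion, not from the plugin) is to react to quickfix updates with an autocmd, since `QuickFixCmdPost` does fire after `:caddexpr`:

```lua
-- Runs inside Neovim only: observe quickfix updates via an autocmd
-- instead of nvim_buf_attach's on_bytes callback.
vim.api.nvim_create_autocmd('QuickFixCmdPost', {
  pattern = '*',
  callback = function()
    print('quickfix updated')
  end,
})
```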
### Expected behavior
The `on_bytes` callback runs.
### Nvim version (nvim -v)
0.10.2
### Vim (not Nvim) behaves the same?
not relevant
### Operating system/version
Arch Linux kernel 6.11.6
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
wezterm
### Installation
system package manager | bug,api,quickfix | low | Minor |
2,634,991,892 | flutter | [a11y][VoiceOver/Talkback] Dismissable ModalBarrier is not announced as a button | ### Steps to reproduce
1. Have an app with a popupMenu
2. Enable VoiceOver / Talkback
3. Swipe until you get Dismiss Menu semantic
### Expected results
Element should be labelled as button by VoiceOver/Talkback as it has an onTap action.
### Actual results
Element is not labelled as button
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_localizations/flutter_localizations.dart';
void main() {
runApp(MaterialApp(
title: 'Flutter Demo',
localizationsDelegates: const [
GlobalMaterialLocalizations.delegate,
GlobalWidgetsLocalizations.delegate,
GlobalCupertinoLocalizations.delegate,
],
debugShowCheckedModeBanner: false,
supportedLocales: <Locale>[
Locale('en', 'US'), // English
],
home: MyApp(),
));
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: TextButton(
child: Text('Hello, World!'),
onPressed: () {
showMenu<String>(
shape: const RoundedRectangleBorder(borderRadius: BorderRadius.all(Radius.circular(16.0))),
context: context,
items: [PopupMenuItem(child: Text('Menu1')), PopupMenuItem(child: Text('Menu2'))],
position: RelativeRect.fromLTRB(
0,
0,
0,
0,
),
);
},
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/7c9983ab-13eb-4886-9733-9209744b3474
<img width="1115" alt="Capture d'écran 2024-11-05 à 13 55 01" src="https://github.com/user-attachments/assets/3def05d3-35d8-4401-b9dc-514881568877">
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.19.3, on macOS 14.6.1 23G93 darwin-arm64, locale fr-FR)
โข Flutter version 3.19.3 on channel stable at /Users/jeremy_bastien/fvm/versions/3.19.3
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision ba39319843 (8 months ago), 2024-03-07 15:22:21 -0600
โข Engine revision 2e4ba9c6fb
โข Dart version 3.3.1
โข DevTools version 2.31.1
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/jeremy_bastien/Library/Android/sdk
โข Platform android-34, build-tools 34.0.0
โข ANDROID_HOME = /Users/jeremy_bastien/Library/Android/sdk
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
โข All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode 15/Xcode.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[✓] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
โข android-studio-dir = /Applications/Android Studio.app
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.4)
โข IntelliJ at /Applications/IntelliJ IDEA.app
โข Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] IntelliJ IDEA Ultimate Edition (version 2024.1.5)
โข IntelliJ at /Users/jeremy_bastien/Applications/IntelliJ IDEA Ultimate.app
โข Flutter plugin version 81.0.2
โข Dart plugin version 241.18968.26
[✓] Connected device (2 available)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.6.1 23G93 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 130.0.6723.92
[✓] Network resources
โข All expected network resources are available.
```
</details>
| framework,a: accessibility,has reproducible steps,P3,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | low | Critical |
2,635,029,716 | PowerToys | Keyboard manager: A bunch of remapped shortcuts don't work anymore. | ### Microsoft PowerToys version
v0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
For at least a few months now I'm unable to use most of my remappings from this list:

Only the one that produces "<" works as expected. The one that should give ">" works the wrong way: instead of pressing "Ctrl + ." for it (which does nothing), I can get it via "Ctrl + Shift (Left) + ,"... The rest don't work at all, but they all used to work when I created them a year or so ago, and at least a few times in between. I haven't changed my keyboard, and the language settings have remained the same as well (only English and Estonian languages, but both have the Estonian keyboard mapped). So what has changed and what should I do? Remapping, or switching between Ctrl and Ctrl (Left) or Alt and Alt (Left), has not helped.
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,635,099,738 | vscode | Feature Request: Support shell integration for terminal inside tmux |
Currently, shell integration does not fully work for a bash shell running inside tmux:
current working directory detection does not work,
because the `\e]633;P; \a` terminal escape sequence is not recognized by tmux.
Note that for a terminal inside tmux, VS Code shell integration needs to be enabled manually (the method I am currently using):
```bash
# Inside ~/.bashrc
if [[ -n "${VSCODE_IPC_HOOK_CLI}" ]]; then
. "$(~/.vscode-server/cli/servers/*/server/bin/remote-cli/code --locate-shell-integration-path bash)"
fi
```
According to <https://github.com/tmux/tmux/wiki/FAQ>
> What is the passthrough escape sequence and how do I use it?
>
> tmux takes care not to send escape sequences to a terminal that it isn't going to understand because it can't predict how it will react.
> However, it can be forced to pass an escape sequence through by wrapping it in a special form of the DCS sequence with the content prefixed by `tmux;`. Any `\033` characters in the wrapped sequence must be doubled, for example:
>
> `\033Ptmux;\033\033]1337;SetProfile=NewProfileName\007\033\\`
>
> will pass this iTerm2 special escape sequence `\033]1337;SetProfile=NewProfileName\007` through to the terminal without tmux discarding it.
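To make the wrapping concrete, here is a small self-contained sketch of mine (`vsc_wrap` and `SEQ` are illustrative names; the doubling of inner `\033` characters that tmux requires is omitted for brevity):

```bash
# Wrap a sequence in tmux's DCS passthrough only when $TMUX is set.
vsc_wrap() {
  if [ -n "$TMUX" ]; then
    printf '\033Ptmux;\033%s\033\\' "$1"
  else
    printf '%s' "$1"
  fi
}

TMUX="" plain="$(vsc_wrap 'SEQ')"
TMUX="inside" wrapped="$(vsc_wrap 'SEQ')"
printf '%s\n' "$plain" | cat -v      # SEQ
printf '%s\n' "$wrapped" | cat -v    # ^[Ptmux;^[SEQ^[\
```

Running inside tmux (`$TMUX` set) yields the DCS-wrapped form; outside tmux the sequence passes through unchanged.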
To make it work, the current <src/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh> needs to be updated
to add a tmux-specific escape sequence.
Example change:
Before:
```
__vsc_update_cwd ()
{
if [ "$__vsc_is_windows" = "1" ]; then
__vsc_cwd="$(cygpath -m "$PWD")";
else
__vsc_cwd="$PWD";
fi;
builtin printf '\e]633;P;Cwd=%s\a' "$(__vsc_escape_value "$__vsc_cwd")"
}
```
After:
```
__vsc_printf () {
if [ -n "$TMUX" ]; then
builtin local format_str
format_str="\033Ptmux;\033${1}\033\\"
shift 1
builtin printf "${format_str}" "$@"
else
builtin printf "$@"
fi
}
__vsc_update_cwd ()
{
if [ "$__vsc_is_windows" = "1" ]; then
__vsc_cwd="$(cygpath -m "$PWD")";
else
__vsc_cwd="$PWD";
fi;
__vsc_printf '\e]633;P;Cwd=%s\a' "$(__vsc_escape_value "$__vsc_cwd")"
}
```
I can submit a PR later, if such request to okay to accept. | help wanted,feature-request,terminal-shell-integration | low | Minor |
2,635,130,164 | transformers | save `training_args` in another file format | ### Feature request
Switch saving training arguments from the `bin` format to another file extension.
### Motivation
For security reasons, the `bin` format is deemed unsafe, as flagged by Hugging Face's new security system.

### Your contribution
I'll try submitting a pr, but if anyone wishes to submit their own feel free to do so | Feature request | low | Minor |
2,635,146,312 | kubernetes | pv is bound to pvc, but pvc is pending | ### What happened?
I have a PVC (pvc1), `tmp-pvc-1d9c5916-c994-42e5-8060-99d563b43e3a`, bound to PV `pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996`. I changed the claimRef of PV `pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996` to point at another PVC (pvc2), `f55bp1pq6y`; the PV's status is Bound and it is bound to pvc2. But pvc2 (`f55bp1pq6y`) is still Pending, and pvc1 (`tmp-pvc-1d9c5916-c994-42e5-8060-99d563b43e3a`) is Lost.
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
cdi.kubevirt.io/clonePhase: Pending
cdi.kubevirt.io/cloneType: csi-clone
cdi.kubevirt.io/createdForDataVolume: 417bdae0-3eb3-4be1-a438-fcf972d7a0ff
cdi.kubevirt.io/dataSourceNamespace: default
cdi.kubevirt.io/storage.clone.token: eyJhbGciOiJQUzI1NiJ9.eyJleHAiOjE3MzA4MDQwOTksImlhdCI6MTczMDgwMzc5OSwiaXNzIjoiY2RpLWFwaXNlcnZlciIsIm5hbWUiOiJpbWctZjlqc242ZXMtY2VwaC1ibG9jayIsIm5hbWVzcGFjZSI6ImRlZmF1bHQiLCJuYmYiOjE3MzA4MDM3OTksIm9wZXJhdGlvbiI6IkNsb25lIiwicGFyYW1zIjp7InRhcmdldE5hbWUiOiJ2NzFnYmF1bXkzIiwidGFyZ2V0TmFtZXNwYWNlIjoid3l3LXRlc3QtZHYifSwicmVzb3VyY2UiOnsiZ3JvdXAiOiIiLCJyZXNvdXJjZSI6InBlcnNpc3RlbnR2b2x1bWVjbGFpbXMiLCJ2ZXJzaW9uIjoidjEifX0.SUaSFke9zXFcNDVLD1eM_3RrdbYl1qkmXROeshIbaP_ltrklDouOalJk7sROjGjTKDsUhdKqPfvjnNauM1JWxR1fGjzTuPi-GJVK8W9l8U9N6BTyze9swf4lsZMp0pY1GhFuqbPubfSvvUxvmQkg4Tc2GPGqCUYcm22w3N2zl5xBzGRi4Ugsr4vWdttu-ajTSlAb966LOL598AI5XUIDdOdL6wiu9QtHfR39amvRgxWpl_powtQlC1w8Jj0xta1I3BcGypi_KHhK6L36W7nJk6ko_-E0nxpNSy1bHzK_N8no0a3jYQJ1Sl0320aeeLvFwMB6BwIcCFttkLHjZ2opWA
cdi.kubevirt.io/storage.condition.running: "false"
cdi.kubevirt.io/storage.condition.running.message: Clone Pending
cdi.kubevirt.io/storage.condition.running.reason: Pending
cdi.kubevirt.io/storage.contentType: kubevirt
cdi.kubevirt.io/storage.pod.restarts: "0"
cdi.kubevirt.io/storage.populator.kind: VolumeCloneSource
cdi.kubevirt.io/storage.preallocation.requested: "false"
cdi.kubevirt.io/storage.usePopulator: "true"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cdi.kubevirt.io/v1beta1","kind":"DataVolume","metadata":{"annotations":{},"name":"v71gbaumy3","namespace":"wyw-test-dv"},"spec":{"pvc":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"50Gi"}},"storageClassName":"ceph-block","volumeMode":"Block"},"source":{"pvc":{"name":"img-f9jsn6es-ceph-block","namespace":"default"}}}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
creationTimestamp: "2024-11-05T10:50:07Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: containerized-data-importer
app.kubernetes.io/component: storage
app.kubernetes.io/managed-by: cdi-controller
cdi.kubevirt.io/OwnedByUID: e80fc9be-821f-452a-9b03-bb5bbadb5094
name: tmp-pvc-1d9c5916-c994-42e5-8060-99d563b43e3a
namespace: default
resourceVersion: "10612580"
uid: 11ee3bfa-a558-4464-9466-dc6d6fb6047c
spec:
accessModes:
- ReadWriteOnce
dataSource:
apiGroup: null
kind: PersistentVolumeClaim
name: img-f9jsn6es-ceph-block
dataSourceRef:
apiGroup: null
kind: PersistentVolumeClaim
name: img-f9jsn6es-ceph-block
resources:
requests:
storage: 50Gi
storageClassName: ceph-block
volumeMode: Block
volumeName: pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 50Gi
phase: Bound
```
after updating pv claimRef to pvc2:`f55bp1pq6y`:
```
# kubectl get pvc | grep -i lost
tmp-pvc-1d9c5916-c994-42e5-8060-99d563b43e3a Lost pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996 0 ceph-block 153m
[root@rongqi-node01 batch-create]# kubectl get pv pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996 40Gi RWO Retain Bound wyw-test-dv/f55bp1pq6y ceph-block 158m
```
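For context — this is my understanding of the binding logic, not something stated above — the PV controller only treats the pair as bound when `spec.claimRef` matches the target PVC's name, namespace, and UID; a hand-edited claimRef that keeps the old PVC's UID leaves the new PVC Pending while the PV still reports Bound. A sketch of the fields that have to line up (names taken from this report, the UID comment is illustrative):

```yaml
# PV spec.claimRef after the edit; the uid must be the *new* PVC's uid,
# or be removed entirely so the controller can fill it in on bind.
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: f55bp1pq6y
  namespace: wyw-test-dv
  # uid: <uid of f55bp1pq6y>   # a stale uid from the old PVC prevents binding
```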
### What did you expect to happen?
after updating pv claimRef to pvc2:`f55bp1pq6y`, pvc2:`f55bp1pq6y` status is bound.
### How can we reproduce it (as minimally and precisely as possible)?
I encountered this problem when using the cdi project:
https://github.com/kubevirt/containerized-data-importer/issues/3470
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.6", GitCommit:"741c8db18a52787d734cbe4795f0b4ad860906d6", GitTreeState:"clean", BuildDate:"2023-09-13T09:21:34Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.6", GitCommit:"741c8db18a52787d734cbe4795f0b4ad860906d6", GitTreeState:"clean", BuildDate:"2023-09-13T09:14:09Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# centos 8
$ uname -a
5.10.0-136.12.0.86.4.hl202.x86_64
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
# kubectl exec -it -n rook-ceph rook-ceph-tools-5877f9f669-n8nfb -- ceph --version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
</details>
| kind/bug,sig/storage,needs-triage | low | Minor |
2,635,192,263 | rust | Regression in performance (poor codegen) between 1.81 and 1.82 | I've observed a fairly nasty performance regression when switching to 1.82 related to compression performance that appears to boil down to some very weird/suboptimal machine code being generated.
Delving into things, it looks like what was previously reasonable generated code is now turning into... well, something way less reasonable, and this results in executing a vastly higher number of instructions for the same task, in turn slowing everything down. The MVCE (mentioned below) demonstrates this behavior across at least two instruction sets (x86-64 and ARM64) and a number of different microarchitectures (AMD Zen2, Zen4, and Apple M2) and reliably shows a performance regression.
Some other folks who were helping me debug this will also be posting some more information, but the biggest initial suspect looks to be the upgrade to LLVM 19 in 1.82.
### Code
The MVCE is here: https://github.com/tobz/miniz-oxide-slowdown-repro.
It's a simple program that generates a deterministic input corpus, compresses it multiple times (using `flate2`/`miniz_oxide`), and then exits. Efforts were made to try and ensure things are deterministic and mostly representative of the observed change between Rust 1.81 and 1.82.
On my environment, this all boils down to each compress operation taking ~750ms when compiled on 1.81.0 or earlier, jumping to ~1.25s per operation on 1.82.0 and later, representing a ~60% slowdown.
### Version it worked on
This "worked" (ran well) on 1.81.0, 1.80.0, and 1.79.0 (as far back as I checked).
### Version with regression
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
``` | A-LLVM,I-slow,P-medium,T-compiler,C-bug,regression-untriaged | low | Critical |
2,635,195,653 | tensorflow | Current LiteRT Android dependencies in documentation look broken | I think after the recent TensorflowLite rename to LiteRT some pages in documentation where renamed incorrectly and are currently very confusing.
For a particular example, see this: https://ai.google.dev/edge/litert/android/gpu
- The docs say to add `com.google.ai.edge.litert:litert-gpu` and `com.google.ai.edge.litert:litert-gpu-api` with versions `2.X.Y`, which do not exist (current latest version is `1.0.1`), to a toml version catalog.
- In the next paragraph it also switches to using Gradle files instead of TOML to declare other dependencies, which I found somewhat confusing.
- Later, in [standalone setup](https://ai.google.dev/edge/litert/android/gpu#add_project_dependencies_2), it says to include the `com.google.ai.edge.litert:litert-gpu-delegate-plugin` dependency, which does not exist, and it also follows with a code snippet supposedly showing how to include it, but the snippet shows other dependencies.
- [Other places](https://ai.google.dev/edge/litert/android/metadata/codegen#acceleration) too include dependencies with incorrect versions, like `com.google.ai.edge.litert:litert-gpu:2.3.0`
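For reference, a hypothetical Gradle (Kotlin DSL) dependency block using the version that is actually published at the time of writing (`1.0.1`, per the observation above — treat these coordinates as assumptions to verify against Maven Central, not as documented values):

```kotlin
dependencies {
    implementation("com.google.ai.edge.litert:litert:1.0.1")
    implementation("com.google.ai.edge.litert:litert-gpu:1.0.1")
    implementation("com.google.ai.edge.litert:litert-gpu-api:1.0.1")
}
```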
I just happened to start working with LiteRT for Android and found it very difficult to distinguish which parts of the documentation are outdated and which aren't. | stat:awaiting tensorflower,type:support,comp:lite,TFLiteConverter,TFLiteGpuDelegate | medium | Critical |
2,635,204,365 | transformers | redirect logging output to `stdout` instead of `stderr` | ### Feature request
Redirect logging output to `stdout` instead of `stderr`. Specifically, add argument `stream=sys.stdout` at: https://github.com/huggingface/transformers/blob/893ad04fad145904ccb71e4e858e4134c32226b6/src/transformers/utils/logging.py#L88.
### Motivation
It is a common practice to redirect logging output to `stdout` in deep learning frameworks.
For example:
- Detectron2: https://github.com/facebookresearch/detectron2/blob/8d85329aed8506ea3672e3e208971345973ea761/detectron2/utils/logger.py#L84
- fairseq: https://github.com/facebookresearch/fairseq/blob/ecbf110e1eb43861214b05fa001eff584954f65a/fairseq_cli/train.py#L22
- DeepSpeed: https://github.com/microsoft/DeepSpeed/blob/2b41d6212c160a3645691b77b210ba7dd957c23f/deepspeed/utils/logging.py#L69.
Here is my analysis. Traditionally, `stdout` is used for output of the program and `stderr` is used for warning/debugging. That's why the default stream of `logging` is `stderr`. However, the output of deep learning frameworks consists of losses, eval results and checkpoints. It's a common practice to use `logger.info()` to display this information. Therefore, it would be more appropriate to redirect these outputs to `stdout` since they are part of the program's normal output.
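To make the request concrete, a minimal self-contained sketch (the logger and handler names here are illustrative, not transformers' actual internals) of the proposed `stream=sys.stdout` default:

```python
import logging
import sys

logger = logging.getLogger("stdout_demo")
# The requested one-line change: stream to stdout instead of the default stderr.
handler = logging.StreamHandler(stream=sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s - %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("epoch 1 | loss 0.42")  # training progress now lands on stdout
```

With this, `python train.py > train.log` would capture the training progress messages, which the current stderr default does not.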
### Your contribution
I can submit a PR if this request is confirmed. | Feature request | low | Critical |
2,635,235,632 | vscode | Emmet wrap with abbreviation reindents content when used with `pre` |
Type: <b>Bug</b>
1. Create a HTML file.
2. Create a deeply nested context (e.g. `div>div>div>code`)
3. Write a piece of code inside `<code>`, with reset indentation.
4. Select the piece of code and use the "Emmet: wrap with Abbreviation" command.
5. Enter `pre` as your abbreviation
Exprected behaviour: content inside `pre` maintains reset indentation.
Actual behaviour: content gets reindented.
VS Code version: Code 1.82.1 (6509174151d557a81c9d0b5f8a5a1e9274db5585, 2023-09-08T08:41:36.199Z)
OS version: Linux x64 5.15.0-60-generic
Modes:
Remote OS version: Linux x64 6.10.11+bpo-amd64
<details><summary>Extensions (8)</summary>
Extension|Author (truncated)|Version
---|---|---
remote-ssh|ms-|0.106.5
remote-ssh-edit|ms-|0.87.0
remote-explorer|ms-|0.4.1
code-runner|for|0.12.2
haskell|has|2.4.4
elastic-tabstops-mono|isr|1.1.0
language-haskell|jus|3.6.0
vscode-language-pack-pl|MS-|1.95.2024103009
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
nativeloc2:31134642
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
```
</details>
<!-- generated by issue reporter --> | bug,emmet,confirmed | low | Critical |
2,635,286,839 | PowerToys | Unable to locate 'Paste as file' output .txt file or folder | ### Microsoft PowerToys version
0.86.00
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
Typed text into Notepad
Copied text using Ctrl-C
Activated Advanced Paste with Shift-Win-V
clicked 'Paste as txt file'
Control returned to user
Searched Explorer from the root of the file system for 'Date modified = today' and string = *.txt
unable to find any file output
Repeated search from cmd batch window with command 'dir /s /w *.txt' >>searchfortxtfile.log
unable to find any entry in the output log file
### โ๏ธ Expected Behavior
to find the file/folder
### โ Actual Behavior
system closed the Advanced Paste window
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Minor |
2,635,301,362 | PowerToys | Revert Change to Navigation Tree | ### Description of the new feature / enhancement
PR #35559 added a collapsible navigation tree. This makes it harder to find features since they are hidden behind collapsed categories by default. The user has to guess which category a feature will be in and click around until they find it. Please either revert this or make it a feature that can be turned off. Consider adding a search bar instead for quickly finding features.
### Scenario when this would be used?
When changing settings
### Supporting information
_No response_ | Needs-Triage | low | Major |
2,635,326,396 | excalidraw | [feat] Remove background of image uploaded | ### Problem Statement:
Wouldn't it be lovely to add a feature that removes the background of an image? Most of the time during class mentoring, I feel it would be good to have an option that uploads an image and removes its background at the same time.
### How to do so? (Just an Idea)
This is just an idea; if you think it will work, go ahead and try.
- Well, we can use different approaches. For example, we can apply a function that removes the background of the image and, at the same time, prepares it for upload.
- Use remove.bg's API; this should perform well, and then we can add the image to the canvas. | enhancement | low | Minor |
2,635,340,235 | opencv | Linking opencv ios to my project error | ### System Information
OpenCV: 4.10.0
OS: macOS Version 14.6.1
Complier: Xcode 16.1
### Detailed description
My image_processor.cpp includes `<opencv2/opencv.hpp>`, and I want to build my processor library as a shared library for an iOS project.
I have run:
```shell
python3 platforms/ios/build_framework.py ios --iphoneos_archs arm64 --iphonesimulator_archs x86_64 > build_log.txt
```
stdout:
[stdout.txt](https://github.com/user-attachments/files/17632092/stdout.txt)
build_log:
[build_log.txt](https://github.com/user-attachments/files/17632097/build_log.txt)
My project's CMakeLists.txt looks like this:
```cmake
set(OpenCV_DIR "${CMAKE_SOURCE_DIR}/opencv/ios/build/build-arm64-iphoneos")
set(OpenCV_FOUND TRUE)
find_package(OpenCV REQUIRED)
find_package(Iconv)
include_directories(${OpenCV_INCLUDE_DIRS})
message(STATUS "OpenCV libraries: ${OpenCV_LIBS}")

target_include_directories(preprocessor
  PUBLIC
    ${CMAKE_SOURCE_DIR}
    ${OpenCV_INCLUDE_DIRS}
    # ${CMAKE_SOURCE_DIR}/third_party/opencv/ios
)

target_link_libraries(preprocessor
  PRIVATE
    ${OpenCV_LIBS}
)
```
Then I tried to cross-build for the iOS target with:
```shell
cmake .. -G Xcode \
  -DCMAKE_TOOLCHAIN_FILE=../toolchain/ios.toolchain.cmake \
  -DPLATFORM=OS64 \
  -DBUILD_SHARED_LIBS=1
cmake --build . --config Release
```
It reported this error:
[cmake_build.txt](https://github.com/user-attachments/files/17632559/cmake_build.txt)
### Steps to reproduce
as my build instructions.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | question (invalid tracker) | low | Critical |
2,635,375,713 | ollama | Support partial loads of LLaMA 3.2 Vision 11b on 6G GPUs | ### What is the issue?
**Description:**
I encountered an issue where the **LLaMA 3.2 Vision 11b** model loads entirely in CPU RAM, without utilizing the GPU memory as expected. The issue occurs on my Windows-based laptop with 6GB VRAM, where models that exceed GPU memory capacity should load the rest into system RAM while still leveraging the GPU.
**Steps to Reproduce:**
1. Run **LLaMA 3.2 Vision 11b** with `ollama` on a system with limited VRAM (6 GB in my case).
2. Check the memory allocation using the `ollama ps` command.
**Expected Behavior:**
When running models larger than available VRAM, the model should partially load into VRAM and utilize system RAM for the remainder. This behavior works as intended for other models (e.g., **Llama 3.1**), which utilize the GPU and offload excess data to RAM.
**Actual Behavior:**
When running **Llama 3.2 Vision**, the entire model loads into the CPU RAM, as shown in the output of the `ollama ps` command. Additionally, the Task Manager indicates no significant GPU or VRAM usage, confirming that the model is not utilizing the GPU at all.
**Laptop Specifications:**
- **CPU**: AMD Ryzen 9 7940HS
- **RAM**: 16 GB
- **GPU**: NVIDIA RTX 4050 Mobile 6 GB VRAM
- **Ollama Version**: Pre-release 0.4.0-rc8
**Supporting Evidence:**
1. Screenshot of `ollama ps` showing **LLaMA 3.1** partially loading into VRAM (expected behavior):

2. Screenshot of `ollama ps` showing **LLaMA 3.2 Vision 11b** loaded fully into CPU RAM:

**Further Testing**:
On my **desktop** with higher VRAM (24GB):
**Specs**:
- **Processor**: Ryzen 7 7800X3D
- **Memory**: 64 GB RAM
- **GPU**: NVIDIA RTX 4090 24GB VRAM
- **Ollama Version**: Pre-release 0.4.0-rc8
Running the **LLaMA 3.2 Vision 11b** model on the desktop:
- The model loaded entirely in the GPU VRAM as expected.
- Screenshot of `ollama ps` for this case:

Running the **LLaMA 3.2 Vision 90b** model on the desktop (which exceeds 24GB VRAM):
- The model loaded partially into GPU and partially into CPU RAM, which is correct.
- Screenshot of `ollama ps` for this case:

**Note**: Both machines are running Windows, and GPU drivers are up to date.
**Conclusion:**
The behavior seems specific to running the **LLaMA 3.2 Vision 11b** model on systems with VRAM insufficient to load the entire model, where the expected split between VRAM and RAM doesn't occur.
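To make the expected VRAM/RAM split concrete, here is a rough back-of-the-envelope sketch (the uniform-layer-size assumption and the 1 GB reserve are illustrative guesses, not how Ollama actually plans memory):

```python
def split_layers(model_size_gb, n_layers, vram_gb, reserve_gb=1.0):
    """Estimate how many equally sized layers fit in VRAM; the rest go to system RAM."""
    per_layer = model_size_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0.0)
    gpu_layers = min(n_layers, int(usable // per_layer))
    return gpu_layers, n_layers - gpu_layers

# An ~11 GB model with 40 layers on a 6 GB card should still get a partial GPU load:
gpu, cpu = split_layers(11.0, 40, 6.0)
```

With any reasonable numbers for the 11b model on a 6 GB card, `gpu` comes out well above zero, which is why a full CPU fallback looks like a bug rather than a capacity limit.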
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.0-rc8 | feature request | medium | Major |
2,635,383,436 | kubernetes | ๐ Drop x/exp from k/k dependencies | - We started the process in `cilium/ebpf` here : https://github.com/cilium/ebpf/issues/1095
- A PR has landed here : https://github.com/cilium/ebpf/pull/1557
- Wait for a [new release](https://github.com/cilium/ebpf/releases) of `cilium/ebpf`
- Update `opencontainers/runc` to new version of cilium/ebpf, similar to this [PR](https://github.com/opencontainers/runc/pull/4397)
- Wait for a new release/tag of `opencontainers/runc`
- Update k/k | sig/architecture,area/code-organization,needs-triage | low | Minor |
2,635,391,935 | kubernetes | ๐ Update containerd api/errdefs/ttrpc dependencies in k/k | Aborted attempt is in https://github.com/kubernetes/kubernetes/pull/128557 When 1.33 opens up,
- Update `google/cadvisor` to newer versions of these libraries [here](https://github.com/google/cadvisor/blob/master/go.mod#L11-L13).
- Ask for a new tag/release of `google/cadvisor` and then update k/k to the newer library version.
/sig node
/area code-organization | sig/node,area/code-organization,needs-triage | low | Minor |
2,635,437,520 | excalidraw | Elbow arrow endpoint drag with Ctrl held does not snap but tries to select | 
| good first issue | low | Major |
2,635,471,331 | ollama | Add support for function call (response back) (message.role=tool) | # Add support for function call (response back)
1. Currently there's no support for sending back the function call result to the model using the `role=tool` messages.
2. Using the native API (not OpenAI), function tool calls don't have an associated identifier (`tool_call_id`); this is present in the OpenAI-compatible API and should be available on both APIs.
> [!IMPORTANT]
> This ID is very important when providing the result back to the model, so it can reason about a chat history where the same function was invoked multiple times with different results.

| feature request,api | low | Minor |
2,635,500,606 | vscode | Normal quick fix suggestions are not provided on new file, only AI ones | Steps to Reproduce:
1. I'm doing this in `vscode-copilot`
2. Create a new file
3. See it complains about the header
4. `CMD + .` only AI suggestions
5. Press space
6. `CMD + .` now you get non AI fixes


| bug,editor-code-actions | low | Minor |
2,635,501,730 | godot | Incorrect fallback to Direct3D 12 on Vulkan-unsupported device | ### Tested versions
- Reproducible in v4.3.stable.official.77dcf97d8 and v4.4 dev 3
### System information
Windows, Intel 4600 graphics
### Issue description
When opening an existing project that is set to use the Forward+ renderer, these errors occur on this device:
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
OpenGL API 3.3.0 - Build 20.19.15.5126 - Compatibility - Using Device: Intel - Intel(R) HD Graphics 4600
Project is missing: D:/CURSOS/UTP/ENDLESS/GAME MAKING/path-to-the-throne/godot/project.godot
Editing project: D:/CURSOS/UTP/ENDLESS/Game Maiking/path-to-the-throne/godot
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
WARNING: Your video card drivers seem not to support Vulkan, switching to Direct3D 12.
at: DisplayServerWindows (platform/windows/display_server_windows.cpp:5985)
D3D12 11_0 - Forward+ - Using Device #0: Intel - Intel(R) HD Graphics 4600
WARNING: PSO caching is not implemented yet in the Direct3D 12 driver.
at: pipeline_cache_create (drivers/d3d12/rendering_device_driver_d3d12.cpp:4906)
ERROR: Shader translation at stage Compute failed.
   at: (drivers/d3d12/rendering_device_driver_d3d12.cpp:3289)
```
After this, several hundred more gfx-related errors appear in the logs, with the Godot editor window fully black. After a few seconds the window goes away with this crash:
```
================================================================
CrashHandlerException: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] error(-1): no debug info in PE/COFF executable
[2] error(-1): no debug info in PE/COFF executable
[3] error(-1): no debug info in PE/COFF executable
[4] error(-1): no debug info in PE/COFF executable
[5] error(-1): no debug info in PE/COFF executable
[6] error(-1): no debug info in PE/COFF executable
[7] error(-1): no debug info in PE/COFF executable
[8] error(-1): no debug info in PE/COFF executable
[9] error(-1): no debug info in PE/COFF executable
[10] error(-1): no debug info in PE/COFF executable
[11] error(-1): no debug info in PE/COFF executable
[12] error(-1): no debug info in PE/COFF executable
[13] error(-1): no debug info in PE/COFF executable
[14] error(-1): no debug info in PE/COFF executable
[15] error(-1): no debug info in PE/COFF executable
[16] error(-1): no debug info in PE/COFF executable
[17] error(-1): no debug info in PE/COFF executable
[18] error(-1): no debug info in PE/COFF executable
[19] error(-1): no debug info in PE/COFF executable
[20] error(-1): no debug info in PE/COFF executable
-- END OF BACKTRACE --
================================================================
```
Intel 4600 does not support Vulkan nor Direct3D 12 on Windows.
The same thing happens in Godot 4.4 dev 3. (Sorry I did not capture logs from 4.4)
**However, Godot 4.4 will not allow me to create a new project with Forward+ because it detects the lack of gfx support**. That's a great addition!
Can something similar be applied to opening of existing projects? In this case, I would expect Godot to either give me a friendly error message saying that it can't open the project because of incompatible renderer, or for it to automatically fall back to Compatibility renderer (perhaps with some notification of this).
I see some type of fallback has been implemented via https://github.com/godotengine/godot-proposals/issues/8006 but it seems not to be working in this case.
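The requested behavior amounts to a probe-and-fall-back chain. A sketch of that logic (driver and method names mirror Godot's settings strings, but this is pseudologic, not the actual engine API):

```python
def pick_renderer(requested, supported):
    """Walk a fallback chain from the requested rendering method down to Compatibility.

    `supported` is a callable that probes whether a driver actually works on
    this machine (in the report above, both Vulkan and D3D12 should fail).
    """
    chain = {
        "forward_plus": ["vulkan", "d3d12"],
        "mobile": ["vulkan", "d3d12"],
        "gl_compatibility": ["opengl3"],
    }
    for method in (requested, "gl_compatibility"):
        for driver in chain[method]:
            if supported(driver):
                return method, driver
    return None  # nothing works: show a friendly error instead of crashing

# On an Intel HD 4600 (no Vulkan/D3D12), this should land on Compatibility:
# pick_renderer("forward_plus", lambda d: d == "opengl3")
```

On a device where only OpenGL works, the chain lands on Compatibility instead of crashing inside the D3D12 driver.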
### Steps to reproduce
1. Open Godot project manager
2. Open any existing project that uses Forward+ renderer | bug,platform:windows,topic:rendering | low | Critical |
2,635,546,368 | godot | Window decorations don't load until interacted with sometimes. | ### Tested versions
first seen in Godot 4.3.1: Godot 4.3.1.rc.custom_build[6699ae789]
### System information
Godot v4.3.1.rc (6699ae789) - Fedora Linux 41 (KDE Plasma) - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 XT (RADV NAVI31) - AMD Ryzen 9 7950X 16-Core Processor (32 Threads)
### Issue description
When compiling the editor with `x11="no"`, upon loading a project the window decorations don't load until directly interacted with, and even then it appears random whether they'll load or not.
### Steps to reproduce
Compile a custom editor with X11 removed with the specified commit, load a project and try to look for the window control buttons (like minimize and whatnot).
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:gui | low | Minor |
2,635,590,671 | pytorch | Bug with torch.compile + fwAD | ### 🐛 Describe the bug
Hello,
I tried using the beta forward-mode automatic differentiation, but I ran into an issue when trying to compile my forward pass. I wonder if it was an error on my part or if it was an actual bug in the PyTorch code. Here is a minimal example that generates the error.
```
import torch
import torch.nn as nn
import torch.autograd.forward_ad as fwAD

device = "cuda:0"

@torch.compile()
def step(model, x, device):
    x = x.to(device)
    tangent = torch.zeros_like(x, device=device)
    with fwAD.dual_level():
        dual_input = fwAD.make_dual(x, tangent)
        dual_output = model(dual_input)
    return None

lamb = 0.001  # Regularization parameter
x = torch.randn(2, 3)
model = nn.Sequential(
    nn.Linear(3, 3),
    nn.BatchNorm1d(3),
    nn.Linear(3, 1),
).to(device)
model.train()

step(model, x, device)
```
I get an error like this:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function batch_norm at 0x788edb8ac9a0>(*(FakeTensor(..., device='cuda:0', size=(2, 3), grad_fn=<AddmmBackward0>,
tangent=FakeTensor(..., device='cuda:0', size=(2, 3), grad_fn=<AddBackward0>)), FakeTensor(..., device='cuda:0', size=(3,)), FakeTensor(..., device='cuda:0', size=(3,)), FakeTensor(..., device='cuda:0', size=(3,), requires_grad=True), FakeTensor(..., device='cuda:0', size=(3,), requires_grad=True), True, 0.1, 1e-05), **{}):
InferenceMode::is_enabled() && self.is_inference() INTERNAL ASSERT FAILED at "../aten/src/ATen/native/VariableMethodStubs.cpp":66, please report a bug to PyTorch. Expected this method to only be reached in inference mode and when all the inputs are inference tensors. You should NOT call this method directly as native::_fw_primal. Please use the dispatcher, i.e., at::_fw_primal. Please file an issue if you come across this error otherwise.
from user code:
File "/home/enzo/Documents/git/LieEquiv/minimal_example.py", line 13, in step
dual_output = model(dual_input)
File "/home/enzo/mambaforge/envs/up_to_date/lib/python3.12/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
File "/home/enzo/mambaforge/envs/up_to_date/lib/python3.12/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward
return F.batch_norm(
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.5 | packaged by conda-forge | (main, Aug 8 2024, 18:36:51) [GCC 12.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture : x86_64
Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Boutisme : Little Endian
Processeur(s) : 32
Liste de processeur(s) en ligne : 0-31
Identifiant constructeur : GenuineIntel
Nom de modรจle : 13th Gen Intel(R) Core(TM) i9-13900K
Famille de processeur : 6
Modรจle : 183
Thread(s) par cœur : 2
Cœur(s) par socket : 24
Socket(s) : 1
Révision : 1
Vitesse maximale du processeur en MHz : 5800,0000
Vitesse minimale du processeur en MHz : 800,0000
BogoMIPS : 5990.40
Drapaux : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualisation : VT-x
Cache L1d : 896 KiB (24 instances)
Cache L1i : 1,3 MiB (24 instances)
Cache L2 : 32 MiB (12 instances)
Cache L3 : 36 MiB (1 instance)
Nœud(s) NUMA : 1
Nœud NUMA 0 de processeur(s) : 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.4.1
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.20.1
[pip3] triton==2.0.0
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.0 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.68 0 nvidia
[conda] cuda-runtime 12.4.0 0 nvidia
[conda] cudnn 9.3.0.75 h93bb076_0 conda-forge
[conda] libcublas 12.4.2.65 0 nvidia
[conda] libcufft 11.2.0.44 0 nvidia
[conda] libcurand 10.3.7.68 0 nvidia
[conda] libcusolver 11.6.0.99 0 nvidia
[conda] libcusparse 12.3.0.142 0 nvidia
[conda] libmagma 2.8.0 h0af6554_0 conda-forge
[conda] libmagma_sparse 2.8.0 h0af6554_0 conda-forge
[conda] libnvjitlink 12.4.99 0 nvidia
[conda] libtorch 2.4.1 cuda120_h1d34654_302 conda-forge
[conda] mkl 2023.2.0 h84fe81f_50496 conda-forge
[conda] mkl-devel 2023.2.0 ha770c72_50496 conda-forge
[conda] mkl-include 2023.2.0 h84fe81f_50496 conda-forge
[conda] nccl 2.23.4.1 h52f6c39_0 conda-forge
[conda] numpy 2.0.2 py312h58c1407_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-lightning 2.4.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.4.1 py312_cu124 pytorch
[conda] torchmetrics 1.4.2 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu | triaged,module: forward ad,oncall: pt2 | low | Critical |
2,635,712,253 | deno | NODE_MODULES not created in sub projects | Version: Deno 2.0.4
When creating a monorepo (with `"nodeModulesDir": "auto"` set in the `deno.json`), the `deno install` command doesn't create `node_modules` in the sub-projects. There is only a single `node_modules` at the root. After a few discussions on Discord, this seems to be a bug.
I've created a basic mono repo [1] that follows the layout described in [2].
[1] https://github.com/irbull/deno_mono_repo
[2] https://docs.deno.com/runtime/fundamentals/workspaces/
Steps to reproduce:
1. Clone this repo (https://github.com/irbull/deno_mono_repo)
2. type `deno install`
3. Notice there is only a `node_modules` directory at the root (not in any of the sub-projects)
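A quick way to verify the report is to list workspace members that lack their own `node_modules`; a hedged Python sketch (member names are placeholders):

```python
import os

def members_missing_node_modules(root, members):
    """Return the workspace member dirs under `root` without a local node_modules/."""
    return [m for m in members
            if not os.path.isdir(os.path.join(root, m, "node_modules"))]
```

Per the report, every member comes back as missing even though the root has a populated `node_modules`.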
| needs investigation,install | low | Critical |
2,635,718,931 | PowerToys | Expose Workspaces to PowerToys Run | ### Description of the new feature / enhancement
PowerToys Run displays Workspaces with names matching the search query. The default selection behavior would be to launch the Workspace.
### Scenario when this would be used?
Launching Workspaces would be faster versus launching the Workspaces Editor and clicking "Launch" each time. This is especially useful with multiple desktops and Workspaces for each desktop.
### Supporting information
_No response_ | Needs-Triage,Product-Workspaces | low | Minor |
2,635,743,888 | PowerToys | CTRL(left) key hijack | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I have set and used some Keyboard Remap Shortcuts long time ago. I used them constantly before with no issues.
After the last update, my left CTRL key constantly stops working! This happens after some use (or after standby). The left CTRL is no longer functional, and this hijack affects both my laptop's keyboard and an external keyboard.
The hijack is released, if the Keyboard Manager is disabled and re-enabled again. The CTRL key then starts to work, but soon is hijacked again.
The right CTRL key works all the time.
BR, David
[PowerToysReport_2024-11-05-15-37-06.zip](https://github.com/user-attachments/files/17634587/PowerToysReport_2024-11-05-15-37-06.zip)
### โ๏ธ Expected Behavior
Normal use of left CTRL. With or without Keyboard Manager enabled.
### โ Actual Behavior
Left CTRL stops working, and starts again, if Keyboard manager is disabled/re-enabled.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Major |
2,635,768,906 | go | x/tools/gopls: pull diagnostic support, continued | Following up on #53275, there are a few more improvements I'd like to make to pull diagnostics so that they are comparable in performance to push diagnostics, at which point we can enable them by default:
- Rewrite the bottom-up graph traversal of analysis nodes: in the common case, all dependency information will be a cache hit (including memoized keys and encoded summaries in the file cache). Therefore, the bottom-up traversal of the full graph is a significant overhead when repeatedly querying diagnostics.
- Rewrite the bottom-up graph traversal to build package handles, for similar reasons.
- Refactor the fact decoder to operate on shallow fact encoding, so that facts in transitive dependencies can be retrieved (and decoded) on demand. (This could have a huge impact on performance).
- Add support for go.mod and go.work diagnostics (requires refactoring the way we diagnose the workspace).
CC @adonovan | gopls,Tools | low | Major |
2,635,771,839 | PowerToys | [PowerToys Run] - VSCode Plugin - Add Extra Buttons | ### Description of the new feature / enhancement
like other items

### Scenario when this would be used?
to open vscode projects
**Buttons for**
- Open in File-Explorer
- Open in Terminal
### Supporting information
_No response_ | Idea-Enhancement,Resolution-Fix Committed,Run-Plugin | low | Minor |
2,635,772,109 | three.js | TSL: Avoid naming collision when using `label()` or `toVar()`. | ### Description
When using `label()` or `toVar()` to give variables a name, collisions can occur when the name is already reserved for internal entities like uniforms or varyings.
If you now redefine a variable on app level, a shader error occurs. In WGSL for example:
> Error while parsing WGSL: :40:2 error: redefinition of 'cameraNear'
### Solution
It would be nice if the node material could automatically detect such name collisions and modify the custom names e.g. with a suffix.
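The suffixing idea can be sketched language-agnostically (Python here for brevity; `cameraNear` is the colliding name from the error above):

```python
def unique_name(requested, reserved):
    """Return `requested` if free, else append _1, _2, ... and record the result."""
    name = requested
    counter = 1
    while name in reserved:
        name = f"{requested}_{counter}"
        counter += 1
    reserved.add(name)
    return name
```

The node material would seed `reserved` with its internal uniform/varying names before honoring `label()`/`toVar()` requests.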
### Alternatives
Leave it as it is and ask developers to pick unique names.
### Additional context
#29810 | TSL | low | Critical |
2,635,777,549 | rust | LUB coercions works for a function but fails for a closure | I tried the following code ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=97bdc0c0bce43681addb09a41b9fa6ce)), following the [examples](https://doc.rust-lang.org/reference/type-coercions.html#examples) for LUB coercion in the Reference, chapter 10.7.
The closure does not compile:
```rust
let clo = || {
if true {
let x: *mut i32 = &mut 1;
x
} else if false {
let x: &i32 = &1;
x
} else {
let x: &mut i32 = &mut 1;
x
}
};
let _: *const i32 = clo();
```
with the error:
```
error[E0308]: `if` and `else` have incompatible types
```
A similar function does compile:
```rust
fn f() -> *const i32 {
if true {
let x: *mut i32 = &mut 1;
x
} else if false {
let x: &i32 = &1;
x
} else {
let x: &mut i32 = &mut 1;
x
}
}
```
According to the Reference, I expected that either both the closure and the function compiles or neither of them. | A-diagnostics,A-closures,T-compiler,C-bug,A-coercions,D-confusing | low | Critical |
2,635,827,387 | rust | compiletest seems to be generating its own macro backtrace diagnostics? | What on earth is going on here
https://github.com/rust-lang/rust/blob/096277e989d6de11c3077472fc05778e261e7b8e/src/tools/compiletest/src/json.rs#L305-L321
@BoxyUwU found out that compiletest seemingly emits diagnostics that are not in rustc's diagnostics. Like
```
error[E0425]: cannot find value `ident` in this scope
--> $DIR/const_arg_trivial_macro_expansion-1.rs:11:9
|
LL | ident
| ^^^^^ not found in this scope
...
LL | fn array_0() -> [(); unbraced_unbraced_ident!()] { loop {} }
| -------------------------- in this macro invocation
|
= note: this error originates in the macro `unbraced_ident` which comes from the expansion of the macro `unbraced_unbraced_ident` (in Nightly builds, run with -Z macro-backtrace for more info)
```

like, compiletest is generating its own macro backtrace diagnostics or something funny like that? | T-bootstrap,C-bug,A-compiletest,E-needs-investigation | low | Critical |
2,635,839,084 | pytorch | [RFC] PyTorch - PyPi PEP-759 proposal (wheel-next) | ### ๐ The feature, motivation and pitch
I would like to call for feedback on the PyPI PEP-759 proposal. This PEP proposes a unique approach to safely hosting external wheels while keeping PyPI as the source of truth for project and release metadata. It proposes a new uploadable file format, easily derived from .whl files, which includes only metadata pointing to an external wheel hosting service tied to organizational accounts (download.pytorch.org in our case). We are considering participating in this PEP initiative and would like feedback from PyTorch developers.
Potentially this PEP can solve some of the issues we are having during PyTorch releases :
- PyTorch PyPI channel size is constantly growing due to the large wheel size of PyTorch binary. Currently around ~900MB for 2.5 release. Implementing this approach will allow us to better control our channel size.
- Allow us to build a better test/staging PyPI channel to test binaries before publishing to PyPI. Currently we use [download.pytorch.org/whl/test](http://download.pytorch.org/whl/test) to test our wheels.
- Potentially give us the ability to host binaries with different CUDA versions on PyPI (Currently we only host CUDA 12.4 for Linux).
- Provide better user experience by eliminating need for specifying the index-url during install ``--index-url https://download.pytorch.org/whl/cpu``
Some possible concerns of this PEP:
- More CDN traffic on download.pytorch.org, hence higher cost
- Will this work well with other tools like Poetry ?
- Any potential security risks ?
Link to PEP-0759: https://peps.python.org/pep-0759/
Discuss Thread: https://discuss.python.org/t/pep-759-external-wheel-hosting/66458
cc @seemethere @malfet @osalpekar @albanD @ptrblck @ezyang @aterrel @warsawnv @DEKHTIARJonathan @gottbrath @rgommers
| module: binaries,triaged | low | Major |
2,635,867,580 | langchain | Local LLM doesnt stop after encountering the STOP token | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Create the agent using create_react_agent instead of initialize_agent
from langchain.agents import AgentExecutor, create_react_agent
prompt = hub.pull("hwchase17/react")
llm_withStop = llm.bind(stop=["\nObservation"])
agent = create_react_agent(
llm=llm_withStop,
tools=all_tools,
prompt=prompt,
)
agent.middle[1].kwargs = {'stop': ['Question:']}
# Create the agent executor properly
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=all_tools,
verbose=True,
handle_parsing_errors=True,
max_iterations=1
)
```
I have added `Question:` as a stop criterion, but it still produces `Question:` multiple times and keeps going:
<img width="1182" alt="image" src="https://github.com/user-attachments/assets/87c4ce78-fc7d-4281-9750-f761beb5de3d">
This is my model
```python
model_name = "microsoft/Phi-3-mini-4k-instruct"
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
# chat_model = pipeline("text-generation", model=model, tokenizer=tokenizer) # Set device to 0 for GPU
chat_model = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_length=7000,
temperature=0.1,
do_sample=True,
)
llm = HuggingFacePipeline(pipeline=chat_model)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to make the LLM stop generating after it has produced the final output in a ReAct chain with custom tools. It doesn't stop after the final output or the bound stop token; it only stops once the maximum length is reached.
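As a stopgap while the pipeline-level stop handling is sorted out, the generated text can be truncated at the first stop string after the fact. This is plain string handling, not a LangChain or transformers API:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut `text` at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

Wrapping the pipeline's output with this keeps the agent's output parser from ever seeing the runaway `Question:` blocks.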
### System Info
absl-py==1.4.0
accelerate==0.34.2
aiohappyeyeballs==2.4.3
aiohttp==3.10.10
aiosignal==1.3.1
alabaster==0.7.16
albucore==0.0.19
albumentations==1.4.20
altair==4.2.2
annotated-types==0.7.0
anyio==3.7.1
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
array_record==0.5.1
arviz==0.20.0
astropy==6.1.4
astropy-iers-data==0.2024.10.28.0.34.7
astunparse==1.6.3
async-timeout==4.0.3
atpublic==4.1.0
attrs==24.2.0
audioread==3.0.1
autograd==1.7.0
babel==2.16.0
backcall==0.2.0
beautifulsoup4==4.12.3
bigframes==1.25.0
bigquery-magics==0.4.0
bleach==6.2.0
blinker==1.4
blis==0.7.11
blosc2==2.0.0
bokeh==3.4.3
Bottleneck==1.4.2
bqplot==0.12.43
branca==0.8.0
CacheControl==0.14.0
cachetools==5.5.0
catalogue==2.0.10
certifi==2024.8.30
cffi==1.17.1
chardet==5.2.0
charset-normalizer==3.4.0
chex==0.1.87
clarabel==0.9.0
click==8.1.7
cloudpathlib==0.20.0
cloudpickle==3.1.0
cmake==3.30.5
cmdstanpy==1.2.4
colorcet==3.1.0
colorlover==0.3.0
colour==0.1.5
community==1.0.0b1
confection==0.1.5
cons==0.4.6
contourpy==1.3.0
cryptography==43.0.3
cuda-python==12.2.1
cudf-cu12 @ https://pypi.nvidia.com/cudf-cu12/cudf_cu12-24.10.1-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl
cufflinks==0.17.3
cupy-cuda12x==12.2.0
cvxopt==1.3.2
cvxpy==1.5.3
cycler==0.12.1
cymem==2.0.8
Cython==3.0.11
dask==2024.10.0
dataclasses-json==0.6.7
datascience==0.17.6
db-dtypes==1.3.0
dbus-python==1.2.18
debugpy==1.6.6
decorator==4.4.2
defusedxml==0.7.1
Deprecated==1.2.14
diffusers==0.30.3
distro==1.9.0
dlib==19.24.2
dm-tree==0.1.8
docker-pycreds==0.4.0
docstring_parser==0.16
docutils==0.18.1
dopamine_rl==4.0.9
duckdb==1.1.2
earthengine-api==1.2.0
easydict==1.13
ecos==2.0.14
editdistance==0.8.1
eerepr==0.0.4
einops==0.8.0
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl#sha256=86cc141f63942d4b2c5fcee06630fd6f904788d2f0ab005cce45aadb8fb73889
entrypoints==0.4
et_xmlfile==2.0.0
etils==1.10.0
etuples==0.3.9
eval_type_backport==0.2.0
exceptiongroup==1.2.2
fastai==2.7.18
fastcore==1.7.19
fastdownload==0.0.7
fastjsonschema==2.20.0
fastprogress==1.0.3
fastrlock==0.8.2
filelock==3.16.1
firebase-admin==6.5.0
Flask==2.2.5
flatbuffers==24.3.25
flax==0.8.5
folium==0.18.0
fonttools==4.54.1
frozendict==2.4.6
frozenlist==1.5.0
fsspec==2024.10.0
future==1.0.0
gast==0.6.0
gcsfs==2024.10.0
GDAL==3.6.4
gdown==5.2.0
geemap==0.35.0
gensim==4.3.3
geocoder==1.38.1
geographiclib==2.0
geopandas==1.0.1
geopy==2.4.1
gin-config==0.5.0
gitdb==4.0.11
GitPython==3.1.43
glob2==0.7
google==2.0.3
google-ai-generativelanguage==0.6.10
google-api-core==2.19.2
google-api-python-client==2.137.0
google-auth==2.27.0
google-auth-httplib2==0.2.0
google-auth-oauthlib==1.2.1
google-cloud-aiplatform==1.70.0
google-cloud-bigquery==3.25.0
google-cloud-bigquery-connection==1.15.5
google-cloud-bigquery-storage==2.27.0
google-cloud-bigtable==2.26.0
google-cloud-core==2.4.1
google-cloud-datastore==2.19.0
google-cloud-firestore==2.16.1
google-cloud-functions==1.16.5
google-cloud-iam==2.16.0
google-cloud-language==2.13.4
google-cloud-pubsub==2.25.0
google-cloud-resource-manager==1.13.0
google-cloud-storage==2.8.0
google-cloud-translate==3.15.5
google-colab @ file:///colabtools/dist/google_colab-1.0.0.tar.gz
google-crc32c==1.6.0
google-generativeai==0.8.3
google-pasta==0.2.0
google-resumable-media==2.7.2
google_search_results==2.4.2
googleapis-common-protos==1.65.0
googledrivedownloader==0.4
graphviz==0.20.3
greenlet==3.1.1
grpc-google-iam-v1==0.13.1
grpcio==1.64.1
grpcio-status==1.48.2
gspread==6.0.2
gspread-dataframe==3.3.1
gym==0.25.2
gym-notices==0.0.8
h11==0.14.0
h5netcdf==1.4.0
h5py==3.12.1
holidays==0.59
holoviews==1.19.1
html5lib==1.1
httpcore==1.0.6
httpimport==1.4.0
httplib2==0.22.0
httpx==0.27.2
httpx-sse==0.4.0
huggingface-hub==0.24.7
humanize==4.11.0
hyperopt==0.2.7
ibis-framework==9.2.0
idna==3.10
imageio==2.36.0
imageio-ffmpeg==0.5.1
imagesize==1.4.1
imbalanced-learn==0.12.4
imgaug==0.4.0
immutabledict==4.2.0
importlib_metadata==8.5.0
importlib_resources==6.4.5
imutils==0.5.4
inflect==7.4.0
iniconfig==2.0.0
intel-cmplr-lib-ur==2025.0.0
intel-openmp==2025.0.0
ipyevents==2.0.2
ipyfilechooser==0.6.0
ipykernel==5.5.6
ipyleaflet==0.19.2
ipyparallel==8.8.0
ipython==7.34.0
ipython-genutils==0.2.0
ipython-sql==0.5.0
ipytree==0.2.2
ipywidgets==7.7.1
itsdangerous==2.2.0
jax==0.4.33
jax-cuda12-pjrt==0.4.33
jax-cuda12-plugin==0.4.33
jaxlib==0.4.33
jeepney==0.7.1
jellyfish==1.1.0
jieba==0.42.1
Jinja2==3.1.4
jiter==0.6.1
joblib==1.4.2
jsonpatch==1.33
jsonpickle==3.3.0
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter-client==6.1.12
jupyter-console==6.1.0
jupyter-leaflet==0.19.2
jupyter-server==1.24.0
jupyter_core==5.7.2
jupyterlab_pygments==0.3.0
jupyterlab_widgets==3.0.13
kaggle==1.6.17
kagglehub==0.3.3
keras==3.4.1
keyring==23.5.0
kiwisolver==1.4.7
langchain==0.3.7
langchain-community==0.3.5
langchain-core==0.3.15
langchain-huggingface==0.1.2
langchain-text-splitters==0.3.0
langcodes==3.4.1
langsmith==0.1.137
language_data==1.2.0
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy_loader==0.4
libclang==18.1.1
libcudf-cu12 @ https://pypi.nvidia.com/libcudf-cu12/libcudf_cu12-24.10.1-py3-none-manylinux_2_28_x86_64.whl
librosa==0.10.2.post1
lightgbm==4.5.0
linkify-it-py==2.0.3
llvmlite==0.43.0
locket==1.0.0
logical-unification==0.4.6
lxml==5.3.0
marisa-trie==1.2.1
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
marshmallow==3.23.1
matplotlib==3.8.0
matplotlib-inline==0.1.7
matplotlib-venn==1.1.1
mdit-py-plugins==0.4.2
mdurl==0.1.2
miniKanren==1.0.3
missingno==0.5.2
mistune==3.0.2
mizani==0.13.0
mkl==2024.2.2
ml-dtypes==0.4.1
mlxtend==0.23.1
more-itertools==10.5.0
moviepy==1.0.3
mpmath==1.3.0
msgpack==1.1.0
multidict==6.1.0
multipledispatch==1.0.0
multitasking==0.0.11
murmurhash==1.0.10
music21==9.1.0
mypy-extensions==1.0.0
namex==0.0.8
natsort==8.4.0
nbclassic==1.1.0
nbclient==0.10.0
nbconvert==7.16.4
nbformat==5.10.4
nest-asyncio==1.6.0
networkx==3.4.2
nibabel==5.3.2
nltk==3.8.1
notebook==6.5.5
notebook_shim==0.2.4
numba==0.60.0
numexpr==2.10.1
numpy==1.26.4
nvidia-cublas-cu12==12.6.3.3
nvidia-cuda-cupti-cu12==12.6.80
nvidia-cuda-nvcc-cu12==12.6.77
nvidia-cuda-runtime-cu12==12.6.77
nvidia-cudnn-cu12==9.5.1.17
nvidia-cufft-cu12==11.3.0.4
nvidia-curand-cu12==10.3.7.77
nvidia-cusolver-cu12==11.7.1.2
nvidia-cusparse-cu12==12.5.4.2
nvidia-nccl-cu12==2.23.4
nvidia-nvjitlink-cu12==12.6.77
nvtx==0.2.10
nx-cugraph-cu12 @ https://pypi.nvidia.com/nx-cugraph-cu12/nx_cugraph_cu12-24.10.0-py3-none-any.whl
oauth2client==4.1.3
oauthlib==3.2.2
openai==1.52.2
opencv-contrib-python==4.10.0.84
opencv-python==4.10.0.84
opencv-python-headless==4.10.0.84
openpyxl==3.1.5
opentelemetry-api==1.16.0
opentelemetry-sdk==1.16.0
opentelemetry-semantic-conventions==0.37b0
opt_einsum==3.4.0
optax==0.2.3
optree==0.13.0
orbax-checkpoint==0.6.4
orjson==3.10.10
osqp==0.6.7.post3
packaging==24.1
pandas==2.2.2
pandas-datareader==0.10.0
pandas-gbq==0.24.0
pandas-stubs==2.2.2.240909
pandocfilters==1.5.1
panel==1.4.5
param==2.1.1
parso==0.8.4
parsy==2.1
partd==1.4.2
pathlib==1.0.1
patsy==0.5.6
peewee==3.17.7
peft==0.13.2
pexpect==4.9.0
pickleshare==0.7.5
pillow==10.4.0
platformdirs==4.3.6
plotly==5.24.1
plotnine==0.14.0
pluggy==1.5.0
polars==1.9.0
pooch==1.8.2
portpicker==1.5.2
preshed==3.0.9
prettytable==3.11.0
proglog==0.1.10
progressbar2==4.5.0
prometheus_client==0.21.0
promise==2.3
prompt_toolkit==3.0.48
propcache==0.2.0
prophet==1.1.6
proto-plus==1.25.0
protobuf==3.20.3
psutil==5.9.5
psycopg2==2.9.10
ptyprocess==0.7.0
py-cpuinfo==9.0.0
py4j==0.10.9.7
pyarrow==17.0.0
pyarrow-hotfix==0.6
pyasn1==0.6.1
pyasn1_modules==0.4.1
pycocotools==2.0.8
pycparser==2.22
pydantic==2.9.2
pydantic-settings==2.6.1
pydantic_core==2.23.4
pydata-google-auth==1.8.2
pydot==3.0.2
pydotplus==2.0.2
PyDrive==1.3.1
PyDrive2==1.20.0
pyerfa==2.0.1.4
pygame==2.6.1
pygit2==1.16.0
Pygments==2.18.0
PyGObject==3.42.1
PyJWT==2.9.0
pylibcudf-cu12 @ https://pypi.nvidia.com/pylibcudf-cu12/pylibcudf_cu12-24.10.1-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl
pylibcugraph-cu12==24.10.0
pylibraft-cu12==24.10.0
pymc==5.17.0
pymystem3==0.2.0
pynvjitlink-cu12==0.4.0
pyogrio==0.10.0
PyOpenGL==3.1.7
pyOpenSSL==24.2.1
pyparsing==3.2.0
pyperclip==1.9.0
pyproj==3.7.0
pyshp==2.3.1
PySocks==1.7.1
pyspark==3.5.3
pytensor==2.25.5
pytest==7.4.4
python-apt==0.0.0
python-box==7.2.0
python-dateutil==2.8.2
python-dotenv==1.0.1
python-louvain==0.16
python-slugify==8.0.4
python-utils==3.9.0
pytz==2024.2
pyviz_comms==3.0.3
PyYAML==6.0.2
pyzmq==24.0.1
qdldl==0.1.7.post4
ratelim==0.1.6
referencing==0.35.1
regex==2024.9.11
requests==2.32.3
requests-oauthlib==1.3.1
requests-toolbelt==1.0.0
requirements-parser==0.9.0
rich==13.9.3
rmm-cu12==24.10.0
rpds-py==0.20.0
rpy2==3.4.2
rsa==4.9
safetensors==0.4.5
scikit-image==0.24.0
scikit-learn==1.5.2
scipy==1.13.1
scooby==0.10.0
scs==3.2.7
seaborn==0.13.2
SecretStorage==3.3.1
Send2Trash==1.8.3
sentence-transformers==3.2.1
sentencepiece==0.2.0
sentry-sdk==2.17.0
setproctitle==1.3.3
shap==0.46.0
shapely==2.0.6
shellingham==1.5.4
simple-parsing==0.1.6
six==1.16.0
sklearn-pandas==2.2.0
slicer==0.0.8
smart-open==7.0.5
smmap==5.0.1
sniffio==1.3.1
snowballstemmer==2.2.0
soundfile==0.12.1
soupsieve==2.6
soxr==0.5.0.post1
spacy==3.7.5
spacy-legacy==3.0.12
spacy-loggers==1.0.5
Sphinx==5.0.2
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
SQLAlchemy==2.0.35
sqlglot==25.1.0
sqlparse==0.5.1
srsly==2.4.8
stanio==0.5.1
statsmodels==0.14.4
StrEnum==0.4.15
stringzilla==3.10.6
sympy==1.13.1
tables==3.8.0
tabulate==0.9.0
tbb==2021.13.1
tcmlib==1.2.0
tenacity==9.0.0
tensorboard==2.17.0
tensorboard-data-server==0.7.2
tensorflow==2.17.0
tensorflow-datasets==4.9.6
tensorflow-hub==0.16.1
tensorflow-io-gcs-filesystem==0.37.1
tensorflow-metadata==1.16.1
tensorflow-probability==0.24.0
tensorstore==0.1.67
termcolor==2.5.0
terminado==0.18.1
text-unidecode==1.3
textblob==0.17.1
tf-slim==1.1.0
tf_keras==2.17.0
thinc==8.2.5
threadpoolctl==3.5.0
tifffile==2024.9.20
timm==1.0.11
tinycss2==1.4.0
tokenizers==0.19.1
toml==0.10.2
tomli==2.0.2
toolz==0.12.1
torch @ https://download.pytorch.org/whl/cu121_full/torch-2.5.0%2Bcu121-cp310-cp310-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/cu121_full/torchaudio-2.5.0%2Bcu121-cp310-cp310-linux_x86_64.whl
torchsummary==1.5.1
torchvision @ https://download.pytorch.org/whl/cu121_full/torchvision-0.20.0%2Bcu121-cp310-cp310-linux_x86_64.whl
tornado==6.3.3
tqdm==4.66.6
traitlets==5.7.1
traittypes==0.2.1
transformers==4.44.2
tweepy==4.14.0
typeguard==4.4.0
typer==0.12.5
types-pytz==2024.2.0.20241003
types-setuptools==75.2.0.20241025
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
tzlocal==5.2
uc-micro-py==1.0.3
umf==0.9.0
uritemplate==4.1.1
urllib3==2.2.3
vega-datasets==0.9.0
wadllib==1.3.6
wandb==0.18.5
wasabi==1.1.3
wcwidth==0.2.13
weasel==0.4.1
webcolors==24.8.0
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.0.6
widgetsnbextension==3.6.10
wordcloud==1.9.3
wrapt==1.16.0
xarray==2024.10.0
xarray-einstats==0.8.0
xgboost==2.1.2
xlrd==2.0.1
xyzservices==2024.9.0
yarl==1.17.0
yellowbrick==1.5
yfinance==0.2.48
zipp==3.20.2 | ๐ค:bug | low | Critical |
2,635,872,003 | tensorflow | Cannot Convert 51 to a shape - Movenet pose classification tutorial | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
No
### OS platform and distribution
Win11, colab notebook
### Mobile device
_No response_
### Python version
3.10.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
the notebook pose_classification.ipynb tutorial works fine until the following cell:
```python
# Define the model
inputs = tf.keras.Input(shape=(51))
embedding = landmarks_to_embedding(inputs)

layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer)

model = keras.Model(inputs, outputs)
model.summary()
```
I've run this both locally and in Colab and get the same results. Any advice or insight would be helpful.
### Standalone code to reproduce the issue
```shell
https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb#scrollTo=1Pte6b1bgWKv
```
### Relevant log output
```shell
ValueError Traceback (most recent call last)
<ipython-input-28-f905a19927b3> in <cell line: 2>()
1 # Define the model
----> 2 inputs = tf.keras.Input(shape=(51))
3 embedding = landmarks_to_embedding(inputs)
4
5 layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
2 frames
/usr/local/lib/python3.10/dist-packages/keras/src/backend/common/variables.py in standardize_shape(shape)
528 raise ValueError("Undefined shapes are not supported.")
529 if not hasattr(shape, "__iter__"):
--> 530 raise ValueError(f"Cannot convert '{shape}' to a shape.")
531 if config.backend() == "tensorflow":
532 if isinstance(shape, tf.TensorShape):
ValueError: Cannot convert '51' to a shape.
```
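For what it's worth, the guard shown in the traceback (`if not hasattr(shape, "__iter__")`) explains the failure: in Python, `(51)` is just the integer `51`, while `(51,)` is a one-element tuple, and Keras 3 no longer coerces a bare int into a shape. A standalone sketch of that distinction (the function name here is illustrative, not the actual Keras internals):

```python
def standardize_shape_sketch(shape):
    # Mirrors the guard from the traceback: anything without __iter__
    # (such as a bare int) is rejected; a tuple or list passes.
    if not hasattr(shape, "__iter__"):
        raise ValueError(f"Cannot convert '{shape}' to a shape.")
    return tuple(shape)

# (51) is just the int 51 -- the parentheses are grouping, not a tuple.
assert (51) == 51
assert standardize_shape_sketch((51,)) == (51,)
```

So changing the cell to `inputs = tf.keras.Input(shape=(51,))` (note the trailing comma) would presumably get past this error; whether the rest of the tutorial is Keras-3-clean is a separate question.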
| stat:awaiting tensorflower,type:support,comp:lite,2.17 | low | Critical |
2,635,874,682 | flutter | iOS "Scribble" feature: cannot tap on text selection controls with Apple Pencil | ### Steps to reproduce
1. Add a `TextField` or a `CupertinoTextField` to the application (see sample code below).
2. Add some text using the [scribble](https://support.apple.com/guide/ipad/enter-text-with-scribble-ipad355ab2a7/ipados) feature with the Apple Pencil.
3. Select some of the text.
4. Now try to hit a button on the text selection controls (e.g. "copy") **using the Apple Pencil**.
### Expected results
As in other iOS apps, it should be possible to copy, cut, etc. by hitting the corresponding button with the **Apple Pencil**.
### Actual results
The button is not hit; instead, the input seems to be directed to the underlying canvas. (See attached video.)
If the buttons are tapped with the finger, though, they work just fine.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: ConstrainedBox(
constraints: const BoxConstraints(maxWidth: 450),
child: CupertinoTextField(
scribbleEnabled: true,
selectionControls: cupertinoTextSelectionControls,
),
),
),
),
);
}
}
```
(Note that I customized the `selectionControls` parameter because of #158177).
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/e7401272-047f-4581-92ab-bdf1d1dd3723
</details>
### Logs
<details open><summary>Logs</summary>
(not applicable)
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.4, on macOS 15.1 24B83 darwin-arm64, locale en-US)
โข Flutter version 3.24.4 on channel stable at /Users/redge/Shared/SDKs/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 603104015d (12 days ago), 2024-10-24 08:01:25 -0700
โข Engine revision db49896cf2
โข Dart version 3.5.4
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/redge/Library/Android/sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 16B40
โข CocoaPods version 1.16.1
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2024.1)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[โ] IntelliJ IDEA Ultimate Edition (version 2022.3.1)
โข IntelliJ at /Applications/IntelliJ IDEA.app
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
[โ] VS Code (version 1.95.1)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.101.20241031
[โ] Connected device (5 available)
โข Daniels iPad Air (mobile) โข 00008112-000108C13E81A01E โข ios โข iOS 18.1 22B83
โข iPhone Mini Daniel (mobile) โข 00008110-001C410C2112401E โข ios โข iOS 18.1 22B83
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 15.1 24B83 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 15.1 24B83 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 130.0.6723.92
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| a: text input,platform-ios,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Major |
2,635,884,033 | go | cmd/go: add fips140 module selection mechanism | > [!IMPORTANT]
> Nov 20, 2024: The latest version of the proposal is [here](https://github.com/golang/go/issues/70200#issuecomment-2490017956).
_Update_, Nov 11 2024: An updated version is at https://github.com/golang/go/issues/70200#issuecomment-2468562595. The change is to stop presenting the choice as anything like a module, and instead to use a GOFIPS140 environment variable, analogous to GOOS, GOARCH, CGO\_ENABLED, and so on.
- - -
We are working toward getting Go crypto FIPS validated, to be able to remove BoringCrypto. The relevant code is going to be `crypto/internal/fips/...`. People are going to need to be able to select between a few different crypto/internal/fips trees when using any given Go distribution. We propose to treat the crypto/internal/fips tree as its own pseudo-module, with a new go build flag, `-fips140`, to choose the version.
In general, any build needs to choose between the actual latest source in $GOROOT/src/crypto/internal/fips and an earlier source snapshot stored in module zip form in $GOROOT/lib/fips140. (Earlier snapshots will have been through more validation processes and may be necessary for certain government-related use.)
The crypto/internal/fips module will use these version numbers:
- the version in the actual Go 1.X.Y source tree will be called v1.X.Y.
- snapshots taken ahead of Go 1.X.Y will be called v1.X.Y-fips.N for N starting at 1 and incrementing.
Note that the versions order correctly: v1.24.0-fips.1 < v1.24.0. The idea is that toward the end of the 1.24.0 cycle, a snapshot would be taken and sent to a lab for validation, so that by the time 1.24.0 is released, "v1.24.0-fips.1" will be an "in-process" FIPS module, meaning a lab has validated it but NIST has not yet signed off. (When NIST signs off, it becomes a "certified" module.)
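The ordering claim follows from semver precedence: a pre-release (here the `-fips.N` suffix) sorts before the corresponding release. A toy comparator, written in Python for brevity and covering only the `v1.X.Y` / `v1.X.Y-fips.N` shapes used in this proposal (illustrative only; the toolchain would use the x/mod semver implementation):

```python
def semver_key(v):
    # Sort key for versions shaped like "v1.X.Y" or "v1.X.Y-fips.N".
    core, _, pre = v.lstrip("v").partition("-")
    nums = tuple(int(x) for x in core.split("."))
    # A version WITH a pre-release part precedes the release itself,
    # and pre-releases of the same release order by their N.
    if pre:
        name, _, n = pre.partition(".")
        return nums + (0, name, int(n))
    return nums + (1, "", 0)

versions = ["v1.24.0", "v1.24.0-fips.2", "v1.24.0-fips.1"]
assert sorted(versions, key=semver_key) == \
    ["v1.24.0-fips.1", "v1.24.0-fips.2", "v1.24.0"]
```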
The `-fips140` flag can take one of the following possible values:
- `-fips140=off` (the default) means to use the source in `$GOROOT/src/crypto/internal/fips` and set the default GODEBUG for compiled binaries to [`fips140=off`](https://go.dev/issue/70123). All other flag settings set the default GODEBUG to `fips140=on`.
- `-fips140=latest` means to use the source in `$GOROOT/src/crypto/internal/fips` (and default binaries to `fips140=on`, which I will stop saying).
- `-fips140=v1.X.Y-fips.N` means to use the snapshot in `$GOROOT/lib/fips140/v1.X.Y-fips.N.zip`.
- `-fips140=inprocess` means to use the version listed in `$GOROOT/lib/fips140/inprocess.txt` (it will be `v1.X.Y-fips.N` for some X, Y, N).
- `-fips140=certified` means to use the version listed in `$GOROOT/lib/fips140/certified.txt` (it too will be `v1.X.Y-fips.N` for some X, Y, N).
Note that the definitions of `inprocess` and `certified` are specific to the Go toolchain being used, to keep builds reproducible. (That is, NIST issuing a certification for a new version does not change the meaning of -fips140=certified in an older Go release.)
crypto/internal/fips will be like a module in the sense that:
- When you run go list, packages in crypto/internal/fips/... will be reported as coming from a module named crypto/internal/fips.
- crypto/internal/fips has a set of versions, module zips, and checksums.
- Earlier snapshots maintained in $GOROOT/lib/fips140 will use the module zip format.
- When using an earlier snapshot, the go command unpacks the module zip into the module cache and then uses that source in the build.
- Programs that use go list or x/tools/go/packages will see the module information for crypto/internal/fips and will see source coming from the module cache when using earlier snapshots.
- crypto/internal/fips will be listed in the `go version -m` output with its own version, in all builds using those packages. This will enable tools like govulncheck to report vulnerabilities accurately.
crypto/internal/fips will be special, or unlike a module, in these ways:
- The module zips will only ever be loaded from $GOROOT/lib/fips140, never from the module proxy.
- The checksums for the zips will only ever be loaded from $GOROOT/lib/fips140/fips140.sum, not from the checksum database.
- crypto/internal/fips will not be listed in go.mod or participate in module version selection; the `-fips140` build flag is the only way to choose the version.
- The name "crypto/internal/fips" does not start with a domain name element, which violates golang.org/x/mod/module.CheckPath ("missing dot in first path element").
- In -mod=vendor builds, fips code will come from the fips places ($GOROOT or else an unpacked zip), never from the vendor directory.
The specialness seems unavoidable. We could go all the way and make fips completely different and not look anything like a module, but it is necessary to be able to say what version of fips a given program uses, and modules are the vocabulary we use for talking about versions. So it makes sense to reuse that vocabulary. In general the special cases seem like they will not cause much trouble. In particular, programs that use x/tools/go/packages or go list to find information about packages in a build will see a Dir set to a directory in the module cache when the alternate fips locations are being used. Since they already handle seeing other packages in the module cache, they should keep working without any changes.
The only special case that might cause trouble is if clients of module.CheckPath try to check "crypto/internal/fips". If that turns out to be a problem, we can update module.CheckPath to allow crypto/internal/fips as a special case. In my prototype of this functionality, I skipped the calls to module.CheckPath in the few places where it mattered, avoiding any changes to x/mod.
I have a prototype of this code working. It needs some cleanup and tests, but the necessary changes are quite small:
```
src/cmd/go/internal/cfg/cfg.go | 1 +
src/cmd/go/internal/load/pkg.go | 5 +-
src/cmd/go/internal/modfetch/cache.go | 11 +-
src/cmd/go/internal/modfetch/codehost/git.go | 2 +
src/cmd/go/internal/modfetch/coderepo.go | 14 +-
src/cmd/go/internal/modfetch/fetch.go | 2 +-
src/cmd/go/internal/modfetch/fips.go | 239 +++++++++++++++++++++++++++
src/cmd/go/internal/modfetch/repo.go | 11 +-
src/cmd/go/internal/modfetch/sumdb.go | 4 +
src/cmd/go/internal/modload/build.go | 10 +-
src/cmd/go/internal/modload/import.go | 10 +-
src/cmd/go/internal/modload/init.go | 8 +
src/cmd/go/internal/modload/load.go | 2 +-
src/cmd/go/internal/modload/query.go | 2 +-
src/cmd/go/internal/work/build.go | 7 +-
15 files changed, 306 insertions(+), 22 deletions(-)
```
In summary, the proposal is to establish the crypto/internal/fips pseudo-module with the version scheme and properties described above, and to add the `-fips140` go build flag. (Note that โbuild flagsโ apply not just to `go build` but to most other go commands, including `go install`, `go test`, and `go list`.) | Proposal,Proposal-Accepted | high | Critical |
2,635,884,509 | vscode | API for native window handle | For Broker support in Microsoft Authentication (https://github.com/microsoft/vscode/issues/229431) we need the window handle for this API:
https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/brokering.md#window-parenting
So, I need to be able to access the window handle from Main alllll the way in an extension.
Here's the proposal:
```ts
export namespace env {
/**
* Retrieves the native window handle of the current window.
*
* @returns A promise that resolves to a Buffer containing the native window handle.
*/
export function getNativeWindowHandle(): Thenable<Buffer | undefined>;
}
```
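For context on the `Buffer` return type: Electron's `BrowserWindow.getNativeWindowHandle()` hands back the platform handle as raw bytes (an `HWND` on Windows), so the extension side would decode those bytes into an integer before passing them to MSAL. The round-trip, sketched in Python for brevity (assuming a little-endian 64-bit handle, as on x64 Windows):

```python
import struct

def encode_handle(hwnd: int) -> bytes:
    # What the main process would place in the Buffer: raw pointer bytes.
    return struct.pack("<Q", hwnd)

def decode_handle(buf: bytes) -> int:
    # What the extension does with the Buffer it receives
    # (Node equivalent: buf.readBigUInt64LE(0)).
    (hwnd,) = struct.unpack("<Q", buf)
    return hwnd

assert decode_handle(encode_handle(0x0001A1B2C3D4)) == 0x0001A1B2C3D4
```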
| feature-request,api,on-testplan,api-proposal,microsoft-authentication | low | Major |
2,635,920,236 | vscode | Chat: Picked model is not preserved |
Type: <b>Bug</b>
Once Claude Sonnet is selected in the Pick Model option within Chat, it reverts to GPT-4o after I restart my computer or reload my session.
VS Code version: Code - Insiders 1.96.0-insider (f87f8a56f3a30238076bee3db39c245bd69be264, 2024-11-05T05:04:15.310Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1145G7 @ 2.60GHz (8 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.71GB (5.46GB free)|
|Process Argv|--crash-reporter-id b05b88e5-8894-4031-ae34-fa034ebddea9|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (133)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-openapi|42C|4.29.2
zotenote|A-W|1.0.1
android-dev-ext|ade|1.4.0
aiprm-lang|AIP|0.0.2
Bookmarks|ale|13.5.0
openscad|Ant|1.2.2
spellright|ban|3.0.140
mermaid-markdown-syntax-highlighting|bpr|1.6.0
external-pdf|cha|1.2.0
doxdocgen|csc|1.4.0
vscode-markdownlint|Dav|0.53.0
vscode-eslint|dba|3.0.10
vscode-quick-select|dba|0.2.9
vscode-deno|den|3.42.0
gitlens|eam|14.6.1
EditorConfig|Edi|0.16.4
prettier-vscode|esb|10.1.0
figma-vscode-extension|fig|0.4.0
vscode-firefox-debug|fir|2.9.10
shell-format|fox|7.2.5
vscode-google-translate|fun|1.4.13
codespaces|Git|1.17.3
copilot|Git|1.243.1196
copilot-chat|Git|0.23.2024110501
remotehub|Git|0.64.0
vscode-github-actions|git|0.26.2
vscode-pull-request-github|Git|0.101.2024110504
cloudcode|goo|2.20.0
overleaf-workshop|iam|0.14.1
cslpreview|igo|0.2.2
path-autocomplete|ion|1.25.0
latex-workshop|Jam|10.5.6
lilypond-syntax|jea|0.1.1
scheme|jea|0.2.0
better-cpp-syntax|jef|1.17.2
commitlint|jos|2.6.0
language-julia|jul|1.127.2
google-search|kam|0.0.1
vscode-lua-format|Koi|1.3.8
lilypond-formatter|lhl|0.2.3
lilypond-pdf-preview|lhl|0.2.8
lilypond-snippets|lhl|0.1.1
vslilypond|lhl|1.7.3
language-matlab|Mat|1.2.6
git-graph|mhu|1.30.0
mongodb-vscode|mon|1.9.3
azure-dev|ms-|0.8.4
vscode-azureappservice|ms-|0.25.4
vscode-azurecontainerapps|ms-|0.6.1
vscode-azurefunctions|ms-|1.15.4
vscode-azureresourcegroups|ms-|0.8.3
vscode-azurestaticwebapps|ms-|0.12.2
vscode-azurestorage|ms-|0.16.1
vscode-azurevirtualmachines|ms-|0.6.6
vscode-cosmosdb|ms-|0.23.0
vscode-docker|ms-|1.29.3
vscode-edge-devtools|ms-|2.1.6
black-formatter|ms-|2024.5.12841012
debugpy|ms-|2024.13.2024103001
flake8|ms-|2023.13.12291011
isort|ms-|2023.13.12321012
python|ms-|2024.19.2024103101
vscode-pylance|ms-|2024.11.1
jupyter|ms-|2024.11.2024103101
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.8
vscode-jupyter-slideshow|ms-|0.1.5
remote-containers|ms-|0.390.0
remote-ssh|ms-|0.116.2024102915
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.81.8
vscode-remote-extensionpack|ms-|0.25.0
azure-account|ms-|0.12.0
azure-repos|ms-|0.40.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.23.0
cpptools-extension-pack|ms-|1.3.0
js-debug-nightly|ms-|2024.11.417
live-server|ms-|0.5.2024091601
remote-explorer|ms-|0.5.2024011009
remote-repositories|ms-|0.42.0
remote-server|ms-|1.6.2024011109
vscode-commander|ms-|0.2.0
vscode-copilot-data-analysis|ms-|0.2.2
vscode-copilot-vision|ms-|0.2.2024110422
vscode-github-issue-notebooks|ms-|0.0.130
vscode-node-azure-pack|ms-|1.2.0
vscode-selfhost-test-provider|ms-|0.3.25
vscode-serial-monitor|ms-|0.13.1
vscode-speech|ms-|0.12.1
vscode-speech-language-pack-en-ca|ms-|0.5.0
vscode-speech-language-pack-en-gb|ms-|0.5.0
vscode-speech-language-pack-ko-kr|ms-|0.5.0
vscode-websearchforcopilot|ms-|0.1.2024110502
vsliveshare|ms-|1.0.5941
windows-ai-studio|ms-|0.5.2024102408
autodocstring|njp|0.6.1
pandocciter|not|0.10.4
typst-lsp|nva|0.13.0
publisher|pos|1.2.1
shiny|Pos|1.1.0
shinyuieditor|pos|0.5.0
quarto|qua|1.116.0
r-debugger|RDe|0.5.5
java|red|1.36.0
vscode-xml|red|0.27.1
vscode-yaml|red|1.14.0
r|REd|2.8.4
multi-command|ryu|1.6.0
AudioQ|Seh|0.0.2
vscode-deepl|soe|1.1.1
abc-music|sof|0.4.0
lua|sum|3.12.0
latex-utilities|tec|0.4.14
cmake|twx|0.0.17
msft-todo-unofficial|tyl|0.0.18
vscode-terminal-here|Tyr|0.2.4
windows-terminal|Tyr|0.7.0
errorlens|use|3.16.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.2.30
vscode-conventional-commits|viv|1.26.0
vscode-arduino|vsc|0.7.1
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
zoterobackwardsearcher|vsz|0.0.1
markdown-all-in-one|yzh|3.6.2
grammarly|znc|0.25.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
01bff139:31013167
dvdeprecation:31040973
dwnewjupyter:31046869
impr_priority:31057980
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
12bdf347:31141542
iacca1:31150324
notype1:31143044
dwcopilot:31158714
g7688163:31155431
```
</details>
<!-- generated by issue reporter --> | bug,quick-chat | low | Critical |
2,635,930,709 | flutter | Navigator.pushReplacement + Navigator.pop results in assertion failure | ### Steps to reproduce
I'm working on an application which has a profile editor screen. When the app is first opened, the profile editor is the first screen shown, so the user can add a profile; it is then reused when the profile is edited. This component calls Navigator.pushReplacement on save to redirect to the home screen. It produced the pasted log message when it was used to create the profile, but not when it was used to edit it.
I analyzed it and could reproduce it with the sample below. It turned out that a lower-level component called Navigator.pop after it invoked a callback which called pushReplacement.
I know it's dumb and I guess it should not be used like this, but it is quite easy to get into this situation when you have multiple levels of components plus callbacks. I could fix it by not calling pop in the low-level component.
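The failing invariant is easy to model: with a single route on the stack, pushReplacement removes the current route and pushes the new one, so a stray pop afterwards leaves `_history` empty and the next build trips the assertion. A toy model of the sequence (in Python for illustration; the real Navigator bookkeeping is far more involved):

```python
class ToyNavigator:
    def __init__(self, first_route):
        self._history = [first_route]

    def push_replacement(self, route):
        # Replace the top of the stack, like Navigator.pushReplacement.
        self._history.pop()
        self._history.append(route)

    def pop(self):
        self._history.pop()

    def build(self):
        # The invariant Navigator.build asserts on every rebuild.
        assert self._history, "'_history.isNotEmpty': is not true"

nav = ToyNavigator("profile_editor")
nav.push_replacement("home")  # stack: ["home"]
nav.pop()                     # the stray low-level pop -> stack: []
# nav.build() would now raise, mirroring the framework's failed assertion
```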
However, I'm opening this ticket because I've found several other unsolved tickets with really similar messages and stack traces, so I guess other devs have run into this situation as well. Some examples:
- https://github.com/flutter/flutter/issues/153217
- https://github.com/flutter/flutter/issues/149581
- https://github.com/flutter/flutter/issues/132541
- https://github.com/flutter/flutter/issues/132380
- https://github.com/flutter/flutter/issues/131086
- https://github.com/flutter/flutter/issues/82035
Steps:
1. Create a project with the provided sample
2. Run it (was tested on Debian 12 and with an android emulator)
3. Click on the save button
### Expected results
The assertion does not fail. Even if pop is called after pushReplacement, the replacement is performed as expected.
### Actual results
The assertion fails in the situation described in "Steps to reproduce".
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
โโโก EXCEPTION CAUGHT BY WIDGETS LIBRARY โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
The following assertion was thrown building Navigator-[GlobalObjectKey<NavigatorState>
_WidgetsAppState#100be](dirty, dependencies: [HeroControllerScope, UnmanagedRestorationScope],
state: NavigatorState#e3fa4(tickers: tracking 1 ticker)):
'package:flutter/src/widgets/navigator.dart': Failed assertion: line 5609 pos 12:
'_history.isNotEmpty': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially
more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=2_bug.yml
The relevant error-causing widget was:
MaterialApp MaterialApp:file:///home/vargab95/workspace/vargary/babybloom/lib/main.dart:94:18
When the exception was thrown, this was the stack:
#2 NavigatorState.build (package:flutter/src/widgets/navigator.dart:5609:12)
#3 StatefulElement.build (package:flutter/src/widgets/framework.dart:5729:27)
#4 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5617:15)
#5 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#6 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#7 BuildScope._tryRebuild (package:flutter/src/widgets/framework.dart:2693:15)
#8 BuildScope._flushDirtyElements (package:flutter/src/widgets/framework.dart:2752:11)
#9 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:3048:18)
#10 WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:1162:21)
#11 RendererBinding._handlePersistentFrameCallback
(package:flutter/src/rendering/binding.dart:468:5)
#12 SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:1397:15)
#13 SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:1318:9)
#14 SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:1176:5)
#15 _invoke (dart:ui/hooks.dart:312:13)
#16 PlatformDispatcher._drawFrame (dart:ui/platform_dispatcher.dart:419:5)
#17 _drawFrame (dart:ui/hooks.dart:283:31)
(elided 2 frames from class _AssertionError)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.4, on Debian GNU/Linux 12 (bookworm) 6.1.0-26-amd64, locale
en_US.UTF-8)
โข Flutter version 3.24.4 on channel stable at /home/vargab95/devtools/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 603104015d (12 days ago), 2024-10-24 08:01:25 -0700
โข Engine revision db49896cf2
โข Dart version 3.5.4
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /home/vargab95/Android/Sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: /var/lib/flatpak/app/com.google.AndroidStudio/current/active/files/extra/android-studio/jbr/bin/java
โข Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
โข All Android licenses accepted.
[โ] Chrome - develop for the web
โข CHROME_EXECUTABLE = /var/lib/flatpak/exports/bin/com.google.Chrome
[โ] Linux toolchain - develop for Linux desktop
โข Debian clang version 14.0.6
โข cmake version 3.25.1
โข ninja version 1.11.1
โข pkg-config version 1.8.1
[!] Android Studio (version unknown)
โข Android Studio at /var/lib/flatpak/app/com.google.AndroidStudio/current/active/files/extra/android-studio
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โ Unable to determine Android Studio version.
โข android-studio-dir = /var/lib/flatpak/app/com.google.AndroidStudio/current/active/files/extra/android-studio
โข Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[โ] VS Code (version 1.95.1)
โข VS Code at /usr/share/code
โข Flutter extension version 3.98.0
[โ] VS Code (version 1.94.2)
โข VS Code at /var/lib/flatpak/app/com.visualstudio.code/x86_64/stable/active/files/extra/vscode
โข Flutter extension can be installed from:
๐จ https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[โ] Connected device (3 available)
โข sdk gphone64 x86 64 (mobile) โข emulator-5554 โข android-x64 โข Android 14 (API 34) (emulator)
โข Linux (desktop) โข linux โข linux-x64 โข Debian GNU/Linux 12 (bookworm) 6.1.0-26-amd64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 130.0.6723.91
[โ] Network resources
โข All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| c: crash,framework,f: routes,a: error message,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Critical |
2,635,932,102 | node | Missing documentation for process.emit | ### Affected URL(s)
https://nodejs.org/api/process.html
### Description of the problem
The process page documents `process.emitWarning()`, but documentation is missing for the `process.emit()` function, which has the following signatures:
```javascript
emit(event: "beforeExit", code: number): boolean;
emit(event: "disconnect"): boolean;
emit(event: "exit", code: number): boolean;
emit(event: "rejectionHandled", promise: Promise<unknown>): boolean;
emit(event: "uncaughtException", error: Error): boolean;
emit(event: "uncaughtExceptionMonitor", error: Error): boolean;
emit(event: "unhandledRejection", reason: unknown, promise: Promise<unknown>): boolean;
emit(event: "warning", warning: Error): boolean;
emit(event: "message", message: unknown, sendHandle: unknown): this;
emit(event: Signals, signal?: Signals): boolean;
emit(
event: "multipleResolves",
type: MultipleResolveType,
promise: Promise<unknown>,
value: unknown,
): this;
emit(event: "worker", listener: WorkerListener): this;
```
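For context (an editorial note, not taken from the Node.js docs): `process` inherits from `EventEmitter`, so `process.emit()` dispatches synchronously to any registered listeners and returns `true` if at least one listener handled the event, `false` otherwise. A minimal sketch, using a hypothetical custom event name chosen only for illustration:

```javascript
// "demo-event" is a made-up event name; process.emit() works for
// custom events because process is an EventEmitter.
let received = null;
process.on('demo-event', (payload) => {
  received = payload;
});

// emit() returns true when at least one listener handled the event.
const handled = process.emit('demo-event', 'hello');
console.log(handled);  // true
console.log(received); // 'hello'

// With no listeners registered for an event, emit() returns false.
console.log(process.emit('unused-event')); // false
```

The overload list quoted above appears to match the `@types/node` type definitions (DefinitelyTyped) rather than anything on the linked docs page itself, which may explain the gap.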
Is that correct? | question,doc,process | low | Critical |